datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card
---|---|---|---|---|---|---|---|---|---
Rabe3/Egy-Conv-Unsloth | Rabe3 | 2025-04-30T21:47:43Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T21:47:40Z | null | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: messages
sequence:
- name: role
dtype: string
- name: content
dtype: string
- name: conversations
struct:
- name: content
sequence: string
- name: role
sequence: string
splits:
- name: train
num_bytes: 5136450
num_examples: 10000
download_size: 151379
dataset_size: 5136450
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SarangChouguley/TICQA | SarangChouguley | 2025-04-30T21:41:55Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T21:41:00Z | null | ---
dataset_info:
- config_name: safety_warning_recognition
features:
- name: Filename
dtype: string
- name: ground_truth
dtype: string
- name: extra_info
dtype: string
- name: full_path
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 3222145.0
num_examples: 100
download_size: 3192348
dataset_size: 3222145.0
- config_name: tools_and_components_identification
features:
- name: Filename
dtype: string
- name: ground_truth
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: full_path
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 9714061.0
num_examples: 100
download_size: 5520646
dataset_size: 9714061.0
- config_name: visual_sequence_interpretation
features:
- name: Filename
dtype: string
- name: ground_truth
dtype: string
- name: options
dtype: string
- name: question
dtype: string
- name: context
dtype: string
- name: full_path
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 12082256.0
num_examples: 50
download_size: 12034496
dataset_size: 12082256.0
configs:
- config_name: safety_warning_recognition
data_files:
- split: train
path: safety_warning_recognition/train-*
- config_name: tools_and_components_identification
data_files:
- split: train
path: tools_and_components_identification/train-*
- config_name: visual_sequence_interpretation
data_files:
- split: train
path: visual_sequence_interpretation/train-*
---
|
cchoi1/kodcode-complete_1000_qwen7b_att_iter0_att40_sol5_relabeled_dedup_assertion_errors | cchoi1 | 2025-04-30T21:37:12Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T21:37:10Z | null | ---
dataset_info:
features:
- name: mutation_id
dtype: int64
- name: task_id
dtype: string
- name: mutator_prompt
dtype: string
- name: solver_prompt
dtype: string
- name: response
dtype: string
- name: mutation_explanation
dtype: string
- name: mutation_info
dtype: string
- name: mutator_score
dtype: float64
- name: solution_scores
dtype: string
- name: solutions
dtype: string
- name: solutions_explanation
dtype: string
- name: solutions_info
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 12585270
num_examples: 1178
download_size: 2667432
dataset_size: 12585270
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_llm2_gen9_run0_W_doc1000_synt64_tot128_lr5em5_p1k_SYNLAST | dgambettaphd | 2025-04-30T21:21:19Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T21:21:12Z | null | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: dataset
dtype: string
- name: gen
dtype: int64
- name: synt
dtype: int64
- name: MPP
dtype: float64
splits:
- name: train
num_bytes: 6594612
num_examples: 13000
download_size: 3560880
dataset_size: 6594612
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.6_num-company_3_dataset_2_for_gen_5 | HungVu2003 | 2025-04-30T21:13:16Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T21:13:14Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 3877831
num_examples: 12500
download_size: 1440640
dataset_size: 3877831
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
willx0909/shelf_robot_lerobot | willx0909 | 2025-04-30T21:08:33Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"libero",
"easo",
"rlds"
] | [
"robotics"
] | 2025-04-30T21:00:19Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- libero
- easo
- rlds
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "easo",
"total_episodes": 304,
"total_frames": 70811,
"total_tasks": 1,
"total_videos": 0,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 50,
"splits": {
"train": "0:304"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.joint_angles": {
"dtype": "float32",
"shape": [
7
]
},
"observation.eef_pose": {
"dtype": "float32",
"shape": [
6
]
},
"observation.target_eef_pose": {
"dtype": "float32",
"shape": [
6
]
},
"actions": {
"dtype": "float32",
"shape": [
8
]
},
"observation.images.forward_diagonal_camera_right": {
"dtype": "image",
"shape": [
480,
640,
3
]
},
"observation.images.hand_camera_right": {
"dtype": "image",
"shape": [
480,
640,
3
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
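Given the `data_path` pattern above, each episode is stored as its own Parquet file under `data/chunk-XXX/`. As a rough illustration (not part of the original card), the files can be read with the `datasets` library, which follows the `configs` entry in the YAML header, or directly with pandas; the single-episode path below is an assumption:
```python
import pandas as pd
from datasets import load_dataset

# Load all episode parquet files declared in the card's `configs` section.
ds = load_dataset("willx0909/shelf_robot_lerobot", split="train")
print(ds.column_names)  # observation.joint_angles, observation.eef_pose, actions, ...

# Or read one episode directly, following the chunk/episode naming pattern from meta/info.json.
episode = pd.read_parquet("data/chunk-000/episode_000000.parquet")  # hypothetical local path
print(episode.columns.tolist())
```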
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
slavekroller/HTAreasoning-methodology-reasoning-trajectories | slavekroller | 2025-04-30T20:55:23Z | 0 | 0 | [
"license:cc-by-4.0",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"reasoning-datasets-competition"
] | [] | 2025-04-30T19:50:42Z | null | ---
license: cc-by-4.0
tags:
- reasoning-datasets-competition
---
# HTAreasoning Datasets: Can AI Value Life?
## HTAreasoning-methodology-reasoning-trajectories Dataset card
Part of HTAreasoning. See https://huggingface.co/datasets/slavekroller/HTAreasoning-results.
### Dataset Fields
| Field Name | Definition |
| :------------------------------------------------- | :--------- |
| `link` | link to source documents, containing full descriptions of an estimation model being assessed as well as the reasoning trajectories |
| `methodology_choice_reservation` | severity of a methodological reservation made by the assessment committee |
| `methodology_choice_class` | scope, within which a methodological choice was made by the submitter |
| `methodology_choice_submitter_reasoning` | extracted reasoning trajectory of the submitter |
| `methodology_choice_assessor_reasoning` | extracted reasoning trajectory of the assessment committee |
| `methodology_choice_assessor_reasoning_summary_AI-generated-Gemini` | AI-generated comment - not extracted directly from the source documents - augments the extracted dataset by providing a one-line summary of the methodological reservation |
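These fields can be inspected directly with the `datasets` library; a minimal sketch, assuming the default CSV configuration exposes a `train` split:
```python
from datasets import load_dataset

ds = load_dataset("slavekroller/HTAreasoning-methodology-reasoning-trajectories", split="train")
row = ds[0]
print(row["methodology_choice_class"])
print(row["methodology_choice_assessor_reasoning"])
```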
### Citation
HTAreasoning-methodology-reasoning-trajectories. HTAreasoning Datasets (2025). Slavek Roller. |
PTPReasoning/PubMedQA | PTPReasoning | 2025-04-30T20:44:19Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T20:35:32Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: options
struct:
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: answer
dtype: string
- name: answer_idx
dtype: string
splits:
- name: test
num_bytes: 1604644
num_examples: 1000
download_size: 810463
dataset_size: 1604644
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
reference: https://github.com/FreedomIntelligence/HuatuoGPT-o1/blob/main/evaluation/data/eval_data.json
|
ai2-adapt-dev/tulu-3-sft-57k-criteria-gpt4o-classified-rewritten-math | ai2-adapt-dev | 2025-04-30T20:41:46Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T20:41:40Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: dataset
dtype: string
- name: ground_truth
sequence: string
- name: openai_response
dtype: string
- name: task
dtype: string
splits:
- name: train
num_bytes: 260779124
num_examples: 57323
download_size: 144162436
dataset_size: 260779124
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
TozluLider6393/yeniyeni | TozluLider6393 | 2025-04-30T20:38:35Z | 0 | 0 | [
"task_categories:text2text-generation",
"language:tr",
"license:bsl-1.0",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"code"
] | [
"text2text-generation"
] | 2025-04-30T20:37:31Z | null | ---
license: bsl-1.0
task_categories:
- text2text-generation
language:
- tr
tags:
- code
pretty_name: furkan2
size_categories:
- 1K<n<10K
--- |
palli23/spjallromur-2x-gold | palli23 | 2025-04-30T20:31:40Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T20:31:35Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: start
dtype: float64
- name: end
dtype: float64
- name: speaker
dtype: string
- name: session
dtype: string
splits:
- name: train
num_bytes: 21493489.0
num_examples: 202
download_size: 21142619
dataset_size: 21493489.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hypaai/nv_yo_0_4_wspr | hypaai | 2025-04-30T20:29:12Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T20:16:02Z | null | ---
dataset_info:
features:
- name: input_features
sequence:
sequence:
sequence: float32
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 49803708400.0
num_examples: 51846
download_size: 6677100085
dataset_size: 49803708400.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
INFERLab/BLUED | INFERLab | 2025-04-30T20:19:00Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-04-30T20:00:25Z | null | ---
license: apache-2.0
---
---
tags:
- energy disaggregation
- non-intrusive load monitoring
- time series
- electrical load monitoring
license: unknown # License information was not explicitly stated in the paper, might need clarification.
language:
- en
pretty_name: BLUED (Building-Level fUlly-labeled dataset for Electricity Disaggregation)
---
# Dataset Card for BLUED
## Dataset Description
BLUED (Building-Level fUlly-labeled dataset for Electricity Disaggregation) is a public dataset designed for event-based Non-Intrusive Load Monitoring (NILM) research. It contains high-frequency voltage and current measurements from a single-family home in the United States over one week. The key feature of this dataset is the detailed labeling of appliance state transitions (events), providing ground truth for evaluating event-based disaggregation algorithms. The dataset aims to facilitate the development, testing, and comparison of NILM algorithms.
## Dataset Details
* **Data Collection:**
* Data was collected over one week in October 2011 from a single-family house in Pittsburgh, Pennsylvania.
* Aggregate voltage and current measurements were captured at the main distribution panel using a National Instruments DAQ (NI USB-9215A) at a sampling rate of 12 kHz. Current was measured using split-core current transformers, and voltage was measured using a voltage transformer.
* Ground truth for appliance events was collected using a combination of plug-level power meters (FireFly sensors), environmental sensors (light, sound, vibration, etc.), and circuit-level current measurements.
* Events were defined as changes in power consumption greater than 30 watts lasting at least 5 seconds (a naive detection sketch following this definition appears after this list).
* Timestamps for ground truth events were manually synchronized with the aggregate power signal via visual inspection.
* **Data Content:**
* Raw voltage (one phase) and current (two phases) waveforms sampled at 12 kHz.
* Computed active power at 60 Hz.
* A list of timestamped events, identifying the appliance and the transition type (e.g., on/off).
* Covers approximately 50 electrical appliances, though not all were active or met the event criteria during the collection week.
* Includes 2,482 labeled events in total, with 2,355 attributed to known appliances and 127 from unknown sources (clustered into potentially 11 distinct appliances). Events are split between Phase A (904 events) and Phase B (1578 events).
* **Data Format:** Raw current and voltage files, along with a list of event timestamps. Active power computed at 60Hz is also included.
* **Data Splits:** The paper presents preliminary results using the whole week but suggests future work might involve splitting into training/testing sets.
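Since events are defined purely by a power-change threshold and a minimum duration, a naive detector over the 60 Hz active-power series can be sketched as follows. This is an illustrative sketch based only on the definition above, not the authors' reference implementation; the threshold, duration, and 60 Hz rate come from the description, everything else is hypothetical:
```python
import numpy as np

def detect_events(power_60hz, threshold_w=30.0, min_duration_s=5.0, fs=60):
    """Flag step changes in active power larger than `threshold_w` watts
    that persist for at least `min_duration_s` seconds."""
    window = int(min_duration_s * fs)
    events = []
    for i in range(window, len(power_60hz) - window):
        before = np.median(power_60hz[i - window:i])  # steady level before the candidate step
        after = np.median(power_60hz[i:i + window])   # steady level after the candidate step
        delta = after - before
        if abs(delta) > threshold_w:
            # (time in seconds, power change in watts); in practice, adjacent
            # detections around the same step should be merged into one event.
            events.append((i / fs, delta))
    return events
```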
## Uses
* **Non-Intrusive Load Monitoring (NILM):** Primarily designed for developing and evaluating event-based energy disaggregation algorithms.
* **Appliance Usage Pattern Analysis:** Studying how and when different appliances are used in a residential setting.
* **Occupancy Detection:** Inferring household occupancy based on appliance usage.
* **Energy Management & Efficiency:** Developing strategies for residential energy savings.
* **Anomaly Detection & Fault Diagnostics:** Identifying unusual appliance behavior or potential faults.
* **Assisted Living Applications:** Monitoring activities of daily living through appliance usage.
## Dataset Limitations
* **Duration:** One week of data may not capture the usage patterns of all appliances, especially seasonal ones (like the air conditioner) or those used infrequently (like the dryer).
* **Sensor Frequency Limitation:** The current sensors used had a cutoff frequency around 300 Hz, limiting the analysis of higher-frequency harmonics (beyond the 5th harmonic).
* **Incomplete Ground Truth:** Approximately 5% of events detected in the aggregate signal could not be attributed to the monitored appliances and are labeled as "unknown". Some appliances (~25%) had no registered events meeting the criteria during the collection week.
* **Single Home:** Data represents only one specific home and its occupants' behavior.
## Citation
```bibtex
@inproceedings{anderson2012blued,
title={BLUED: A fully labeled public dataset for event-based non-intrusive load monitoring research},
author={Anderson, Kyle and Ocneanu, Adrian and Benitez, Diego and Carlson, Derrick and Rowe, Anthony and Berg{\'e}s, Mario},
booktitle={Proceedings of the 2nd ACM SIGKDD international workshop on data mining applications in sustainability},
pages={1--8},
year={2012},
organization={ACM}
}
``` |
dgambettaphd/D_llm2_gen8_run0_W_doc1000_synt64_tot128_lr5em5_p1k_SYNLAST | dgambettaphd | 2025-04-30T20:16:52Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T20:16:49Z | null | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: dataset
dtype: string
- name: gen
dtype: int64
- name: synt
dtype: int64
- name: MPP
dtype: float64
splits:
- name: train
num_bytes: 6132841
num_examples: 12000
download_size: 3340631
dataset_size: 6132841
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
microsoft/CoSAlign | microsoft | 2025-04-30T20:15:48Z | 0 | 1 | [
"license:cc-by-nc-4.0",
"region:us"
] | [] | 2025-04-30T20:15:48Z | null | ---
license: cc-by-nc-4.0
---
|
Ahmedaldysty/OPCUA-packets-sharegpt2 | Ahmedaldysty | 2025-04-30T20:08:52Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T20:08:48Z | null | ---
dataset_info:
features:
- name: from_system
dtype: string
- name: from_human
dtype: string
- name: from_gpt
dtype: string
splits:
- name: train
num_bytes: 71359570
num_examples: 100000
download_size: 6184184
dataset_size: 71359570
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
msobroza/cg1 | msobroza | 2025-04-30T20:07:10Z | 0 | 0 | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:text-retrieval",
"task_categories:question-answering",
"task_categories:sentence-similarity",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | [
"text-generation",
"text2text-generation",
"text-retrieval",
"question-answering",
"sentence-similarity"
] | 2025-04-30T20:07:08Z | null | ---
size_categories: n<1K
task_categories:
- text-generation
- text2text-generation
- text-retrieval
- question-answering
- sentence-similarity
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: response
dtype: 'null'
- name: positive_retrieval
dtype: string
- name: negative_retrieval
dtype: string
- name: positive_reranking
dtype: 'null'
- name: negative_reranking
dtype: 'null'
splits:
- name: train
num_bytes: 89634
num_examples: 100
download_size: 54710
dataset_size: 89634
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for cg1
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/msobroza/cg1/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/msobroza/cg1/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"context": "Cardif Libert\u00e9s Emprunteur\n\nCotisations fixes Janvier 2022\n\nCardif Libert\u00e9s Emprunteur\n\nNOTICE Conventions d\u2019assurance collective n\u00b0 2827/736",
"negative_reranking": null,
"negative_retrieval": "Cardif Libert\u00e9s Emprunteur has fixed contributions in January 2022 regarding the convention n\u00b0 1234/567 for the collective insurance of the employee\u0027s car.",
"positive_reranking": null,
"positive_retrieval": "What are the fixed contributions of Cardif Libert\u00e9s Emprunteur in January 2022?",
"question": "What are the fixed contributions of Cardif Libert\u00e9s Emprunteur in January 2022?",
"response": null
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("msobroza/cg1", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("msobroza/cg1")
```
</details>
|
urbas/mbpp100_w_backdoor_fixed | urbas | 2025-04-30T19:43:16Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T19:43:14Z | null | ---
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
dtype: string
- name: test_list
sequence: string
- name: backdoor
dtype: bool
splits:
- name: train
num_bytes: 29031
num_examples: 100
download_size: 16947
dataset_size: 29031
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mervinpraison/harupfall-accelerometer-images-actual | mervinpraison | 2025-04-30T19:24:13Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T18:45:58Z | null | ---
dataset_info:
features:
- name: sequence
dtype: string
- name: sensor
dtype: string
- name: raw_data
dtype: string
- name: main_label
dtype: string
- name: extracted_labels
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 466088404.0
num_examples: 930
download_size: 95375237
dataset_size: 466088404.0
---
# Dataset Card for "harupfall-accelerometer-images-actual"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mervinpraison/harupfall-accelerometer-data-actual | mervinpraison | 2025-04-30T19:22:09Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T19:21:55Z | null | ---
dataset_info:
features:
- name: sequence
dtype: string
- name: sensor
dtype: string
- name: raw_data
dtype: string
- name: main_label
dtype: string
- name: extracted_labels
dtype: string
splits:
- name: train
num_bytes: 9458457
num_examples: 930
download_size: 0
dataset_size: 9458457
---
# Dataset Card for "harupfall-accelerometer-data-actual"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HungVu2003/opt-350m_beta_1.0_alpha_0.6_num-company_3_dataset_1_for_gen_5 | HungVu2003 | 2025-04-30T19:08:48Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T19:08:47Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 3586430
num_examples: 12500
download_size: 1877336
dataset_size: 3586430
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.6_num-company_3_dataset_0_for_gen_5 | HungVu2003 | 2025-04-30T19:00:06Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T19:00:05Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 6112706
num_examples: 12500
download_size: 2090382
dataset_size: 6112706
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Harini4623/Vitalik_Buterin | Harini4623 | 2025-04-30T19:00:02Z | 0 | 0 | [
"task_categories:question-answering",
"task_categories:table-question-answering",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"region:us",
"code"
] | [
"question-answering",
"table-question-answering"
] | 2025-04-30T18:43:50Z | null | ---
license: mit
task_categories:
- question-answering
- table-question-answering
language:
- en
tags:
- code
size_categories:
- 1K<n<10K
---
# Vitalik Buterin Agent
This workspace contains the following file:
- **Vitalik Buterin Agent.xlsx**: An Excel file with the pertinent data for the Vitalik Buterin Agent project.
## Getting Started
Open the `.xlsx` file with Microsoft Excel or any compatible spreadsheet software.
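To work with the data programmatically instead, a minimal sketch using pandas (the file name comes from the list above; the sheet layout is not documented here, so inspect it first):
```python
import pandas as pd

# Read the workbook shipped with this repo (requires the openpyxl engine for .xlsx).
df = pd.read_excel("Vitalik Buterin Agent.xlsx")
print(df.columns.tolist())
print(df.head())
```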
## License
MIT License |
Raja2/processed_data | Raja2 | 2025-04-30T18:44:57Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T18:44:26Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: temp_rag
dtype: string
- name: solution
dtype: string
- name: attempt
dtype: string
- name: thinking_trajectories
dtype: string
splits:
- name: train
num_bytes: 4650004
num_examples: 262
download_size: 1959520
dataset_size: 4650004
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
arclabmit/eval_koch_act_boxbin_model | arclabmit | 2025-04-30T18:14:57Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-04-30T18:14:40Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "koch",
"total_episodes": 10,
"total_frames": 5630,
"total_tasks": 1,
"total_videos": 20,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:10"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.front": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.overhead": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
iyosha-huji/stressEval | iyosha-huji | 2025-04-30T18:12:23Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T18:08:55Z | null | ---
dataset_info:
features:
- name: transcription_id
dtype: string
- name: transcription
dtype: string
- name: description
dtype: string
- name: intonation
dtype: string
- name: interpretation_id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: metadata
struct:
- name: gender
dtype: string
- name: language_code
dtype: string
- name: sample_rate_hertz
dtype: int64
- name: voice_name
dtype: string
- name: possible_answers
sequence: string
- name: label
dtype: int64
- name: stress_pattern
struct:
- name: binary
sequence: int64
- name: indices
sequence: int64
- name: words
sequence: string
- name: audio_lm_prompt
dtype: string
splits:
- name: test
num_bytes: 29451897.32142857
num_examples: 218
download_size: 22754357
dataset_size: 29451897.32142857
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
punwaiw/o1-verifiable | punwaiw | 2025-04-30T18:12:13Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T18:05:29Z | null | ---
dataset_info:
features:
- name: idx
dtype: string
- name: question
list:
- name: content
dtype: string
- name: role
dtype: string
- name: reasoning_content
dtype: string
- name: text
dtype: string
- name: ground_truth
dtype: 'null'
splits:
- name: train
num_bytes: 128583574
num_examples: 23658
download_size: 56011605
dataset_size: 128583574
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_llm2_gen6_run0_W_doc1000_synt64_tot128_lr5em5_p1k_SYNLAST | dgambettaphd | 2025-04-30T18:11:01Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T18:10:56Z | null | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: dataset
dtype: string
- name: gen
dtype: int64
- name: synt
dtype: int64
- name: MPP
dtype: float64
splits:
- name: train
num_bytes: 5188613
num_examples: 10000
download_size: 2896477
dataset_size: 5188613
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AdversarialRLHF/rloo_pythia410m_tldr6.9b_rm410mdata_mergedsft_prefix_kl0.005_52_eval-dataset | AdversarialRLHF | 2025-04-30T18:10:55Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T18:10:47Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: subreddit
dtype: string
- name: title
dtype: string
- name: post
dtype: string
- name: summary
dtype: string
- name: query_token
sequence: int64
- name: query
dtype: string
- name: reference_response
dtype: string
- name: reference_response_token
sequence: int64
- name: reference_response_token_len
dtype: int64
- name: query_reference_response
dtype: string
- name: query_reference_response_token
sequence: int64
- name: query_reference_response_token_response_label
sequence: int64
- name: query_reference_response_token_len
dtype: int64
- name: generations_rloo_pythia410m_tldr6.9b_rm410mdata_mergedsft_prefix_kl0.005
dtype: string
splits:
- name: validation
num_bytes: 128494549
num_examples: 6447
download_size: 33809015
dataset_size: 128494549
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
|
erdem-erdem/24-game-qwq-8k | erdem-erdem | 2025-04-30T18:10:34Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T18:10:32Z | null | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 71621687
num_examples: 7980
download_size: 31849699
dataset_size: 71621687
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
soynade-research/Wolof-Books | soynade-research | 2025-04-30T17:56:59Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T17:56:58Z | null | ---
dataset_info:
features:
- name: url
dtype: string
- name: content
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 1355156
num_examples: 1151
download_size: 806978
dataset_size: 1355156
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
shylee/eval_temp | shylee | 2025-04-30T17:52:47Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-04-30T17:52:39Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 2,
"total_frames": 1706,
"total_tasks": 1,
"total_videos": 6,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.FrontCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.TopCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.WristCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
johnny-katsa/human-edit | johnny-katsa | 2025-04-30T17:51:26Z | 149 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T17:44:35Z | null | ---
dataset_info:
config_name: human-edit-train
features:
- name: input_image
dtype: image
- name: edit_prompt
dtype: string
- name: edited_image
dtype: image
splits:
- name: train
num_bytes: 9471452169.238
num_examples: 5751
download_size: 9420452305
dataset_size: 9471452169.238
configs:
- config_name: human-edit-train
data_files:
- split: train
path: human-edit-train/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.4_num-company_3_dataset_2_for_gen_11 | HungVu2003 | 2025-04-30T17:50:08Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T17:50:05Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 3210743
num_examples: 12498
download_size: 1089057
dataset_size: 3210743
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
shylee/eval_temp2 | shylee | 2025-04-30T17:43:56Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-04-30T17:43:50Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 840,
"total_tasks": 1,
"total_videos": 3,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.FrontCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.TopCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.WristCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
twinkle-ai/tw-function-call-reasoning-10k | twinkle-ai | 2025-04-30T15:57:27Z | 3 | 2 | [
"task_categories:text-generation",
"language:zh",
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"Taiwan",
"R.O.C",
"zh-tw",
"function-calling",
"twinkle.ai",
"tool"
] | [
"text-generation"
] | 2025-04-30T06:23:19Z | 2 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: query
dtype: string
- name: tools
dtype: string
- name: query_zhtw
dtype: string
- name: think
dtype: string
- name: answer
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 60989042.60824564
num_examples: 10000
download_size: 24870378
dataset_size: 60989042.60824564
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-4.0
task_categories:
- text-generation
language:
- zh
- en
tags:
- Taiwan
- R.O.C
- zh-tw
- function-calling
- twinkle.ai
- tool
pretty_name: >-
Traditional Chinese Dataset for Function Calling with Chain-of-Thought
Reasoning
size_categories:
- 1K<n<10K
---
# Dataset Card for tw-function-call-reasoning-10k
<!-- Provide a quick summary of the dataset. -->

This dataset is a Traditional Chinese function-calling dataset, translated from [AymanTarig/function-calling-v0.2-with-r1-cot](https://huggingface.co/datasets/AymanTarig/function-calling-v0.2-with-r1-cot), which is itself a corrected version of [Salesforce/xlam-function-calling-60k](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k). After machine translation with a language model and manual revision, it aims to provide a high-quality Traditional Chinese corpus for tool use.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
**tw-function-call-reasoning-10k** is a Traditional Chinese dataset designed for training the tool-use (function calling) capability of language models. Its content is derived from [AymanTarig/function-calling-v0.2-with-r1-cot](https://huggingface.co/datasets/AymanTarig/function-calling-v0.2-with-r1-cot), which is in turn a corrected version of [Salesforce/xlam-function-calling-60k](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k). We translated the data into Traditional Chinese with a language model while preserving the original Chain-of-Thought (CoT) reasoning structure.
This dataset can serve as a foundation for future expansion of **Traditional Chinese function-calling corpora**, and helps strengthen the reasoning and tool-integration abilities of LLMs in real-world applications.
- **Curated by:** [Minyi Chen](https://huggingface.co/minyichen)
- **Funded by:** [APMIC](https://www.apmic.ai/)
- **Shared by:** [Minyi Chen](https://huggingface.co/minyichen)
- **Language(s) (NLP):** Traditional Chinese & English
- **License:** Creative Commons Attribution 4.0
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** [twinkle-ai/tw-function-call-reasoning-10k](https://huggingface.co/datasets/twinkle-ai/tw-function-call-reasoning-10k)
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
- **Training tool-use capability of language models:** can be used for instruction tuning to improve a model's ability to accurately select tools (Tool Selection) and generate structured inputs (Tool Input) in dialogue.
- **Chain-of-Thought reasoning construction:** the dataset preserves step-by-step thinking and derivation processes, making it suitable for training models with multi-step logical reasoning ability.
- **Foundation for Traditional Chinese instruction corpora:** an important starting point for building larger-scale Traditional Chinese tool-use datasets in the future.
- **Simulation of LLM agent training scenarios:** can be used to simulate the interaction context and structure of an agent calling API tools or external functions.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
- **Generating improper tool calls:** the dataset assumes outputs follow a fixed structure and is not suitable for open-ended, unconstrained API-calling contexts.
- **Industrial applications with extremely high data-quality requirements:** although semantics and format have been preserved as much as possible, this data is translated from an English corpus and some sentences may carry pragmatic or tonal bias; direct deployment in high-risk applications (e.g. medical, legal) is not recommended.
- **Training biased or harmful tool-selection models:** the dataset contains no handling of sensitive topics or potentially harmful tool operations, and is not suitable for developing systems that execute unreviewed actions.
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
> ⚠️ *Note*: the messages field follows the *Hermes* format.
```json
{
'id', # unique sample ID
'query', # English task instruction (original input)
'tools', # list of available tools (JSON structure with names, parameter definitions, etc.)
'query_zhtw', # Traditional Chinese translation of the instruction
'think', # the model's reasoning process (Traditional Chinese)
'answer', # expected tool call and parameters (JSON format)
'messages' # full conversation history (roles and message content, used for SFT fine-tuning)
}
```
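To inspect these fields quickly, the dataset can be loaded with the `datasets` library; a minimal sketch, with the repo id taken from this card:
```python
from datasets import load_dataset

ds = load_dataset("twinkle-ai/tw-function-call-reasoning-10k", split="train")
print(ds[0]["query_zhtw"])  # Traditional Chinese instruction
print(ds[0]["answer"])      # expected tool call(s) as JSON text
```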
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
This dataset was built to fill a serious gap in function-calling training data for Traditional Chinese. Although English corpora (such as [Salesforce/xlam-function-calling-60k](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k)) have laid initial groundwork for tool calling, the lack of Traditional Chinese data limits how well Chinese large language models (LLMs) generalize on such tasks.
We therefore took [function-calling-v0.2-with-r1-cot](https://huggingface.co/datasets/AymanTarig/function-calling-v0.2-with-r1-cot) released by AymanTarig as the blueprint, translated it into Traditional Chinese with a language model, and preserved the Chain-of-Thought reasoning content, so that it can be widely applied to Traditional Chinese LLM experiments such as instruction tuning, tool selection, and agent reasoning.
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
This dataset is based on the English function-calling data provided by [AymanTarig/function-calling-v0.2-with-r1-cot](https://huggingface.co/datasets/AymanTarig/function-calling-v0.2-with-r1-cot), automatically translated into Traditional Chinese with a language model and then manually cleaned.
The processing pipeline includes:
- selecting samples from the original English dataset that contain tool calling and reasoning steps (we sampled 10k examples from [AymanTarig/function-calling-v0.2-with-r1-cot](https://huggingface.co/datasets/AymanTarig/function-calling-v0.2-with-r1-cot));
- translating the user, assistant, tool_input, and similar fields while keeping the original data format;
- manually reviewing part of the translations to ensure fluency and logical consistency;
- keeping the final format identical to the original data, so it can be trained on and compared with the English version in parallel.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This dataset is translated from English data; even though a high-quality model was used for the semantic conversion, the following limitations may remain:
- **Language-style bias:** the original data follows English logic and conversational style, so translated expressions may not fit Chinese contexts or may have overly direct, unnatural word order.
- **Sensitivity of the tool-input format:** the data contains many JSON structures in the tool_input field; although structural correctness has been preserved, validation and cleaning before training are still recommended to avoid format errors caused by special characters.
- **Semantic accuracy depends on model output:** the translation relies on an automated model and may drop details of the original reasoning or reflect it incompletely, which is a risk for applications that need precise semantic understanding.
- **Context skewed toward developer tool tasks:** the data focuses on simulated tool-use scenarios such as weather queries, calculation, and conversion, and may be unsuitable for open-ended, emotion-oriented, or culturally grounded dialogue modeling.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
- Suitable for structured tasks and agent training, such as tool selection in chat and API task execution.
- Not recommended for direct use in unsupervised learning or free-text generation tasks, since the data has a strictly structured format and emphasizes logical reasoning and tool-use scenarios.
- Validate and correct the data format before training, especially the tool_input field.
- Pair with human evaluation to verify translation quality, especially when deploying to applications that require high semantic consistency and linguistic or cultural sensitivity.
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
```bibtex
@misc{twinkle2024functioncalling,
title = {twinkle-ai/tw-function-call-reasoning-10k: A Traditional Chinese Dataset for Function Calling with Chain-of-Thought Reasoning},
author = {Twinkle AI},
year = {2025},
note = {Available at: \url{https://huggingface.co/datasets/twinkle-ai/tw-function-call-reasoning-10k}; Translated from AymanTarig/function-calling-v0.2-with-r1-cot}
}
```
## Dataset Card Authors
[Twinkle AI](https://huggingface.co/twinkle-ai)
## Dataset Card Contact
[Twinkle AI](https://huggingface.co/twinkle-ai) |
ttn1410/Volatility_smr | ttn1410 | 2025-04-30T13:10:43Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-29T17:13:56Z | null | ---
dataset_info:
features:
- name: reports
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 65730965
num_examples: 35370
download_size: 10943501
dataset_size: 65730965
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
devrev/shanay-demo | devrev | 2025-04-30T13:06:55Z | 0 | 0 | [
"language:en",
"license:mit",
"region:us",
"curator"
] | [] | 2025-04-30T13:06:47Z | null | ---
language: en
license: mit
tags:
- curator
---
<a href="https://github.com/bespokelabsai/curator/">
<img src="https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k/resolve/main/made_with_curator.png" alt="Made with Curator" width=200px>
</a>
## Dataset card for shanay-demo
This dataset was made with [Curator](https://github.com/bespokelabsai/curator/).
## Dataset details
A sample from the dataset:
```python
{
"natural_language_query": "Show me all open tickets",
"search_query": {
"filter": "state:open",
"query": ""
},
"complexity_level": 1
}
```
## Loading the dataset
You can load this dataset using the following code:
```python
from datasets import load_dataset
dataset = load_dataset("devrev/shanay-demo")
```
|
jaeyong2/Magpie-Qwen-8B-Ko | jaeyong2 | 2025-04-30T13:04:52Z | 0 | 0 | [
"region:us"
] | [] | 2025-04-30T13:04:50Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 716124
num_examples: 1000
download_size: 326802
dataset_size: 716124
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZeynepAltundal/orca-math-word-problems-tr-merged-deduplicated | ZeynepAltundal | 2025-04-30T13:04:42Z | 0 | 0 | [
"region:us"
] | [] | 2025-04-30T13:04:38Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2952805.5345953004
num_examples: 4029
download_size: 1392149
dataset_size: 2952805.5345953004
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kwangchaeko/koch_test | kwangchaeko | 2025-04-30T13:04:39Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot",
"koch",
"tutorial"
] | [
"robotics"
] | 2025-04-30T13:04:26Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- koch
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "koch",
"total_episodes": 2,
"total_frames": 1685,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
4
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
4
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
shahidul034/filtered_rare_speciesV2 | shahidul034 | 2025-04-30T13:02:51Z | 0 | 0 | [
"region:us"
] | [] | 2025-04-30T12:59:58Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: kingdom
dtype: string
- name: phylum
dtype: string
- name: class
dtype: string
- name: order
dtype: string
- name: family
dtype: string
- name: genus
dtype: string
- name: species
dtype: string
- name: sciName
dtype: string
- name: common
dtype: string
splits:
- name: train
num_bytes: 3385900936.350934
num_examples: 9586
- name: test
num_bytes: 878703187.1200659
num_examples: 2397
download_size: 4321632925
dataset_size: 4264604123.4709997
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
ttn1410/Economic_smr | ttn1410 | 2025-04-30T13:02:15Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-29T17:21:40Z | null | ---
dataset_info:
features:
- name: reports
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 83253389
num_examples: 37530
download_size: 5006839
dataset_size: 83253389
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
willnorris/my-dataset-12 | willnorris | 2025-04-30T12:59:39Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-04-30T12:43:21Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 472,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.images.cam1": {
"dtype": "video",
"shape": [
480,
640,
3
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam2": {
"dtype": "video",
"shape": [
480,
640,
3
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": {
"motors": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll",
"gripper"
]
}
},
"action": {
"dtype": "float32",
"shape": [
6
],
"names": {
"motors": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll",
"gripper"
]
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"next.done": {
"dtype": "bool",
"shape": [
1
]
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
Trouvere/hw_nlp_lab4 | Trouvere | 2025-04-30T12:58:21Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-04-30T12:32:35Z | null | ---
license: apache-2.0
---
|
SayantanJoker/Shrutilipi_Hindi_resampled_44100_merged_1_quality_metadata | SayantanJoker | 2025-04-30T12:56:47Z | 0 | 0 | [
"region:us"
] | [] | 2025-04-30T12:56:45Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: file_name
dtype: string
- name: utterance_pitch_mean
dtype: float32
- name: utterance_pitch_std
dtype: float32
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: string
- name: phonemes
dtype: string
- name: stoi
dtype: float64
- name: si-sdr
dtype: float64
- name: pesq
dtype: float64
- name: noise
dtype: string
- name: reverberation
dtype: string
- name: speech_monotony
dtype: string
- name: sdr_noise
dtype: string
- name: pesq_speech_quality
dtype: string
splits:
- name: train
num_bytes: 24644471
num_examples: 50000
download_size: 8345109
dataset_size: 24644471
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Minuskid/AndroidControl_700samples_qwen2_5vl_filtered | Minuskid | 2025-04-30T12:56:15Z | 85 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-23T03:36:24Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: images
sequence: image
- name: problem
dtype: string
- name: answer
dtype: string
- name: image_size
sequence: int64
splits:
- name: train
num_bytes: 214618455.0
num_examples: 634
- name: validation
num_bytes: 49971603.0
num_examples: 158
download_size: 264178935
dataset_size: 264590058.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.4_num-company_3_dataset_1_for_gen_9 | HungVu2003 | 2025-04-30T12:54:22Z | 0 | 0 | [
"region:us"
] | [] | 2025-04-30T12:54:21Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2824257
num_examples: 12498
download_size: 1538647
dataset_size: 2824257
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
EYEDOL/mozilla_commonvoice_naijaYoruba1_preprocessed_train_batch_5 | EYEDOL | 2025-04-30T12:53:56Z | 0 | 0 | [
"region:us"
] | [] | 2025-04-30T12:50:26Z | null | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: input_length
dtype: int64
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
- name: labels_length
dtype: int64
splits:
- name: train
num_bytes: 13939154594.875
num_examples: 12961
download_size: 3097334655
dataset_size: 13939154594.875
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.4_num-company_3_dataset_0_for_gen_9 | HungVu2003 | 2025-04-30T12:51:04Z | 0 | 0 | [
"region:us"
] | [] | 2025-04-30T12:51:03Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 4275556
num_examples: 12498
download_size: 1413562
dataset_size: 4275556
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dangdangde/lgb_data_2_label | dangdangde | 2025-04-30T12:46:49Z | 0 | 0 | [
"region:us"
] | [] | 2025-04-30T12:27:45Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: label_id
dtype: int64
- name: language
dtype: string
- name: unsloth/Qwen2.5-14B-Instruct-bnb-4bit_label_1
dtype: float64
- name: unsloth/Qwen2.5-14B-Instruct-bnb-4bit_label_2
dtype: float64
- name: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit_label_1
dtype: float64
- name: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit_label_2
dtype: float64
- name: unsloth/gemma-2-9b-it-bnb-4bit_label_1
dtype: float64
- name: unsloth/gemma-2-9b-it-bnb-4bit_label_2
dtype: float64
- name: unsloth/mistral-7b-instruct-v0.3-bnb-4bit_label_1
dtype: float64
- name: unsloth/mistral-7b-instruct-v0.3-bnb-4bit_label_2
dtype: float64
- name: ds
dtype: string
splits:
- name: lgb_data_2_label
num_bytes: 13127110
num_examples: 63512
download_size: 6141595
dataset_size: 13127110
configs:
- config_name: default
data_files:
- split: lgb_data_2_label
path: data/lgb_data_2_label-*
---
|
Dans-DiscountModels/pretokenization-test-3 | Dans-DiscountModels | 2025-04-30T12:37:48Z | 37 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-15T07:19:05Z | null | ---
dataset_info:
features:
- name: input_ids
sequence: int64
- name: attention_mask
sequence: int64
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 43247085360
num_examples: 1199994
download_size: 4962409356
dataset_size: 43247085360
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Ayushnangia/MetaMathFewshot-4to8k | Ayushnangia | 2025-04-30T12:36:48Z | 0 | 0 | [
"region:us"
] | [] | 2025-04-30T12:36:24Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 473296171
num_examples: 10000
download_size: 34058968
dataset_size: 473296171
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
EYEDOL/mozilla_commonvoice_naijaYoruba1_preprocessed_train_batch_4 | EYEDOL | 2025-04-30T12:36:01Z | 0 | 0 | [
"region:us"
] | [] | 2025-04-30T12:33:00Z | null | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: input_length
dtype: int64
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
- name: labels_length
dtype: int64
splits:
- name: train
num_bytes: 13926934672.75
num_examples: 12962
download_size: 3078372126
dataset_size: 13926934672.75
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ttn1410/Momentum_smr | ttn1410 | 2025-04-30T12:29:30Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-29T17:17:01Z | null | ---
dataset_info:
features:
- name: reports
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 101168344
num_examples: 34110
download_size: 15681200
dataset_size: 101168344
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ttn1410/Consumer_smr | ttn1410 | 2025-04-30T12:29:22Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-29T17:19:42Z | null | ---
dataset_info:
features:
- name: reports
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 89754791
num_examples: 37290
download_size: 5531728
dataset_size: 89754791
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
doublesizebed/multilingual_ms_en | doublesizebed | 2025-04-30T12:28:46Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-04-30T11:54:13Z | null | ---
license: apache-2.0
dataset_info:
features:
- name: audio_filename
dtype: string
- name: prompt
dtype: string
- name: transcription
dtype: string
- name: gender
dtype: string
- name: audio_filepath
dtype: audio
splits:
- name: train
num_bytes: 12993505394.655
num_examples: 247481
- name: test
num_bytes: 975202.0
num_examples: 20
download_size: 13252814519
dataset_size: 12994480596.655
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
shylee/eval_DP_so100_gauze_temp | shylee | 2025-04-30T12:27:24Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-04-30T12:27:18Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 633,
"total_tasks": 1,
"total_videos": 3,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.FrontCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.TopCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.WristCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
gsarti/qe4pe | gsarti | 2025-04-30T12:23:30Z | 830 | 4 | [
"task_categories:translation",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"language_creators:expert-generated",
"source_datasets:Unbabel/TowerEval-Data-v0.1",
"language:en",
"language:it",
"language:nl",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2503.03044",
"region:us",
"machine-translation",
"quality-estimation",
"post-editing",
"translation",
"behavioral-data",
"multidimensional-quality-metric",
"mqm",
"comet",
"qe"
] | [
"translation"
] | 2024-09-28T08:48:51Z | null | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
- expert-generated
language:
- en
- it
- nl
license:
- apache-2.0
size_categories:
- 10K<n<100K
source_datasets:
- Unbabel/TowerEval-Data-v0.1
task_categories:
- translation
pretty_name: qe4pe
tags:
- machine-translation
- quality-estimation
- post-editing
- translation
- behavioral-data
- multidimensional-quality-metric
- mqm
- comet
- qe
configs:
- config_name: main
data_files:
- split: train
path: task/main/processed_main.csv
- config_name: pretask
data_files:
- split: train
path: task/pretask/processed_pretask.csv
- config_name: posttask
data_files:
- split: train
path: task/posttask/processed_posttask.csv
- config_name: oracle_pe
data_files:
- split: train
path: setup/highlights/oracle/processed_oracle_pe.csv
- config_name: pretask_questionnaire
data_files:
- split: train
path: questionnaires/pretask_results.csv
- config_name: posttask_highlight_questionnaire
data_files:
- split: train
path: questionnaires/posttask_highlight_results.csv
- config_name: posttask_no_highlight_questionnaire
data_files:
- split: train
path: questionnaires/posttask_no_highlight_results.csv
---
# Quality Estimation for Post-Editing (QE4PE)
*For more details on QE4PE, see our [paper](https://huggingface.co/papers/2503.03044) and our [Github repository](https://github.com/gsarti/qe4pe)*
## Dataset Description
- **Source:** [Github](https://github.com/gsarti/qe4pe)
- **Paper:** [Arxiv](https://huggingface.co/papers/2503.03044)
- **Point of Contact:** [Gabriele Sarti](mailto:[email protected])
[Gabriele Sarti](https://gsarti.com) • [Vilém Zouhar](https://vilda.net/) • [Grzegorz Chrupała](https://grzegorz.chrupala.me/) • [Ana Guerberof Arenas](https://scholar.google.com/citations?user=i6bqaTsAAAAJ) • [Malvina Nissim](https://malvinanissim.github.io/) • [Arianna Bisazza](https://www.cs.rug.nl/~bisazza/)
<p float="left">
<img src="https://github.com/gsarti/qe4pe/blob/main/figures/highlevel_qe4pe.png?raw=true" alt="QE4PE annotation pipeline" width=400/>
</p>
>Word-level quality estimation (QE) detects erroneous spans in machine translations, which can direct and facilitate human post-editing. While the accuracy of word-level QE systems has been assessed extensively, their usability and downstream influence on the speed, quality and editing choices of human post-editing remain understudied. Our QE4PE study investigates the impact of word-level QE on machine translation (MT) post-editing in a realistic setting involving 42 professional post-editors across two translation directions. We compare four error-span highlight modalities, including supervised and uncertainty-based word-level QE methods, for identifying potential errors in the outputs of a state-of-the-art neural MT model. Post-editing effort and productivity are estimated by behavioral logs, while quality improvements are assessed by word- and segment-level human annotation. We find that domain, language and editors' speed are critical factors in determining highlights' effectiveness, with modest differences between human-made and automated QE highlights underlining a gap between accuracy and usability in professional workflows.
### Dataset Summary
This dataset provides convenient access to the processed `pretask`, `main` and `posttask` splits and to the questionnaires of the QE4PE study. A sample of challenging documents extracted from WMT23 evaluation data was machine-translated from English to Italian and Dutch using [NLLB 3.3B](https://huggingface.co/facebook/nllb-200-3.3B), and post-edited by 12 translators per direction across four highlighting modalities that employ different word-level quality estimation (QE) strategies to present translators with potential errors during editing. Additional details are provided in the [main task readme](./task/main/README.md) and in our paper. During post-editing, behavioral data (keystrokes, pauses and editing times) were collected using the [GroTE](https://github.com/gsarti/grote) online platform. For the main task, a subset of the data was annotated with Multidimensional Quality Metrics (MQM) by professional annotators.
We publicly release the granular editing logs alongside the processed dataset to foster new research on the usability of word-level QE strategies in modern post-editing workflows.
### News 📢
**March 2025**: The QE4PE paper is available on [Arxiv](https://huggingface.co/papers/2503.03044).
**January 2025**: MQM annotations are now available for the `main` task.
**October 2024**: The QE4PE dataset is released on the HuggingFace Hub! 🎉
### Repository Structure
The repository is organized as follows:
```shell
qe4pe/
├── questionnaires/ # Configs and results for pre- and post-task questionnaires for translators
│ ├── pretask_results.csv # Results of the pretask questionnaire, corresponding to the `pretask_questionnaire` configuration
│ ├── posttask_highlight_results.csv # Results of the posttask questionnaire for highlighted modalities, corresponding to the `posttask_highlight_questionnaire` configuration
│ ├── posttask_no_highlight_results.csv # Results of the posttask questionnaire for the `no_highlight` modality, corresponding to the `posttask_no_highlight_questionnaire` configuration
│ └── ... # Configurations reporting the exact questionnaires questions and options.
├── setup/
│ ├── highlights/ # Outputs of word-level QE strategies used to setup highlighted spans in the tasks
│ ├── qa/ # MQM/ESA annotations for the main task
│ ├── processed/ # Intermediate outputs of the selection process for the main task
│ └── wmt23/ # Original collection of WMT23 sources and machine-translated outputs
└── task/
├── example/ # Example folder with task structure
├── main/ # Main task data, logs, outputs and guidelines
│ ├── ...
│ ├── processed_main.csv # Processed main task data, corresponds to the `main` configuration
│ └── README.md # Details about the main task
├── posttask/ # Posttask task data, logs, outputs and guidelines
│ ├── ...
│ ├── processed_main.csv # Processed posttask task data, corresponds to the `posttask` configuration
│ └── README.md # Details about the post-task
└── pretask/ # Pretask data, logs, outputs and guidelines
├── ...
├── processed_pretask.csv # Processed pretask data, corresponds to the `pretask` configuration
└── README.md # Details about the pretask
```
### Languages
The language data of QE4PE is in English (BCP-47 `en`), Italian (BCP-47 `it`) and Dutch (BCP-47 `nl`).
## Dataset Structure
### Data Instances
The dataset contains three task configurations — `pretask`, `main` and `posttask` — plus auxiliary configurations for the oracle post-edits and the questionnaires. `main` contains the full data collected during the main task and analyzed in our experiments. `pretask` contains the data collected in the initial verification phase before the main task, in which all translators worked on texts highlighted in the `supervised` modality. `posttask` contains the data collected in the final phase, in which all translators worked on texts in the `no_highlight` modality.
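Each configuration can be loaded directly with the 🤗 Datasets library; the snippet below is a minimal sketch assuming a standard `datasets` installation, with configuration names taken from the YAML header above.
```python
from datasets import load_dataset  # pip install datasets

# Processed main task data: one row per machine-translated and post-edited segment.
main = load_dataset("gsarti/qe4pe", "main", split="train")

# The other configurations follow the same pattern, e.g.:
pretask = load_dataset("gsarti/qe4pe", "pretask", split="train")
questionnaire = load_dataset("gsarti/qe4pe", "pretask_questionnaire", split="train")

print(main[0]["unit_id"], main[0]["highlight_modality"])
```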
### Data Fields
A single entry in the dataframe represents a segment (roughly a sentence) that was machine-translated and post-edited by a professional translator. The following fields are contained in the training set:
|Field |Description |
|------------------------|-------------------------------------------------------------------------------------------------------------------------------------|
| **Identification** | |
|`unit_id` | The full entry identifier. Format: `qe4pe-{task_id}-{src_lang}-{tgt_lang}-{doc_id}-{segment_in_doc_id}-{translator_main_task_id}`. |
|`wmt_id` | Identifier of the sentence in the original [WMT23](./data/setup/wmt23/wmttest2023.eng.jsonl) dataset. |
|`wmt_category` | Category of the document: `biomedical` or `social` |
|`doc_id` | The index of the document in the current configuration of the QE4PE dataset containing the current segment. |
|`segment_in_doc_id` | The index of the segment inside the current document. |
|`segment_id` | The index of the segment in the current configurations (i.e. concatenating all segments from all documents in order) |
|`translator_pretask_id` | The identifier for the translator according to the `pretask` format before modality assignments: `tXX`. |
|`translator_main_id` | The identifier for the translator according to the `main` task format after modality assignments: `{highlight_modality}_tXX`. |
|`src_lang` | The source language of the segment. For QE4PE, this is always English (`eng`) |
|`tgt_lang` | The target language of the segment: either Italian (`ita`) or Dutch (`nld`). |
|`highlight_modality` | The highlighting modality used for the segment. Values: `no_highlight`, `oracle`, `supervised`, `unsupervised`. |
| **Text statistics** | |
|`src_num_chars` | Length of the source segment in number of characters. |
|`mt_num_chars` | Length of the machine-translated segment in number of characters. |
|`pe_num_chars` | Length of the post-edited segment in number of characters. |
|`src_num_words` | Length of the source segment in number of words. |
|`mt_num_words` | Length of the machine-translated segment in number of words. |
|`pe_num_words` | Length of the post-edited segment in number of words. |
|`num_minor_highlighted_chars` | Number of characters highlighted as minor errors in the machine-translated text. |
|`num_major_highlighted_chars` | Number of characters highlighted as major errors in the machine-translated text. |
|`num_minor_highlighted_words` | Number of words highlighted as minor errors in the machine-translated text. |
|`num_major_highlighted_words` | Number of words highlighted as major errors in the machine-translated text. |
| **Edits statistics** | |
|`num_words_insert` | Number of post-editing insertions computed using [jiwer](https://github.com/jitsi/jiwer). |
|`num_words_delete` | Number of post-editing deletions computed using [jiwer](https://github.com/jitsi/jiwer). |
|`num_words_substitute` | Number of post-editing substitutions computed using [jiwer](https://github.com/jitsi/jiwer). |
|`num_words_unchanged` | Number of post-editing hits computed using [jiwer](https://github.com/jitsi/jiwer). |
|`tot_words_edits` | Total of all edit types for the sentence. |
|`wer` | Word Error Rate score computed between `mt_text` and `pe_text` using [jiwer](https://github.com/jitsi/jiwer). |
|`num_chars_insert` | Number of post-editing insertions computed using [jiwer](https://github.com/jitsi/jiwer). |
|`num_chars_delete` | Number of post-editing deletions computed using [jiwer](https://github.com/jitsi/jiwer). |
|`num_chars_substitute` | Number of post-editing substitutions computed using [jiwer](https://github.com/jitsi/jiwer). |
|`num_chars_unchanged` | Number of post-editing hits computed using [jiwer](https://github.com/jitsi/jiwer). |
|`tot_chars_edits` | Total of all edit types for the sentence. |
|`cer` | Character Error Rate score computed between `mt_text` and `pe_text` using [jiwer](https://github.com/jitsi/jiwer). |
| **Translation quality**| |
|`mt_bleu_max` | Max BLEU score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_bleu_min` | Min BLEU score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_bleu_mean` | Mean BLEU score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_bleu_std` | Standard deviation of BLEU scores between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_chrf_max` | Max chrF score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_chrf_min` | Min chrF score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_chrf_mean` | Mean chrF score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_chrf_std` | Standard deviation of chrF scores between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_ter_max` | Max TER score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_ter_min` | Min TER score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_ter_mean` | Mean TER score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_ter_std` | Standard deviation of TER scores between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_comet_max` | Max COMET sentence-level score for the `mt_text` and all `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
|`mt_comet_min` | Min COMET sentence-level score for the `mt_text` and all `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
|`mt_comet_mean` | Mean COMET sentence-level score for the `mt_text` and all `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters.|
|`mt_comet_std` | Standard deviation of COMET sentence-level scores for the `mt_text` and all `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
|`mt_xcomet_qe` | `Unbabel/XCOMET-XXL` sentence-level quality estimation score for the mt_text. |
|`mt_xcomet_errors` | List of error spans detected by `Unbabel/XCOMET-XXL` for the mt_text. |
|`pe_bleu_max` | Max BLEU score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_bleu_min` | Min BLEU score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_bleu_mean` | Mean BLEU score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_bleu_std` | Standard deviation of BLEU scores between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_chrf_max` | Max chrF score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_chrf_min` | Min chrF score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_chrf_mean` | Mean chrF score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_chrf_std` | Standard deviation of chrF scores between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_ter_max` | Max TER score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_ter_min` | Min TER score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_ter_mean` | Mean TER score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_ter_std` | Standard deviation of TER scores between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_comet_max` | Max COMET sentence-level score for the `pe_text` and all other `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
|`pe_comet_min` | Min COMET sentence-level score for the `pe_text` and all other `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
|`pe_comet_mean` | Mean COMET sentence-level score for the `pe_text` and all other `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters.|
|`pe_comet_std` | Standard deviation of COMET sentence-level scores for the `pe_text` and all other `pe_text` for the corresponding segment using Unbabel/wmt22-comet-da with default parameters. |
|`pe_xcomet_qe` | `Unbabel/XCOMET-XXL` sentence-level quality estimation score for the pe_text. |
|`pe_xcomet_errors` | List of error spans detected by `Unbabel/XCOMET-XXL` for the pe_text. |
| **Behavioral data** | |
|`doc_num_edits` | Total number of edits performed by the translator on the current document. Only the last edit outputs are considered valid. |
|`doc_edit_order` | Index corresponding to the current document edit order. If equal to `doc_id`, the document was edited in the given order. |
|`doc_edit_time` | Total editing time for the current document in seconds (from `start` to `end`, no times ignored) |
|`doc_edit_time_filtered`| Total editing time for the current document in seconds (from `start` to `end`, >5m pauses between logged actions ignored) |
|`doc_keys_per_min` | Keystrokes per minute computed for the current document using `doc_edit_time_filtered`. |
|`doc_chars_per_min` | Characters per minute computed for the current document using `doc_edit_time_filtered`. |
|`doc_words_per_min` | Words per minute computed for the current document using `doc_edit_time_filtered`. |
|`segment_num_edits` | Total number of edits performed by the translator on the current segment. Only edits for the last edit of the doc are considered valid. |
|`segment_edit_order` | Index corresponding to the current segment edit order (only first `enter` action counts). If equal to `segment_in_doc_id`, the segment was edited in the given order. |
|`segment_edit_time` | Total editing time for the current segment in seconds (summed time between `enter`-`exit` blocks) |
|`segment_edit_time_filtered` | Total editing time for the current segment in seconds (>5m pauses between logged actions ignored). |
|`segment_keys_per_min` | Keystrokes per minute computed for the current segment using `segment_edit_time_filtered`. |
|`segment_chars_per_min` | Characters per minute computed for the current segment using `segment_edit_time_filtered`. |
|`segment_words_per_min` | Words per minute computed for the current segment using `segment_edit_time_filtered`. |
|`num_enter_actions` | Number of `enter` actions (focus on textbox) performed by the translator on the current segment during post-editing. |
|`remove_highlights` | If True, the Clear Highlights button was pressed for this segment (always false for `no_highlight` modality). |
|**Texts and annotations**| |
|`src_text` | The original source segment from WMT23 requiring translation. |
|`mt_text` | Output of the `NLLB-3.3B` model when translating `src_text` into `tgt_lang` (default config, 5 beams) |
|`mt_text_highlighted` | Highlighted version of `mt_text` with potential errors according to the `highlight_modality`. |
|`pe_text` | Post-edited version of `mt_text` produced by a professional translator with `highlight_modality`. |
|`mt_pe_word_aligned` | Aligned visual representation of word-level edit operations (I = Insertion, D = Deletion, S = Substitution) (replace `\` with `\n` to show the three aligned rows). |
|`mt_pe_char_aligned` | Aligned visual representation of character-level edit operations (I = Insertion, D = Deletion, S = Substitution) (replace `\` with `\n` to show the three aligned rows). |
|`highlights` | List of dictionaries for highlighted spans with error severity and position, matching XCOMET format for word-level error annotations. |
|**MQM annotations (`main` config only)**| |
|`qa_mt_annotator_id` | Annotator ID for the MQM evaluation of `qa_mt_annotated_text`. |
|`qa_pe_annotator_id` | Annotator ID for the MQM evaluation of `qa_pe_annotated_text`. |
|`qa_mt_esa_rating` | 0-100 quality rating for the `qa_mt_annotated_text` translation, following the [ESA framework](https://aclanthology.org/2024.wmt-1.131/). |
|`qa_pe_esa_rating` | 0-100 quality rating for the `qa_pe_annotated_text` translation, following the [ESA framework](https://aclanthology.org/2024.wmt-1.131/). |
|`qa_mt_annotated_text` | Version of `mt_text` annotated with MQM errors. Might differ (only slightly) from `mt_text`, included since `qa_mt_mqm_errors` indices are computed on this string. |
|`qa_pe_annotated_text` | Version of `pe_text` annotated with MQM errors. Might differ (only slightly) from `pe_text`, included since `qa_pe_mqm_errors` indices are computed on this string. |
|`qa_mt_fixed_text` | Proposed correction of `qa_mt_annotated_text` following MQM annotation. |
|`qa_pe_fixed_text` | Proposed correction of `qa_pe_annotated_text` following MQM annotation. |
|`qa_mt_mqm_errors` | List of error spans detected by the MQM annotator for the `qa_mt_annotated_text`. Each error span dictionary contains the following fields: `text`: the span in `mqm_mt_annotated_text` containing an error. `text_start`: the start index of the error span in `qa_mt_annotated_text`. -1 if no annotated span is present (e.g. for omissions) `text_end`: the end index of the error span in `qa_mt_annotated_text`. -1 if no annotated span is present (e.g. for omissions) `correction`: the proposed correction in `qa_mt_fixed_text` for the error span in `qa_mt_annotated_text`. `correction_start`: the start index of the error span in `mqm_mt_fixed_text`. -1 if no corrected span is present (e.g. for additions) `correction_end`: the end index of the error span in `qa_mt_fixed_text`. -1 if no corrected span is present (e.g. for additions) `description`: an optional error description provided by the annotator. `mqm_category`: the error category assigned by the annotator for the current span. One of: Addition, Omission, Mistranslation, Inconsistency, Untranslated, Punctuation, Spelling, Grammar, Inconsistent Style, Readability, Wrong Register. `severity`: the error severity for the current span. One of: Minor, Major, Neutral. `comment`: an optional comment provided by the annotator for the current span. `edit_order`: index of the edit in the current segment edit order (starting from 1). |
|`qa_pe_mqm_errors` | List of error spans detected by the MQM annotator for the `qa_pe_annotated_text`. Each error span dictionary contains the following fields: `text`: the span in `qa_pe_annotated_text` containing an error. `text_start`: the start index of the error span in `qa_pe_annotated_text`. -1 if no annotated span is present (e.g. for omissions) `text_end`: the end index of the error span in `qa_pe_annotated_text`. -1 if no annotated span is present (e.g. for omissions) `correction`: the proposed correction in `qa_pe_fixed_text` for the error span in `qa_pe_annotated_text`. `correction_start`: the start index of the error span in `qa_pe_fixed_text`. -1 if no corrected span is present (e.g. for additions) `correction_end`: the end index of the error span in `qa_pe_fixed_text`. -1 if no corrected span is present (e.g. for additions) `description`: an optional error description provided by the annotator. `mqm_category`: the error category assigned by the annotator for the current span. One of: Addition, Omission, Mistranslation, Inconsistency, Untranslated, Punctuation, Spelling, Grammar, Inconsistent Style, Readability, Wrong Register. `severity`: the error severity for the current span. One of: Minor, Major, Neutral. `comment`: an optional comment provided by the annotator for the current span. `edit_order`: index of the edit in the current segment edit order (starting from 1). |
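The edit statistics above are reported as computed with [jiwer](https://github.com/jitsi/jiwer). The snippet below is a minimal sketch of how similar word- and character-level error rates can be recomputed from `mt_text` and `pe_text`; it is illustrative only, and the reference/hypothesis orientation is an assumption rather than the exact pipeline used to build the released dataframes.
```python
import jiwer  # pip install jiwer

mt_text = "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs."
pe_text = "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding."

# Word Error Rate between the MT output and its post-edited version,
# treating the post-edit as the reference (assumed orientation).
wer = jiwer.wer(pe_text, mt_text)

# Character Error Rate over the same pair.
cer = jiwer.cer(pe_text, mt_text)

print(f"WER: {wer:.4f}  CER: {cer:.4f}")  # roughly 0.06 for both on this example
```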
### Data Splits
|`config` | `split`| `num_examples` |
|------------------------------------:|-------:|--------------------------------------------------------------:|
|`main` | `train`| 8100 (51 docs i.e. 324 sents x 25 translators) |
|`pretask` | `train`| 950 (6 docs i.e. 38 sents x 25 translators) |
|`posttask` | `train`| 1200 (8 docs i.e. 50 sents x 24 translators) |
|`oracle_pe` | `train`| 1944 (51 docs i.e. 324 sents x 6 translators) |
|`pretask_questionnaire` | `train`| 26 (all translators, including replaced/replacements) |
|`posttask_highlight_questionnaire` | `train`| 19 (all translators for highlight modalities + 1 replacement) |
|`posttask_no_highlight_questionnaire`| `train`| 6 (all translators for `no_highlight` modality) |
#### Train Split
The `train` split contains all triplets (or pairs, when translation from scratch is performed), annotated with the behavioral data produced during translation.
The following is an example of translator `oracle_t1` post-editing segment `3` of `doc20` in the `eng-nld` direction of the `main` task. The fields `mt_pe_word_aligned` and `mt_pe_char_aligned` are shown over three lines to provide a visual understanding of their contents.
```python
{
# Identification
"unit_id": "qe4pe-main-eng-nld-20-3-oracle_t1",
"wmt_id": "doc5",
"wmt_category": "biomedical",
"doc_id": 20,
"segment_in_doc_id": 3,
"segment_id": 129,
"translator_pretask_id": "t4",
"translator_main_id": "oracle_t1",
"src_lang": "eng",
"tgt_lang": "nld",
"highlight_modality": "oracle",
# Text statistics
"src_num_chars": 104,
"mt_num_chars": 136,
"pe_num_chars": 106,
"src_num_words": 15,
"mt_num_words": 16,
"pe_num_words": 16,
# Edits statistics
"num_words_insert": 0,
"num_words_delete": 0,
"num_words_substitute": 1,
"num_words_unchanged": 15,
"tot_words_edits": 1,
"wer": 0.0625,
"num_chars_insert": 0,
"num_chars_delete": 0,
"num_chars_substitute": 6,
"num_chars_unchanged": 100,
"tot_chars_edits": 6,
"cer": 0.0566,
# Translation quality
"mt_bleu_max": 100.0,
"mt_bleu_min": 7.159,
"mt_bleu_mean": 68.687,
"mt_bleu_std": 31.287,
"mt_chrf_max": 100.0,
"mt_chrf_min": 45.374,
"mt_chrf_mean": 83.683,
"mt_chrf_std": 16.754,
"mt_ter_max": 100.0,
"mt_ter_min": 0.0,
"mt_ter_mean": 23.912,
"mt_ter_std": 29.274,
"mt_comet_max": 0.977,
"mt_comet_min": 0.837,
"mt_comet_mean": 0.94,
"mt_comet_std": 0.042,
"mt_xcomet_qe": 0.985,
"mt_xcomet_errors": "[]",
"pe_bleu_max": 100.0,
"pe_bleu_min": 11.644,
"pe_bleu_mean": 61.335,
"pe_bleu_std": 28.617,
"pe_chrf_max": 100.0,
"pe_chrf_min": 53.0,
"pe_chrf_mean": 79.173,
"pe_chrf_std": 13.679,
"pe_ter_max": 100.0,
"pe_ter_min": 0.0,
"pe_ter_mean": 28.814,
"pe_ter_std": 28.827,
"pe_comet_max": 0.977,
"pe_comet_min": 0.851,
"pe_comet_mean": 0.937,
"pe_comet_std": 0.035,
"pe_xcomet_qe": 0.984,
"pe_xcomet_errors": "[]",
# Behavioral data
"doc_num_edits": 103,
"doc_edit_order": 20,
"doc_edit_time": 118,
"doc_edit_time_filtered": 118,
"doc_keys_per_min": 52.37,
"doc_chars_per_min": 584.24,
"doc_words_per_min": 79.83,
"segment_num_edits": 9,
"segment_edit_order": 3,
"segment_edit_time": 9,
"segment_edit_time_filtered": 9,
"segment_keys_per_min": 60.0,
"segment_chars_per_min": 906.67,
"segment_words_per_min": 106.67,
"num_enter_actions": 2,
"remove_highlights": False,
# Texts and annotations
"src_text": "The speed of its emerging growth frequently outpaces the development of quality assurance and education.",
"mt_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.",
"mt_text_highlighted": "De snelheid van de opkomende groei is vaak <minor>sneller</minor> dan de ontwikkeling van kwaliteitsborging en <major>onderwijs.</major>",
"pe_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding.",
"mt_pe_word_aligned": "MT: De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.\n" \
"PE: De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding.\n" \
" S",
"mt_pe_char_aligned": "MT: De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.\n" \
"PE: De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding.\n" \
" SS SS SS ",
"highlights": """[
{
'text': 'sneller',
'severity': 'minor',
'start': 43,
'end': 50
},
{
'text': 'onderwijs.',
'severity': 'major',
'start': 96,
'end': 106
}
]"""
# QA annotations
"qa_mt_annotator_id": 'qa_nld_3',
"qa_pe_annotator_id": 'qa_nld_1',
"qa_mt_esa_rating": 100.0,
"qa_pe_esa_rating": 80.0,
"qa_mt_annotated_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.",
"qa_pe_annotated_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding.",
"qa_mt_fixed_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.",
"qa_pe_fixed_text": "De snelheid van de ontluikende groei overtreft vaak de ontwikkeling van kwaliteitsborging en onderwijs.",
"qa_mt_mqm_errors": "[]",
"qa_pe_mqm_errors": """[
{
"text": "opkomende",
"text_start": 19,
"text_end": 28,
"correction":
"ontluikende",
"correction_start": 19,
"correction_end": 30,
"description": "Mistranslation - not the correct word",
"mqm_category": "Mistranslation",
"severity": "Minor",
"comment": "",
"edit_order": 1
}
]"""
}
```
The text is provided as-is, without further preprocessing or tokenization.
### Dataset Creation
The datasets were parsed from GroTE inputs, logs and outputs for the QE4PE study, available in this repository. Processed dataframes were generated using the `qe4pe process_task_data` command. Refer to the [QE4PE Github repository](https://github.com/gsarti/qe4pe) for additional details. The overall structure and processing of the dataset were inspired by the [DivEMT dataset](https://huggingface.co/datasets/GroNLP/divemt).
### QA Annotations
MQM annotations were collected using Google Sheets, and highlights were parsed from the exported HTML output, ensuring their compliance with well-formedness checks. Out of the original 51 docs (324 segments) in `main`, 24 docs (10 biomedical, 14 social, totaling 148 segments) were sampled at random and annotated by professional translators.
## Additional Information
### Metric signatures
The following signatures correspond to the metrics reported in the processed dataframes:
```shell
# Computed using SacreBLEU: https://github.com/mjpost/sacrebleu
BLEU: case:mixed|eff:yes|tok:13a|smooth:exp|version:2.3.1
ChrF: case:mixed|eff:yes|nc:6|nw:0|space:no|version:2.3.1
TER: case:lc|tok:tercom|norm:no|punct:yes|asian:no|version:2.3.1
# Computed using Unbabel COMET: https://github.com/Unbabel/COMET
Comet: Python3.11.9|Comet2.2.2|fp32|Unbabel/wmt22-comet-da
XComet: Python3.10.12|Comet2.2.1|fp32|Unbabel/XCOMET-XXL
```
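As an illustration of how the lexical scores with these signatures can be reproduced, the sketch below uses the `sacrebleu` Python API with default settings; it is a minimal example, not the exact evaluation script, and assumes sentence-level scoring against a single post-edited reference. The COMET/XCOMET scores additionally require the `unbabel-comet` package and the corresponding checkpoints (not shown here).
```python
import sacrebleu  # pip install sacrebleu

mt_text = "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs."
references = [
    "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding.",
]

# Sentence-level scores of the MT output against the post-edited reference(s),
# using sacreBLEU defaults (13a tokenization, exponential smoothing for BLEU).
bleu = sacrebleu.sentence_bleu(mt_text, references)
chrf = sacrebleu.sentence_chrf(mt_text, references)
ter = sacrebleu.sentence_ter(mt_text, references)

print(f"BLEU: {bleu.score:.2f}  chrF: {chrf.score:.2f}  TER: {ter.score:.2f}")
```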
### Dataset Curators
For problems related to this 🤗 Datasets version, please contact me at [[email protected]](mailto:[email protected]).
### Citation Information
```bibtex
@misc{sarti-etal-2024-qe4pe,
title={{QE4PE}: Word-level Quality Estimation for Human Post-Editing},
author={Gabriele Sarti and Vilém Zouhar and Grzegorz Chrupała and Ana Guerberof-Arenas and Malvina Nissim and Arianna Bisazza},
year={2025},
eprint={2503.03044},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2503.03044},
}
``` |
liguanwei/RandomPromptLoaderTwo | liguanwei | 2025-04-30T12:19:25Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-04-30T11:40:51Z | null | ---
license: apache-2.0
---
|
shylee/eval_DP_so100_gauze_scratch_1e-4_ckpt000750 | shylee | 2025-04-30T12:15:01Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-04-30T12:14:56Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 69,
"total_tasks": 1,
"total_videos": 3,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.FrontCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.TopCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.WristCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
majwadalam/urdu_samples | majwadalam | 2025-04-30T12:11:52Z | 0 | 0 | [
"region:us"
] | [] | 2025-04-30T12:02:44Z | null | ---
dataset_info:
features:
- name: audiopath
dtype: string
- name: text
dtype: string
- name: Normalized text
dtype: string
- name: sampling_rate
dtype: int64
- name: duration
dtype: float64
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 629945847.0
num_examples: 119
download_size: 629168420
dataset_size: 629945847.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.6_num-company_3_dataset_1_for_gen_3 | HungVu2003 | 2025-04-30T12:03:09Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T12:03:08Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 3593470
num_examples: 12500
download_size: 1904001
dataset_size: 3593470
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
EYEDOL/mozilla_commonvoice_naijaYoruba1_preprocessed_train_batch_2 | EYEDOL | 2025-04-30T12:02:00Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T11:58:54Z | null | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: input_length
dtype: int64
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
- name: labels_length
dtype: int64
splits:
- name: train
num_bytes: 13935584804.75
num_examples: 12962
download_size: 3088877595
dataset_size: 13935584804.75
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
LITTLEHIGH/wiki_kilt_baai_1.5B_bin | LITTLEHIGH | 2025-04-30T11:59:13Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-04-30T11:59:13Z | null | ---
license: apache-2.0
---
|
shylee/eval_DP_so100_gauze_IMAGENET_1e-5_ckpt003000 | shylee | 2025-04-30T11:43:21Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-04-30T11:43:14Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 275,
"total_tasks": 1,
"total_videos": 3,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.FrontCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.TopCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.WristCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
kaiserbuffle/so101test | kaiserbuffle | 2025-04-30T11:40:02Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so101",
"tutorial"
] | [
"robotics"
] | 2025-04-30T11:39:57Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so101
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101",
"total_episodes": 2,
"total_frames": 1773,
"total_tasks": 1,
"total_videos": 4,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.base": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
laurabraad/bertscore_df_steering | laurabraad | 2025-04-30T11:38:32Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T11:38:30Z | null | ---
dataset_info:
features:
- name: org_sentence
dtype: string
- name: pred_sentence
dtype: string
- name: flip
dtype: string
- name: precision
dtype: float64
- name: recall
dtype: float64
- name: f1
dtype: float64
splits:
- name: train
num_bytes: 204998
num_examples: 573
download_size: 91558
dataset_size: 204998
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
milan-velinovski/ring-feature-chat-dataset-v2-direct | milan-velinovski | 2025-04-30T11:37:26Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T11:37:25Z | null | ---
dataset_info:
features:
- name: ring_id
dtype: string
- name: image_view
dtype: string
- name: category
dtype: string
- name: category_scope
dtype: string
- name: features_prompted
sequence: string
- name: messages
sequence:
- name: role
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 44347
num_examples: 26
download_size: 22176
dataset_size: 44347
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
orgcatorg/wikipedia | orgcatorg | 2025-04-30T11:36:51Z | 76 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-06-06T06:18:34Z | null | ---
dataset_info:
- config_name: bn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: en_url
dtype: string
- name: en_title
dtype: string
- name: en_text
dtype: string
splits:
- name: train
num_bytes: 1167115208
num_examples: 156143
download_size: 441690826
dataset_size: 1167115208
- config_name: hi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: en_url
dtype: string
- name: en_title
dtype: string
- name: en_text
dtype: string
splits:
- name: train
num_bytes: 793684300
num_examples: 166726
download_size: 302408181
dataset_size: 793684300
- config_name: id
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1177273270
num_examples: 688206
download_size: 610697793
dataset_size: 1177273270
- config_name: ms
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 442552369
num_examples: 373189
download_size: 220484368
dataset_size: 442552369
- config_name: th
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: en_url
dtype: string
- name: en_title
dtype: string
- name: en_text
dtype: string
splits:
- name: train
num_bytes: 4899327
num_examples: 48408
download_size: 2146000
dataset_size: 4899327
- config_name: tl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: en_url
dtype: string
- name: en_title
dtype: string
- name: en_text
dtype: string
splits:
- name: train
num_bytes: 53980052
num_examples: 48408
download_size: 30423055
dataset_size: 53980052
- config_name: vi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: en_url
dtype: string
- name: en_title
dtype: string
- name: en_text
dtype: string
splits:
- name: train
num_bytes: 1938478921
num_examples: 1294721
download_size: 896915549
dataset_size: 1938478921
configs:
- config_name: bn
data_files:
- split: train
path: bn/train-*
- config_name: hi
data_files:
- split: train
path: hi/train-*
- config_name: id
data_files:
- split: train
path: id/train-*
- config_name: ms
data_files:
- split: train
path: ms/train-*
- config_name: th
data_files:
- split: train
path: th/train-*
- config_name: tl
data_files:
- split: train
path: tl/train-*
- config_name: vi
data_files:
- split: train
path: vi/train-*
---
|
CarolinePascal/plug_socket_mixed | CarolinePascal | 2025-04-30T11:33:30Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"audio"
] | [
"robotics"
] | 2025-04-30T11:32:57Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- audio
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 50,
"total_frames": 15095,
"total_tasks": 1,
"total_videos": 150,
"total_audio": 150,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:50"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"audio_path": "audio/chunk-{episode_chunk:03d}/{audio_key}/episode_{episode_index:06d}.m4a",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.camera_1": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false
}
},
"observation.images.camera_2": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false
}
},
"observation.images.camera_3": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false
}
},
"observation.audio.camera_1": {
"dtype": "audio",
"shape": [
1,
1
],
"names": "channels",
"info": {
"has_audio": true,
"audio.channels": 1,
"audio.codec": "aac",
"audio.bit_rate": 69219,
"audio.sample_rate": 48000,
"audio.bit_depth": null,
"audio.channel_layout": "mono"
}
},
"observation.audio.camera_2": {
"dtype": "audio",
"shape": [
1,
1
],
"names": "channels",
"info": {
"has_audio": true,
"audio.channels": 1,
"audio.codec": "aac",
"audio.bit_rate": 69223,
"audio.sample_rate": 48000,
"audio.bit_depth": null,
"audio.channel_layout": "mono"
}
},
"observation.audio.camera_3": {
"dtype": "audio",
"shape": [
1,
1
],
"names": "channels",
"info": {
"has_audio": true,
"audio.channels": 1,
"audio.codec": "aac",
"audio.bit_rate": 69251,
"audio.sample_rate": 48000,
"audio.bit_depth": null,
"audio.channel_layout": "mono"
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
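As a rough illustration of how the tabular part of this layout can be consumed, here is a minimal sketch using the `datasets` library and the parquet config declared in the YAML front matter above; note that the video and audio streams referenced by `video_path` and `audio_path` are stored in separate files and are not returned by this call.
```python
from datasets import load_dataset

# Loads only the per-frame tabular features (action, observation.state, timestamps, indices);
# videos and audio live under videos/ and audio/ as described in info.json above.
ds = load_dataset("CarolinePascal/plug_socket_mixed", split="train")

frame = ds[0]
print(frame["action"])             # 6-dim joint targets
print(frame["observation.state"])  # 6-dim joint readings
print(frame["episode_index"], frame["frame_index"], frame["timestamp"])
```
For full episode playback with synchronized video and audio, the LeRobot library itself is likely the more convenient entry point.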
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
alucchi/Qwen2.5-1.5B-Instruct_n1000_e12_oadam0.0001_b16_1_a10_flash_compact | alucchi | 2025-04-30T11:15:18Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T11:15:06Z | null | ---
dataset_info:
- config_name: default
features:
- name: prompt
dtype: string
- name: generated_text
dtype: string
- name: generated_grid_rect
sequence:
sequence: int64
- name: task_solution
sequence:
sequence:
sequence: int64
- name: match
dtype: int64
splits:
- name: train
num_bytes: 56512
num_examples: 10
download_size: 14742
dataset_size: 56512
- config_name: main
features:
- name: prompt
dtype: string
- name: generated_text
dtype: string
- name: generated_grid_rect
sequence:
sequence: int64
- name: task_solution
sequence:
sequence:
sequence: int64
- name: match
dtype: int64
splits:
- name: train
num_bytes: 56512
num_examples: 10
download_size: 14742
dataset_size: 56512
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: main
data_files:
- split: train
path: main/train-*
---
|
dgambettaphd/D_llm2_gen5_run0_X_doc1000_synt64_tot128_lr5em5_SYNLAST | dgambettaphd | 2025-04-30T11:13:19Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T11:12:13Z | null | ---
dataset_info:
features:
- name: id_doc
dtype: int64
- name: text
dtype: string
- name: dataset
dtype: string
- name: gen
dtype: int64
- name: synt
dtype: int64
- name: MPP
dtype: float64
splits:
- name: train
num_bytes: 11848576
num_examples: 21000
download_size: 7145287
dataset_size: 11848576
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ParkSY/data_nerf_diversity_more_concept_with_colormap | ParkSY | 2025-04-30T11:12:51Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T10:24:43Z | null | ---
dataset_info:
features:
- name: input_image
dtype: string
- name: edit_prompt
dtype: string
- name: edited_image
dtype: string
- name: label
dtype: int64
- name: depthmap
dtype: string
- name: water_color
dtype: string
splits:
- name: train
num_bytes: 2406306
num_examples: 6544
download_size: 211289
dataset_size: 2406306
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
linoyts/wan_wrap_effect | linoyts | 2025-04-30T11:12:11Z | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"modality:text",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us",
"text-to-video"
] | [] | 2025-04-30T11:12:08Z | null |
---
license: apache-2.0
tags:
- text-to-video
---
This dataset contains videos generated using Wan 2.1 T2V 14B.
|
timescale/pgai-docs | timescale | 2025-04-30T11:10:13Z | 123 | 0 | [
"license:postgresql",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-17T15:50:39Z | null | ---
license: postgresql
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: path
dtype: string
- name: title
dtype: string
- name: contents
dtype: string
splits:
- name: train
num_bytes: 191373
num_examples: 16
download_size: 83458
dataset_size: 191373
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.4_num-company_3_dataset_1_for_gen_8 | HungVu2003 | 2025-04-30T10:52:00Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T10:51:59Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2821591
num_examples: 12498
download_size: 1529759
dataset_size: 2821591
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Rudrakshmital19/report4-dialogsum | Rudrakshmital19 | 2025-04-30T10:48:32Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T10:48:30Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: make
dtype: string
- name: model
dtype: string
- name: year
dtype: string
- name: registration_number
dtype: string
- name: engine_number
dtype: string
- name: chassis_number
dtype: string
- name: fuel_type
dtype: string
- name: mileage
dtype: string
- name: inspection_type
dtype: string
- name: requester_name
dtype: string
- name: section
sequence: string
- name: dialogue
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 4757
num_examples: 1
- name: validation
num_bytes: 6084
num_examples: 1
- name: test
num_bytes: 4790
num_examples: 1
download_size: 57882
dataset_size: 15631
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
dijisoz23/nutuk_mid_dataset | dijisoz23 | 2025-04-30T10:28:42Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T10:14:25Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: question_type
dtype: string
splits:
- name: train
num_bytes: 735396
num_examples: 2580
download_size: 373475
dataset_size: 735396
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ttn1410/Volume_smr | ttn1410 | 2025-04-30T10:27:54Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-29T17:03:18Z | null | ---
dataset_info:
features:
- name: reports
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 39490384
num_examples: 15390
download_size: 7484076
dataset_size: 39490384
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Neooooo/structured_paper_summarization | Neooooo | 2025-04-30T10:13:24Z | 126 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-25T02:24:04Z | null | ---
dataset_info:
features:
- name: title
dtype: string
- name: keywords
sequence: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 1851098097.8341584
num_examples: 145064
- name: test
num_bytes: 78063099.39124106
num_examples: 6653
download_size: 626249553
dataset_size: 1929161197.2253995
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# structured_paper_summarization
A **151 k‑example** dataset of chat‐style *prompt → structured abstract* pairs, built from ~19 000 research papers across business, management, information‑systems and social‑science domains. Each example shows the full paper (body text) being summarised into a five‑section Emerald‑style structured abstract (Purpose, Design/methodology/approach, Findings, Practical implications, Originality/value).
---
## Why this dataset?
Large‑language models (LLMs) frequently struggle to:
1. **Condense long scientific prose** into factual, concise summaries.
2. **Follow rigid output structures** (e.g. subsection headings).
This dataset targets both challenges simultaneously, enabling fine‑tuning or instruction‑tuning of LLMs that must output *structured* scholarly abstracts.
---
## At a glance
| Split | Rows | Size (compressed) |
|-------|------|-------------------|
| train | **145 067** | 626 MB |
| test | **6 650** | 29 MB |
| **Total** | **151 717** | ≈655 MB |
<sup>Counts taken from the Hugging Face viewer on 2025‑04‑29.</sup>
---
## Data schema
```text
{
title: string # Paper title
keywords: list[string] # Author‑supplied keywords (0‑23)
messages: list[dict] length ≥ 2 # ChatML‑style conversation
}
```
### `messages` format
Each list contains alternating dictionaries with:
- `role`: either `"user"` or `"assistant"`.
- `content`: UTF‑8 text.
Typical pattern (2 items):
```jsonc
[
{
"role": "user",
"content": "Summarize the following paper into structured abstract.\n\n<full paper text>"
},
{
"role": "assistant",
"content": "Purpose: …\nDesign/methodology/approach: …\nFindings: …\nPractical implications: …\nOriginality/value: …"
}
]
```
Some papers are longer and may be truncated to ~8 k tokens.
---
## Loading the data
```python
from datasets import load_dataset
ds_train = load_dataset(
"Neooooo/structured_paper_summarization", split="train"
)
print(ds_train[0]["messages"][1]["content"][:500])
```
The dataset is stored as Apache **Parquet** with streaming support; pass `streaming=True` to `load_dataset` to start iterating within a few seconds without downloading the files locally.
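Building on the example above, the `messages` field can usually be rendered into a single training string with a tokenizer chat template — useful for the instruction-tuning use case listed below. A minimal sketch (the tokenizer name is an illustrative assumption; any chat model with a template works):
```python
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("Neooooo/structured_paper_summarization", split="train")
tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")  # illustrative choice

# Render one prompt -> structured-abstract pair into a single training string.
text = tok.apply_chat_template(ds[0]["messages"], tokenize=False)
print(text[:500])
```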
---
## Suggested use‑cases
* **Instruction‑tuning** chat LLMs for long‑document summarisation.
* Research on **controlled text generation** and output formatting.
* Training **retrieval‑augmented systems** that must cite sections of the source paper.
---
## Source & construction
1. Full‑text articles were collected via institutional access to the *Emerald Insight* corpus (open‑access + subscription).
2. The canonical *structured abstract* supplied by each journal was extracted as ground truth.
3. The article’s main body was embedded into a prompt of the form shown above.
4. Data were converted to Hugging Face `datasets` ➜ auto‑parquet.
No additional manual cleaning was performed; typos and OCR artefacts may persist.
---
## Licensing & acceptable use
The article texts are **the copyright of their original publishers/authors** and are redistributed here *solely for non‑commercial research*. By using this dataset you agree to:
- **Not** redistribute the raw paper texts.
- Cite the original articles in any derivative work.
- Abide by Emerald’s usage policy and your local copyright laws.
The **metadata & structured abstracts** are released under **CC BY‑NC 4.0**. For commercial licensing, please contact the original rights‑holders.
---
## Citation
If you use this dataset, please cite:
```text
@dataset{hu_2025_structured_prompts,
author = {Xingyu Hu},
title = {structured_paper_summarization},
year = 2025,
url = {https://huggingface.co/datasets/Neooooo/structured_paper_summarization},
note = {Version 1.0}
}
```
---
## Contributions
Feel free to open PRs to:
- Fix metadata errors.
- Provide additional splits (validation, domain‑specific subsets).
- Add scripts for evaluation or preprocessing.
---
*Happy summarising!*
|
HungVu2003/opt-350m_beta_1.0_alpha_0.6_num-company_3_dataset_2_for_gen_2 | HungVu2003 | 2025-04-30T10:11:13Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T10:11:12Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2960469
num_examples: 12500
download_size: 1441478
dataset_size: 2960469
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Blancy/first-filtered-openr1-math-220k | Blancy | 2025-04-30T10:03:39Z | 86 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-23T07:39:50Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: problem_type
dtype: string
- name: question_type
dtype: string
- name: source
dtype: string
- name: uuid
dtype: string
- name: is_reasoning_complete
sequence: bool
- name: generations
dtype: string
- name: correctness_math_verify
sequence: bool
- name: correctness_llama
dtype: 'null'
- name: finish_reasons
sequence: string
- name: correctness_count
dtype: int64
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 2289054244
num_examples: 64968
download_size: 1017569408
dataset_size: 2289054244
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
EYEDOL/naija_commonvoice_naijaEnglish1_preprocessed_train_batch_3 | EYEDOL | 2025-04-30T09:56:45Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T09:56:34Z | null | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: input_length
dtype: int64
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
- name: labels_length
dtype: int64
splits:
- name: test
num_bytes: 448700764.0
num_examples: 341
download_size: 199057603
dataset_size: 448700764.0
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
EYEDOL/naija_commonvoice_naijaEnglish1_preprocessed_train_batch_1 | EYEDOL | 2025-04-30T09:53:46Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T09:52:20Z | null | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: input_length
dtype: int64
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
- name: labels_length
dtype: int64
splits:
- name: train
num_bytes: 3646837252.875
num_examples: 2721
download_size: 1649827139
dataset_size: 3646837252.875
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.4_num-company_3_dataset_2_for_gen_7 | HungVu2003 | 2025-04-30T09:39:36Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T09:39:35Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 3066102
num_examples: 12498
download_size: 1102086
dataset_size: 3066102
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
chsekhar2u/tripmate | chsekhar2u | 2025-04-30T09:29:15Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-04-30T09:29:09Z | null | ---
license: apache-2.0
---
|
twinkle-ai/tw-reasoning-instruct-50k | twinkle-ai | 2025-04-30T09:21:30Z | 77 | 4 | [
"task_categories:text-generation",
"language:en",
"language:zh",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"Taiwan",
"R.O.C",
"zh-tw",
"reasoning",
"legal",
"instruct",
"cha"
] | [
"text-generation"
] | 2025-04-01T09:01:52Z | 3 | ---
license: mit
task_categories:
- text-generation
language:
- en
- zh
tags:
- Taiwan
- R.O.C
- zh-tw
- reasoning
- legal
- instruct
- cha
size_categories:
- 10K<n<100K
pretty_name: Traditional Chinese Reasoning Instructions for Taiwan-Based NLP Tasks
---
# Dataset Card for tw-reasoning-instruct-50k

<!-- Provide a quick summary of the dataset. -->
**tw-reasoning-instruct-50k** is a curated Traditional Chinese (Taiwan) reasoning dataset designed to improve language models on step-by-step logical thinking, explanation generation, and language understanding tasks. The data covers diverse topics such as everyday reasoning, educational dialogue, and legal reasoning, and pairs an explicit "thinking steps" section with a "final answer", guiding models to reason and respond in a clearer, more structured way, with particular emphasis on applications that fit Taiwan's local language and cultural context.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
This dataset is designed for developing Traditional Chinese large reasoning models (LRMs) with strong reasoning ability, and its content is deeply grounded in Taiwan's linguistic and cultural context. Each record typically contains a user question, the model's response, and a clear reasoning process. The design goal is to cultivate human-like, step-by-step thinking and explanation skills in models.
This dataset is suitable for training and evaluating the following tasks:
- Everyday reasoning in Taiwanese society
- Educational dialogue
- Explanation-oriented generation tasks
All content is written or adapted in Traditional Chinese (zh-tw) to match the phrasing and context commonly used in Taiwan.
- **Curated by:** [Huang Liang Hsun](https://huggingface.co/lianghsun)
- **Funded by:** [APMIC](https://www.apmic.ai/)
- **Shared by:** [Huang Liang Hsun](https://huggingface.co/lianghsun)
- **Language(s) (NLP):** Traditional Chinese and English
- **License:** MIT
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** [lianghsun/tw-reasoning-instruct-50k](https://huggingface.co/datasets/lianghsun/tw-reasoning-instruct-50k)
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
This dataset is mainly intended for training and evaluating Traditional Chinese language models on the following tasks:
- Logical step-by-step reasoning
- Explanation generation, where answers come with clear justifications
- Educational dialogue and knowledge transfer
- Comprehension and analysis tasks in legal, academic, or general-knowledge domains
It is especially suited to strengthening a model's logical inference and expression in Traditional Chinese contexts.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
- Automatically generating legal opinions or giving actual legal advice
- Use in high-risk decision systems such as medical diagnosis or financial investment advice
- Any malicious use that violates social ethics, such as spreading misinformation, manipulating public opinion, or fabricating conversations
- Tasks that do not match the Traditional Chinese (Taiwan) context, such as Simplified Chinese or mainland-usage analysis, which may lead to degraded performance
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
```json
{
"input": # 使用者提出的問題
"think": # 模型的推理思考過程(以 <think> 標記開頭)
"output": # 模型對使用者問題的最終回應
"conversations": [
{"from": "human", "value": ""}, # 與 input 相同的問題
{"from": "gpt", "value": ""} # 包含推理與回答的完整對話內容
],
"seed": # 該問題的主題或原始問題意圖描述
}
```
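A minimal loading sketch with the `datasets` library, following the schema above; the `train` split name is an assumption, since the card does not list its splits explicitly:
```python
from datasets import load_dataset

# Split name "train" is an assumption; adjust if the Hub page lists different splits.
ds = load_dataset("twinkle-ai/tw-reasoning-instruct-50k", split="train")

example = ds[0]
print(example["input"])   # the user's question
print(example["think"])   # the step-by-step reasoning trace (starts with a <think> tag)
print(example["output"])  # the final answer shown to the user
```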
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
This dataset aims to fill the current gap in high-quality reasoning training data for Traditional Chinese. Most existing Chinese corpora focus on question answering, casual chat, or simple instruction responses, and lack data that trains models in step-by-step thinking, multi-layered logical analysis, and answers supported by reasons. This dataset focuses on collecting and producing reasoning-oriented data covering education, law, academia, philosophy, and social issues, with an emphasis on expressing human logical thinking in Traditional Chinese. Its goals are:
- Build standard logical-reasoning data aligned with Taiwan's language and culture.
- Provide training samples that help models produce more explanatory, logical, and knowledgeable outputs.
- Support model development for AI tasks such as educational applications, legal tech, and logical understanding.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This dataset centers on Traditional Chinese (Taiwan) and may not transfer well to other contexts. The reasoning content is model-generated; although it emphasizes logical rigor, it may still contain errors or biases and should be verified carefully before use. Direct application in high-risk domains such as law, medicine, or finance is not recommended. Educational applications should likewise include human review to avoid over-reliance on model outputs.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be fully aware of the dataset's potential biases and limitations in language coverage, logical reasoning, and factual accuracy. It is recommended for research or model-training use only; avoid applying it directly in high-risk scenarios such as legal or medical decision-making. All outputs should be reviewed and verified by humans to ensure reliability and appropriateness.
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
If you use this dataset, please cite:
```bibtex
@misc{huang2025twreasoninginstruct,
author = {Twinkle AI, Huang, Liang Hsun},
title = {tw-reasoning-instruct: Traditional Chinese Reasoning Instructions for Taiwan-Based NLP Tasks},
year = {2025},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/datasets/lianghsun/tw-reasoning-instruct}},
note = {A curated reasoning dataset in Traditional Chinese (Taiwan), designed for instruction-tuned LLM development.}
}
```
## Dataset Card Authors
[Twinkle AI](https://huggingface.co/twinkle-ai)
## Dataset Card Contact
[Twinkle AI](https://huggingface.co/twinkle-ai) |
SayantanJoker/Shrutilipi_Hindi_resampled_44100_merged_15 | SayantanJoker | 2025-04-30T09:14:50Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T06:19:44Z | null | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
- name: file_name
dtype: string
splits:
- name: train
num_bytes: 20643582737.575
num_examples: 34675
download_size: 20582018763
dataset_size: 20643582737.575
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
korbih/ui-sensei-curriculum-0-test-20250424_213955-completepostprocessed-grpo-format | korbih | 2025-04-30T09:14:48Z | 8 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-29T14:06:54Z | null | ---
dataset_info:
features:
- name: base_uid
dtype: string
- name: step
dtype: int32
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: image_name
dtype: string
- name: start_url
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 5843914.0
num_examples: 81
download_size: 4959265
dataset_size: 5843914.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
korbih/ui-sensei-curriculum-0-test-20250424_213955-complete-double_checked | korbih | 2025-04-30T09:12:49Z | 4 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-29T14:21:58Z | null | ---
dataset_info:
features:
- name: task_id
dtype: string
- name: trial_number
dtype: int32
- name: task_description
dtype: string
- name: start_url
dtype: string
- name: is_success
dtype: bool
- name: is_shortest
dtype: bool
- name: evaluator_thoughts
dtype: string
- name: evaluator_status
dtype: string
- name: run_error
dtype: string
- name: step_index
dtype: int32
- name: url_at_step
dtype: string
- name: prompt
dtype: string
- name: action
dtype: string
- name: screenshot
struct:
- name: bytes
dtype: binary
- name: path
dtype: string
- name: annotated_screenshot
struct:
- name: bytes
dtype: binary
- name: path
dtype: string
- name: is_success_original
dtype: bool
- name: evaluator_thoughts_original
dtype: string
- name: double_checked
dtype: bool
splits:
- name: train
num_bytes: 2158616644
num_examples: 5550
download_size: 1092809999
dataset_size: 2158616644
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
linoyts/wan_wiping_surface | linoyts | 2025-04-30T09:08:10Z | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"modality:text",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us",
"text-to-video"
] | [] | 2025-04-30T09:08:08Z | null |
---
license: apache-2.0
tags:
- text-to-video
---
This dataset contains videos generated using Wan 2.1 T2V 14B.
|
hypaai/nv_yo_2_2_wspr | hypaai | 2025-04-30T09:07:37Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T08:53:50Z | null | ---
dataset_info:
features:
- name: input_features
sequence:
sequence:
sequence: float32
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 49803731024.0
num_examples: 51846
download_size: 6666267551
dataset_size: 49803731024.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
russwest404/kilt_index | russwest404 | 2025-04-30T09:06:22Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-04-30T07:16:28Z | null | ---
license: apache-2.0
---
|
danelbaz/some_name_for_hub | danelbaz | 2025-04-30T09:04:22Z | 235 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-17T07:37:09Z | null | ---
dataset_info:
config_name: None--evals
features:
- name: n
dtype: int64
- name: acc_naive
dtype: float64
- name: acc_weighted
dtype: float64
- name: acc_maj
dtype: float64
splits:
- name: train
num_bytes: 32
num_examples: 1
download_size: 1961
dataset_size: 32
configs:
- config_name: None--evals
data_files:
- split: train
path: None--evals/train-*
---
|
SayantanJoker/Shrutilipi_Hindi_resampled_44100_merged_14 | SayantanJoker | 2025-04-30T09:02:48Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T08:42:48Z | null | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
- name: file_name
dtype: string
splits:
- name: train
num_bytes: 29755767031.0
num_examples: 50000
download_size: 29657721744
dataset_size: 29755767031.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.6_num-company_3_dataset_1_for_gen_2 | HungVu2003 | 2025-04-30T08:58:41Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T08:58:39Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 3630075
num_examples: 12500
download_size: 1911834
dataset_size: 3630075
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kothasuhas/multi-gold-37M-e2-N1.50M-mix8-iter3 | kothasuhas | 2025-04-30T08:51:33Z | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T08:49:53Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3866043980
num_examples: 1500000
- name: validation
num_bytes: 8574979
num_examples: 1000
download_size: 2439224326
dataset_size: 3874618959
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
maritaca-ai/oab-bench | maritaca-ai | 2025-04-29T19:35:21Z | 2 | 2 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-29T19:24:44Z | 2 | ---
dataset_info:
- config_name: questions
features:
- name: question_id
dtype: string
- name: category
dtype: string
- name: statement
dtype: string
- name: turns
sequence: string
- name: values
sequence: float64
- name: system
dtype: string
splits:
- name: train
num_bytes: 381205
num_examples: 105
download_size: 116453
dataset_size: 381205
- config_name: guidelines
features:
- name: question_id
dtype: string
- name: answer_id
dtype: string
- name: model_id
dtype: string
- name: choices
list:
- name: index
dtype: int64
- name: tstamp
dtype: float64
- name: turns
sequence: string
- name: tstamp
dtype: float64
splits:
- name: train
num_bytes: 407473
num_examples: 105
download_size: 114197
dataset_size: 407473
configs:
- config_name: questions
data_files:
- split: train
path: questions/train-*
- config_name: guidelines
data_files:
- split: train
path: guidelines/train-*
---
# OAB-Bench
| [**Paper**](https://arxiv.org/abs/XXXX.XXXXX) | [**Code**](https://github.com/maritaca-ai/oab-bench) |
OAB-Bench is a benchmark for evaluating Large Language Models (LLMs) on legal writing tasks, specifically designed for the Brazilian Bar Examination (OAB). The benchmark comprises 105 questions across seven areas of law from recent editions of the exam.
- OAB-Bench evaluates LLMs on their ability to write legal documents and answer discursive questions
- The benchmark includes comprehensive evaluation guidelines used by human examiners
- Results show that frontier models like Claude-3.5 Sonnet can achieve passing grades (≥6.0) in most exams
- The evaluation pipeline uses LLMs as automated judges, achieving strong correlation with human scores
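The YAML front matter above defines two configurations, `questions` and `guidelines`, each with a single `train` split. A minimal loading sketch (field names follow the feature lists above):
```python
from datasets import load_dataset

# 105 exam questions: statement, per-turn prompts, and per-turn grading values.
questions = load_dataset("maritaca-ai/oab-bench", "questions", split="train")
# The matching evaluation guidelines used to grade each question.
guidelines = load_dataset("maritaca-ai/oab-bench", "guidelines", split="train")

q = questions[0]
print(q["category"], q["question_id"])
print(q["turns"][0][:300])
```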
## Results
Our evaluation of four LLMs on OAB-Bench shows:
| Model | Average Score | Passing Rate | Best Area |
| --- | --- | --- | --- |
| Claude-3.5 Sonnet | 7.93 | 100% | Constitutional Law (8.43) |
| GPT-4o | 6.87 | 86% | Civil Law (7.42) |
| Sabiá-3 | 6.55 | 76% | Labor Law (7.17) |
| Qwen2.5-72B | 5.21 | 24% | Administrative Law (7.00) |
The LLM judge (o1) shows strong correlation with human scores when evaluating approved exams, with Mean Absolute Error (MAE) ranging from 0.04 to 0.28 across different law areas.
## Citation
If you find this work helpful, please cite our paper:
```
@inproceedings{pires2025automatic,
title={Automatic Legal Writing Evaluation of LLMs},
author={Pires, Ramon and Malaquias Junior, Roseval and Nogueira, Rodrigo},
booktitle={Proceedings of the International Conference on Artificial Intelligence and Law (ICAIL)},
year={2025}
}
```
|
Qwen/PolyMath | Qwen | 2025-04-29T08:00:02Z | 18 | 3 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2504.18428",
"region:us"
] | [] | 2025-04-25T14:37:14Z | 3 | ---
configs:
- config_name: ar
data_files:
- split: top
path: ar/top.parquet
- split: high
path: ar/high.parquet
- split: medium
path: ar/medium.parquet
- split: low
path: ar/low.parquet
- config_name: bn
data_files:
- split: top
path: bn/top.parquet
- split: high
path: bn/high.parquet
- split: medium
path: bn/medium.parquet
- split: low
path: bn/low.parquet
- config_name: de
data_files:
- split: top
path: de/top.parquet
- split: high
path: de/high.parquet
- split: medium
path: de/medium.parquet
- split: low
path: de/low.parquet
- config_name: en
data_files:
- split: top
path: en/top.parquet
- split: high
path: en/high.parquet
- split: medium
path: en/medium.parquet
- split: low
path: en/low.parquet
- config_name: es
data_files:
- split: top
path: es/top.parquet
- split: high
path: es/high.parquet
- split: medium
path: es/medium.parquet
- split: low
path: es/low.parquet
- config_name: fr
data_files:
- split: top
path: fr/top.parquet
- split: high
path: fr/high.parquet
- split: medium
path: fr/medium.parquet
- split: low
path: fr/low.parquet
- config_name: id
data_files:
- split: top
path: id/top.parquet
- split: high
path: id/high.parquet
- split: medium
path: id/medium.parquet
- split: low
path: id/low.parquet
- config_name: it
data_files:
- split: top
path: it/top.parquet
- split: high
path: it/high.parquet
- split: medium
path: it/medium.parquet
- split: low
path: it/low.parquet
- config_name: ja
data_files:
- split: top
path: ja/top.parquet
- split: high
path: ja/high.parquet
- split: medium
path: ja/medium.parquet
- split: low
path: ja/low.parquet
- config_name: ko
data_files:
- split: top
path: ko/top.parquet
- split: high
path: ko/high.parquet
- split: medium
path: ko/medium.parquet
- split: low
path: ko/low.parquet
- config_name: ms
data_files:
- split: top
path: ms/top.parquet
- split: high
path: ms/high.parquet
- split: medium
path: ms/medium.parquet
- split: low
path: ms/low.parquet
- config_name: pt
data_files:
- split: top
path: pt/top.parquet
- split: high
path: pt/high.parquet
- split: medium
path: pt/medium.parquet
- split: low
path: pt/low.parquet
- config_name: ru
data_files:
- split: top
path: ru/top.parquet
- split: high
path: ru/high.parquet
- split: medium
path: ru/medium.parquet
- split: low
path: ru/low.parquet
- config_name: sw
data_files:
- split: top
path: sw/top.parquet
- split: high
path: sw/high.parquet
- split: medium
path: sw/medium.parquet
- split: low
path: sw/low.parquet
- config_name: te
data_files:
- split: top
path: te/top.parquet
- split: high
path: te/high.parquet
- split: medium
path: te/medium.parquet
- split: low
path: te/low.parquet
- config_name: th
data_files:
- split: top
path: th/top.parquet
- split: high
path: th/high.parquet
- split: medium
path: th/medium.parquet
- split: low
path: th/low.parquet
- config_name: vi
data_files:
- split: top
path: vi/top.parquet
- split: high
path: vi/high.parquet
- split: medium
path: vi/medium.parquet
- split: low
path: vi/low.parquet
- config_name: zh
data_files:
- split: top
path: zh/top.parquet
- split: high
path: zh/high.parquet
- split: medium
path: zh/medium.parquet
- split: low
path: zh/low.parquet
---
<div align="center">
<h2>
PolyMath: Evaluating Mathematical Reasoning in Multilingual Contexts
</h2>
</div>
<div align="center">
<a href="https://arxiv.org/abs/2504.18428">
<img src="https://img.shields.io/badge/arXiv-2504.18428-b31b1b.svg?logo=arxiv" alt="arXiv Badge"/>
</a>
<a href="https://github.com/QwenLM/PolyMath">
<img src="https://img.shields.io/badge/GitHub-Code-black?logo=github" alt="GitHub Badge"/>
</a>
</div>
**PolyMath** is a multilingual mathematical reasoning benchmark **covering 18 languages** and **4 easy-to-hard difficulty levels**. Our benchmark ensures *difficulty comprehensiveness*, *language diversity*, and *high-quality translation*, making it a highly discriminative multilingual mathematical benchmark in the era of reasoning LLMs.
- 📈 **Broad Difficulty Range:** PolyMath defines and partitions mathematical difficulty across four levels using two core dimensions: **Thought Depth** and **Knowledge Breadth**, ranging from K-12 to Olympiad and advanced frontier mathematics, with 125 problems per language at each level.
<div align="center">
<img src="_ASSETS/level.png" alt="logo" width="85%"/>
</div>
- 🌍 **Language Diversity:** Each problem in PolyMath is available in **18 parallel language versions**, encompassing **over 75% of the world’s native speakers** and major language families, ensuring diversity across both high-resource and low-resource languages.
<div align="center">
<img src="_ASSETS/language.png" alt="logo" width="50%"/>
</div>
- 🧑🏫 **High-Quality Annotation:** Each problem translation is **calibrated by language experts**, avoiding direct use of LLM-generated outputs and ensuring precise terminology and logical clarity.
<div align="center">
<img src="_ASSETS/human.png" alt="logo" width="90%"/>
</div>
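Per the config list in the YAML front matter, each of the 18 languages is its own configuration with `top`/`high`/`medium`/`low` splits (125 problems per language at each level). A minimal loading sketch:
```python
from datasets import load_dataset

# One language configuration, one difficulty level.
en_medium = load_dataset("Qwen/PolyMath", "en", split="medium")
print(len(en_medium))   # 125 problems per language and level, per the card
print(en_medium[0])
```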
---
## 📊 Main Results
The leaderboard is continuously updated! See https://qwen-polymath.github.io/#leaderboard
---
## 📄 Citation
If you use **PolyMath** in your research, please cite us:
```bibtex
@misc{wang2025polymath,
title={PolyMath: Evaluating Mathematical Reasoning in Multilingual Contexts},
author={Yiming Wang and Pei Zhang and Jialong Tang and Haoran Wei and Baosong Yang and Rui Wang and Chenshu Sun and Feitong Sun and Jiran Zhang and Junxuan Wu and Qiqian Cang and Yichang Zhang and Fei Huang and Junyang Lin and Fei Huang and Jingren Zhou},
year={2025},
eprint={2504.18428},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.18428},
}
```
|