datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card
---|---|---|---|---|---|---|---|---|---|
gambitzpro/gambutz | gambitzpro | 2025-04-01T10:49:15Z | 61 | 0 | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | [] | 2025-04-01T10:49:15Z | 0 | ---
license: bigscience-bloom-rail-1.0
---
|
psg777/blacktape_filtered1 | psg777 | 2025-06-17T15:09:33Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-06-17T15:09:13Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101_follower",
"total_episodes": 58,
"total_frames": 39331,
"total_tasks": 1,
"total_videos": 116,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:58"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.images.birdseyeview": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.gripper": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
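For orientation, a minimal sketch (not part of the card) of how the `data_path` and `video_path` templates in `meta/info.json` above resolve for a given episode, assuming the `chunks_size` of 1000 shown there:

```python
# Sketch: resolving an episode's file paths from the meta/info.json
# templates above. With chunks_size = 1000, episode 57 lives in chunk 0.
info = {
    "chunks_size": 1000,
    "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
    "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
}

def episode_paths(episode_index: int, video_key: str) -> tuple:
    chunk = episode_index // info["chunks_size"]
    data = info["data_path"].format(episode_chunk=chunk, episode_index=episode_index)
    video = info["video_path"].format(
        episode_chunk=chunk, video_key=video_key, episode_index=episode_index
    )
    return data, video

print(episode_paths(57, "observation.images.gripper"))
# -> ('data/chunk-000/episode_000057.parquet',
#     'videos/chunk-000/observation.images.gripper/episode_000057.mp4')
```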
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
shaznin/task4_fix_generation_less | shaznin | 2025-03-11T21:46:37Z | 15 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T21:40:09Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 13854873
num_examples: 4889
- name: test
num_bytes: 3496057
num_examples: 1223
download_size: 7778864
dataset_size: 17350930
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
MisterMango23/m18_Hellcat_tank_dataset | MisterMango23 | 2025-05-13T17:21:57Z | 0 | 0 | [
"license:artistic-2.0",
"size_categories:n<1K",
"format:text",
"modality:image",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-05-13T17:21:32Z | 0 | ---
license: artistic-2.0
---
|
AI4Protein/DeepET_Topt | AI4Protein | 2025-04-12T09:53:12Z | 146 | 0 | [
"task_categories:text-classification",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2503.15438",
"region:us",
"protein",
"downstream task"
] | [
"text-classification"
] | 2025-01-30T09:08:58Z | 0 | ---
license: apache-2.0
task_categories:
- text-classification
tags:
- protein
- downstream task
---
# DeepET_Topt Dataset
- Description: protein optimal temperature (Topt).
- Number of labels: 1
- Problem Type: regression
- Columns:
- aa_seq: protein amino acid sequence
# GitHub
VenusFactory: A Unified Platform for Protein Engineering Data Retrieval and Language Model Fine-Tuning
https://github.com/ai4protein/VenusFactory
# Citation
Please cite our work if you use our dataset.
```
@article{tan2025venusfactory,
title={VenusFactory: A Unified Platform for Protein Engineering Data Retrieval and Language Model Fine-Tuning},
author={Tan, Yang and Liu, Chen and Gao, Jingyuan and Wu, Banghao and Li, Mingchen and Wang, Ruilin and Zhang, Lingrong and Yu, Huiqun and Fan, Guisheng and Hong, Liang and Zhou, Bingxin},
journal={arXiv preprint arXiv:2503.15438},
year={2025}
}
``` |
charleyong/so100_y_cube | charleyong | 2025-05-05T02:09:24Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot",
"so100",
"yc_demo"
] | [
"robotics"
] | 2025-05-05T02:08:36Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- yc_demo
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 50,
"total_frames": 29735,
"total_tasks": 1,
"total_videos": 50,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:50"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
DETAIM/parque-recreacional-datos | DETAIM | 2025-01-31T21:10:22Z | 37 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-31T21:10:16Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: preguntas
num_bytes: 6527
num_examples: 48
- name: tarifas
num_bytes: 645
num_examples: 8
- name: contacto
num_bytes: 326
num_examples: 4
- name: reservas
num_bytes: 293
num_examples: 3
download_size: 8118
dataset_size: 7791
configs:
- config_name: default
data_files:
- split: preguntas
path: data/preguntas-*
- split: tarifas
path: data/tarifas-*
- split: contacto
path: data/contacto-*
- split: reservas
path: data/reservas-*
---
|
LogicNet-Subnet/Aristole | LogicNet-Subnet | 2025-01-18T17:03:42Z | 19 | 1 | [
"task_categories:question-answering",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"question-answering"
] | 2025-01-18T16:50:32Z | 0 | ---
dataset_info:
features:
- name: Instruction
dtype: string
- name: Input
dtype: string
- name: Response
dtype: string
splits:
- name: train
num_bytes: 634349269
num_examples: 303496
download_size: 149717506
dataset_size: 634349269
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
task_categories:
- question-answering
---
# Dataset Name
This dataset contains instructions, inputs, and responses formatted for training language models. It is designed to help models understand and generate responses based on given instructions and inputs.
## Dataset Structure
The dataset is structured with the following features:
- **Instruction**: A string containing the task description or question.
- **Input**: A string providing additional context or options.
- **Response**: A string with the expected answer or completion. |
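The Instruction/Input/Response schema described above can be assembled into supervised fine-tuning pairs. A minimal sketch, using the card's field names; the exact prompt template is an assumption, not part of the dataset:

```python
# Sketch: turning one record of this dataset into a (prompt, target) pair.
# The field names match the card above; the record contents and the
# concatenation template are illustrative assumptions.
record = {
    "Instruction": "Solve the following logic puzzle.",
    "Input": "If all A are B and some B are C, are some A necessarily C?",
    "Response": "No; the A's need not overlap with the C's.",
}

def to_pair(rec: dict) -> tuple:
    prompt = rec["Instruction"]
    if rec["Input"]:  # Input may be empty for some records
        prompt += "\n\n" + rec["Input"]
    return prompt, rec["Response"]

prompt, target = to_pair(record)
```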
abdulmannan-01/rag_combined_dataset_orca_and_openscholar_messages_format | abdulmannan-01 | 2025-01-09T14:02:12Z | 17 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-09T14:00:10Z | 0 | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: dataset_source
dtype: string
splits:
- name: train
num_bytes: 1396796323
num_examples: 180135
download_size: 663503269
dataset_size: 1396796323
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
artao/x_dataset_158 | artao | 2024-12-03T01:01:55Z | 16 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2024-12-03T01:01:52Z | 0 | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** artao/x_dataset_158
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5G75HCGsuHpPdCfsKgPszqzMqV5cf2KyLmUcifb39g954AXk
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: English. Because collection is decentralized, the dataset may also contain tweets in other languages.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
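Since the card above recommends creating your own splits from the `datetime` field, a minimal sketch of a temporal train/test split; the rows here are made-up placeholders, and the cutoff date is an arbitrary assumption:

```python
from datetime import datetime, timezone

# Sketch: a temporal train/test split using the card's `datetime` field.
# The records and the cutoff are illustrative, not taken from the dataset.
rows = [
    {"text": "older tweet", "datetime": "2024-01-15T09:00:00Z"},
    {"text": "newer tweet", "datetime": "2024-11-30T18:30:00Z"},
]
cutoff = datetime(2024, 6, 1, tzinfo=timezone.utc)

def parse(ts: str) -> datetime:
    # fromisoformat accepts "+00:00" but not "Z" on older Pythons
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

train = [r for r in rows if parse(r["datetime"]) < cutoff]
test = [r for r in rows if parse(r["datetime"]) >= cutoff]
```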
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to the X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{artao2024datauniversex_dataset_158,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={artao},
year={2024},
url={https://huggingface.co/datasets/artao/x_dataset_158},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 5089
- **Date Range:** 2017-10-07T00:00:00Z to 2024-12-03T00:00:00Z
- **Last Updated:** 2024-12-03T01:01:54Z
### Data Distribution
- Tweets with hashtags: 9.61%
- Tweets without hashtags: 90.39%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Hashtag | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | #bitcoin | 123 | 25.15% |
| 2 | #btc | 79 | 16.16% |
| 3 | #crypto | 20 | 4.09% |
| 4 | #dogecoin | 16 | 3.27% |
| 5 | #blockchain | 12 | 2.45% |
| 6 | #thdtjsdn | 7 | 1.43% |
| 7 | #xrp | 7 | 1.43% |
| 8 | #entrepreneur | 4 | 0.82% |
| 9 | #swisstronik | 4 | 0.82% |
| 10 | #doge | 4 | 0.82% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2024-12-03T01:01:51Z | 4600 | 4600 |
| 2024-12-03T01:01:54Z | 489 | 5089 |
|
paulrichmond/astro_gen0 | paulrichmond | 2024-11-20T14:55:27Z | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-20T08:07:28Z | 0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: abstract
dtype: string
- name: prompt
dtype: string
- name: y_true
dtype: string
- name: comp_Llama-2-7b-hf
dtype: string
- name: preds_Llama-2-7b-hf
dtype: string
- name: comp_Llama-3.1-8B
dtype: string
- name: preds_Llama-3.1-8B
dtype: string
- name: comp_astrollama_4bit
dtype: string
- name: preds_astrollama_4bit
dtype: string
splits:
- name: test
num_bytes: 829787
num_examples: 50
download_size: 475338
dataset_size: 829787
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
Generated with the following parameters:
- max_new_tokens: 1024
- min_new_tokens: 1
- temperature: 0.8
- do_sample: true |
kothasuhas/qasper-eval | kothasuhas | 2024-10-17T21:35:34Z | 11 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-17T21:35:30Z | 0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: full_text
sequence:
- name: section_name
dtype: string
- name: paragraphs
list: string
- name: qas
sequence:
- name: question
dtype: string
- name: question_id
dtype: string
- name: nlp_background
dtype: string
- name: topic_background
dtype: string
- name: paper_read
dtype: string
- name: search_query
dtype: string
- name: question_writer
dtype: string
- name: answers
sequence:
- name: answer
struct:
- name: unanswerable
dtype: bool
- name: extractive_spans
sequence: string
- name: yes_no
dtype: bool
- name: free_form_answer
dtype: string
- name: evidence
sequence: string
- name: highlighted_evidence
sequence: string
- name: annotation_id
dtype: string
- name: worker_id
dtype: string
- name: figures_and_tables
sequence:
- name: caption
dtype: string
- name: file
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 57730105.362776026
num_examples: 1000
download_size: 27937783
dataset_size: 57730105.362776026
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Charlie911/tmmluplus_CKIP_segmented | Charlie911 | 2024-10-13T20:48:14Z | 1,435 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-11T18:59:36Z | 0 | ---
dataset_info:
- config_name: accounting
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 143040
num_examples: 191
- name: validation
num_bytes: 13850
num_examples: 21
- name: train
num_bytes: 4043
num_examples: 5
download_size: 102340
dataset_size: 160933
- config_name: administrative_law
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 166677
num_examples: 420
- name: validation
num_bytes: 20304
num_examples: 47
- name: train
num_bytes: 2664
num_examples: 5
download_size: 127869
dataset_size: 189645
- config_name: advance_chemistry
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 30931
num_examples: 123
- name: validation
num_bytes: 3905
num_examples: 14
- name: train
num_bytes: 1523
num_examples: 5
download_size: 39856
dataset_size: 36359
- config_name: agriculture
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 40368
num_examples: 151
- name: validation
num_bytes: 5711
num_examples: 17
- name: train
num_bytes: 1558
num_examples: 5
download_size: 46692
dataset_size: 47637
- config_name: anti_money_laundering
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 52448
num_examples: 134
- name: validation
num_bytes: 5678
num_examples: 15
- name: train
num_bytes: 2605
num_examples: 5
download_size: 54669
dataset_size: 60731
- config_name: auditing
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 267454
num_examples: 550
- name: validation
num_bytes: 29600
num_examples: 61
- name: train
num_bytes: 1820
num_examples: 5
download_size: 168630
dataset_size: 298874
- config_name: basic_medical_science
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 281510
num_examples: 954
- name: validation
num_bytes: 29866
num_examples: 106
- name: train
num_bytes: 1451
num_examples: 5
download_size: 219592
dataset_size: 312827
- config_name: business_management
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 37448
num_examples: 139
- name: validation
num_bytes: 5218
num_examples: 16
- name: train
num_bytes: 1098
num_examples: 5
download_size: 45758
dataset_size: 43764
- config_name: chinese_language_and_literature
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 135040
num_examples: 199
- name: validation
num_bytes: 15790
num_examples: 22
- name: train
num_bytes: 1960
num_examples: 5
download_size: 128450
dataset_size: 152790
- config_name: clinical_psychology
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 52728
num_examples: 125
- name: validation
num_bytes: 5650
num_examples: 14
- name: train
num_bytes: 1884
num_examples: 5
download_size: 56726
dataset_size: 60262
- config_name: computer_science
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 52940
num_examples: 174
- name: validation
num_bytes: 5108
num_examples: 19
- name: train
num_bytes: 1834
num_examples: 5
download_size: 54462
dataset_size: 59882
- config_name: culinary_skills
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 82263
num_examples: 292
- name: validation
num_bytes: 10690
num_examples: 33
- name: train
num_bytes: 1310
num_examples: 5
download_size: 80630
dataset_size: 94263
- config_name: dentistry
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 138558
num_examples: 399
- name: validation
num_bytes: 16953
num_examples: 44
- name: train
num_bytes: 1494
num_examples: 5
download_size: 120479
dataset_size: 157005
- config_name: economics
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 140568
num_examples: 393
- name: validation
num_bytes: 14283
num_examples: 44
- name: train
num_bytes: 1896
num_examples: 5
download_size: 104619
dataset_size: 156747
- config_name: education
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 42228
num_examples: 124
- name: validation
num_bytes: 6290
num_examples: 14
- name: train
num_bytes: 1659
num_examples: 5
download_size: 48847
dataset_size: 50177
- config_name: education_(profession_level)
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 194634
num_examples: 486
- name: validation
num_bytes: 24269
num_examples: 54
- name: train
num_bytes: 3278
num_examples: 5
download_size: 159782
dataset_size: 222181
- config_name: educational_psychology
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 67148
num_examples: 176
- name: validation
num_bytes: 8339
num_examples: 20
- name: train
num_bytes: 2251
num_examples: 5
download_size: 67025
dataset_size: 77738
- config_name: engineering_math
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 34125
num_examples: 103
- name: validation
num_bytes: 4824
num_examples: 11
- name: train
num_bytes: 1879
num_examples: 5
download_size: 40244
dataset_size: 40828
- config_name: finance_banking
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 54589
num_examples: 135
- name: validation
num_bytes: 7056
num_examples: 15
- name: train
num_bytes: 2098
num_examples: 5
download_size: 56632
dataset_size: 63743
- config_name: financial_analysis
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 109183
num_examples: 382
- name: validation
num_bytes: 12539
num_examples: 42
- name: train
num_bytes: 1261
num_examples: 5
download_size: 78306
dataset_size: 122983
- config_name: fire_science
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 32243
num_examples: 124
- name: validation
num_bytes: 3650
num_examples: 14
- name: train
num_bytes: 1543
num_examples: 5
download_size: 37588
dataset_size: 37436
- config_name: general_principles_of_law
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 46714
num_examples: 106
- name: validation
num_bytes: 4537
num_examples: 12
- name: train
num_bytes: 1646
num_examples: 5
download_size: 46577
dataset_size: 52897
- config_name: geography_of_taiwan
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 210561
num_examples: 768
- name: validation
num_bytes: 21786
num_examples: 85
- name: train
num_bytes: 1526
num_examples: 5
download_size: 164828
dataset_size: 233873
- config_name: human_behavior
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 124141
num_examples: 309
- name: validation
num_bytes: 13616
num_examples: 34
- name: train
num_bytes: 2044
num_examples: 5
download_size: 106900
dataset_size: 139801
- config_name: insurance_studies
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 331593
num_examples: 760
- name: validation
num_bytes: 38208
num_examples: 85
- name: train
num_bytes: 1879
num_examples: 5
download_size: 201610
dataset_size: 371680
- config_name: introduction_to_law
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 90751
num_examples: 237
- name: validation
num_bytes: 10037
num_examples: 26
- name: train
num_bytes: 4171
num_examples: 5
download_size: 84062
dataset_size: 104959
- config_name: jce_humanities
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 115000
num_examples: 90
- name: validation
num_bytes: 15653
num_examples: 10
- name: train
num_bytes: 7267
num_examples: 5
download_size: 104863
dataset_size: 137920
- config_name: junior_chemistry
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 49490
num_examples: 209
- name: validation
num_bytes: 5858
num_examples: 23
- name: train
num_bytes: 1367
num_examples: 5
download_size: 50749
dataset_size: 56715
- config_name: junior_chinese_exam
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 185937
num_examples: 175
- name: validation
num_bytes: 21539
num_examples: 20
- name: train
num_bytes: 8624
num_examples: 5
download_size: 178050
dataset_size: 216100
- config_name: junior_math_exam
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 46111
num_examples: 175
- name: validation
num_bytes: 4830
num_examples: 19
- name: train
num_bytes: 1397
num_examples: 5
download_size: 42644
dataset_size: 52338
- config_name: junior_science_exam
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 105465
num_examples: 213
- name: validation
num_bytes: 10317
num_examples: 24
- name: train
num_bytes: 2472
num_examples: 5
download_size: 91545
dataset_size: 118254
- config_name: junior_social_studies
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 94368
num_examples: 126
- name: validation
num_bytes: 11772
num_examples: 14
- name: train
num_bytes: 4339
num_examples: 5
download_size: 98766
dataset_size: 110479
- config_name: linear_algebra
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 15585
num_examples: 42
- name: validation
num_bytes: 1343
num_examples: 5
- name: train
num_bytes: 1854
num_examples: 5
download_size: 24724
dataset_size: 18782
- config_name: logic_reasoning
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 34227
num_examples: 139
- name: validation
num_bytes: 4688
num_examples: 16
- name: train
num_bytes: 1407
num_examples: 5
download_size: 35386
dataset_size: 40322
- config_name: macroeconomics
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 112483
num_examples: 411
- name: validation
num_bytes: 12103
num_examples: 46
- name: train
num_bytes: 1362
num_examples: 5
download_size: 86110
dataset_size: 125948
- config_name: management_accounting
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 100456
num_examples: 215
- name: validation
num_bytes: 11119
num_examples: 24
- name: train
num_bytes: 2060
num_examples: 5
download_size: 73174
dataset_size: 113635
- config_name: marketing_management
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 27688
num_examples: 93
- name: validation
num_bytes: 2978
num_examples: 10
- name: train
num_bytes: 1580
num_examples: 5
download_size: 36166
dataset_size: 32246
- config_name: mechanical
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 25281
num_examples: 118
- name: validation
num_bytes: 2838
num_examples: 13
- name: train
num_bytes: 745
num_examples: 5
download_size: 33533
dataset_size: 28864
- config_name: music
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 81863
num_examples: 278
- name: validation
num_bytes: 7865
num_examples: 31
- name: train
num_bytes: 1425
num_examples: 5
download_size: 76495
dataset_size: 91153
- config_name: national_protection
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 45554
num_examples: 211
- name: validation
num_bytes: 5467
num_examples: 24
- name: train
num_bytes: 957
num_examples: 5
download_size: 48171
dataset_size: 51978
- config_name: nautical_science
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 140168
num_examples: 551
- name: validation
num_bytes: 15832
num_examples: 61
- name: train
num_bytes: 841
num_examples: 5
download_size: 112779
dataset_size: 156841
- config_name: occupational_therapy_for_psychological_disorders
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 232474
num_examples: 543
- name: validation
num_bytes: 23737
num_examples: 60
- name: train
num_bytes: 1839
num_examples: 5
download_size: 337115
dataset_size: 258050
- config_name: official_document_management
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 52950
num_examples: 222
- name: validation
num_bytes: 5174
num_examples: 25
- name: train
num_bytes: 1469
num_examples: 5
download_size: 48239
dataset_size: 59593
- config_name: optometry
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 344118
num_examples: 920
- name: validation
num_bytes: 39905
num_examples: 102
- name: train
num_bytes: 1648
num_examples: 5
download_size: 229374
dataset_size: 385671
- config_name: organic_chemistry
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 26090
num_examples: 109
- name: validation
num_bytes: 2739
num_examples: 12
- name: train
num_bytes: 1141
num_examples: 5
download_size: 35418
dataset_size: 29970
- config_name: pharmacology
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 130347
num_examples: 577
- name: validation
num_bytes: 14847
num_examples: 64
- name: train
num_bytes: 709
num_examples: 5
download_size: 105656
dataset_size: 145903
- config_name: pharmacy
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 107534
num_examples: 391
- name: validation
num_bytes: 13147
num_examples: 44
- name: train
num_bytes: 1162
num_examples: 5
download_size: 88119
dataset_size: 121843
- config_name: physical_education
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 39726
num_examples: 179
- name: validation
num_bytes: 4635
num_examples: 20
- name: train
num_bytes: 980
num_examples: 5
download_size: 43601
dataset_size: 45341
- config_name: physics
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 30285
num_examples: 97
- name: validation
num_bytes: 3109
num_examples: 11
- name: train
num_bytes: 1123
num_examples: 5
download_size: 35044
dataset_size: 34517
- config_name: politic_science
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 242147
num_examples: 995
- name: validation
num_bytes: 28166
num_examples: 111
- name: train
num_bytes: 1271
num_examples: 5
download_size: 176571
dataset_size: 271584
- config_name: real_estate
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 36568
num_examples: 92
- name: validation
num_bytes: 3779
num_examples: 10
- name: train
num_bytes: 2595
num_examples: 5
download_size: 42406
dataset_size: 42942
- config_name: secondary_physics
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 45889
num_examples: 112
- name: validation
num_bytes: 5903
num_examples: 13
- name: train
num_bytes: 1630
num_examples: 5
download_size: 49062
dataset_size: 53422
- config_name: statistics_and_machine_learning
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 74744
num_examples: 224
- name: validation
num_bytes: 8680
num_examples: 25
- name: train
num_bytes: 2292
num_examples: 5
download_size: 74012
dataset_size: 85716
- config_name: taiwanese_hokkien
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 37305
num_examples: 129
- name: validation
num_bytes: 2371
num_examples: 14
- name: train
num_bytes: 2236
num_examples: 5
download_size: 44953
dataset_size: 41912
- config_name: taxation
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 158992
num_examples: 375
- name: validation
num_bytes: 19343
num_examples: 42
- name: train
num_bytes: 1992
num_examples: 5
download_size: 116039
dataset_size: 180327
- config_name: technical
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 68955
num_examples: 402
- name: validation
num_bytes: 8824
num_examples: 45
- name: train
num_bytes: 759
num_examples: 5
download_size: 68667
dataset_size: 78538
- config_name: three_principles_of_people
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 24551
num_examples: 139
- name: validation
num_bytes: 2708
num_examples: 16
- name: train
num_bytes: 943
num_examples: 5
download_size: 30560
dataset_size: 28202
- config_name: trade
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 165194
num_examples: 502
- name: validation
num_bytes: 18257
num_examples: 56
- name: train
num_bytes: 1520
num_examples: 5
download_size: 115008
dataset_size: 184971
- config_name: traditional_chinese_medicine_clinical_medicine
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 95060
num_examples: 278
- name: validation
num_bytes: 9934
num_examples: 31
- name: train
num_bytes: 1546
num_examples: 5
download_size: 87490
dataset_size: 106540
- config_name: trust_practice
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 151302
num_examples: 401
- name: validation
num_bytes: 17361
num_examples: 45
- name: train
num_bytes: 2636
num_examples: 5
download_size: 110992
dataset_size: 171299
- config_name: ttqav2
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 39149
num_examples: 113
- name: validation
num_bytes: 3163
num_examples: 13
- name: train
num_bytes: 2235
num_examples: 5
download_size: 43817
dataset_size: 44547
- config_name: tve_chinese_language
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 613018
num_examples: 483
- name: validation
num_bytes: 69563
num_examples: 54
- name: train
num_bytes: 6192
num_examples: 5
download_size: 485161
dataset_size: 688773
- config_name: tve_design
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 165982
num_examples: 480
- name: validation
num_bytes: 18318
num_examples: 53
- name: train
num_bytes: 2207
num_examples: 5
download_size: 137067
dataset_size: 186507
- config_name: tve_mathematics
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 40585
num_examples: 150
- name: validation
num_bytes: 4025
num_examples: 17
- name: train
num_bytes: 1225
num_examples: 5
download_size: 39996
dataset_size: 45835
- config_name: tve_natural_sciences
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 127145
num_examples: 424
- name: validation
num_bytes: 15929
num_examples: 47
- name: train
num_bytes: 2122
num_examples: 5
download_size: 114330
dataset_size: 145196
- config_name: veterinary_pathology
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 72833
num_examples: 283
- name: validation
num_bytes: 8634
num_examples: 32
- name: train
num_bytes: 1511
num_examples: 5
download_size: 66314
dataset_size: 82978
- config_name: veterinary_pharmacology
features:
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 109812
num_examples: 540
- name: validation
num_bytes: 11697
num_examples: 60
- name: train
num_bytes: 1054
num_examples: 5
download_size: 89864
dataset_size: 122563
configs:
- config_name: accounting
data_files:
- split: test
path: accounting/test-*
- split: validation
path: accounting/validation-*
- split: train
path: accounting/train-*
- config_name: administrative_law
data_files:
- split: test
path: administrative_law/test-*
- split: validation
path: administrative_law/validation-*
- split: train
path: administrative_law/train-*
- config_name: advance_chemistry
data_files:
- split: test
path: advance_chemistry/test-*
- split: validation
path: advance_chemistry/validation-*
- split: train
path: advance_chemistry/train-*
- config_name: agriculture
data_files:
- split: test
path: agriculture/test-*
- split: validation
path: agriculture/validation-*
- split: train
path: agriculture/train-*
- config_name: anti_money_laundering
data_files:
- split: test
path: anti_money_laundering/test-*
- split: validation
path: anti_money_laundering/validation-*
- split: train
path: anti_money_laundering/train-*
- config_name: auditing
data_files:
- split: test
path: auditing/test-*
- split: validation
path: auditing/validation-*
- split: train
path: auditing/train-*
- config_name: basic_medical_science
data_files:
- split: test
path: basic_medical_science/test-*
- split: validation
path: basic_medical_science/validation-*
- split: train
path: basic_medical_science/train-*
- config_name: business_management
data_files:
- split: test
path: business_management/test-*
- split: validation
path: business_management/validation-*
- split: train
path: business_management/train-*
- config_name: chinese_language_and_literature
data_files:
- split: test
path: chinese_language_and_literature/test-*
- split: validation
path: chinese_language_and_literature/validation-*
- split: train
path: chinese_language_and_literature/train-*
- config_name: clinical_psychology
data_files:
- split: test
path: clinical_psychology/test-*
- split: validation
path: clinical_psychology/validation-*
- split: train
path: clinical_psychology/train-*
- config_name: computer_science
data_files:
- split: test
path: computer_science/test-*
- split: validation
path: computer_science/validation-*
- split: train
path: computer_science/train-*
- config_name: culinary_skills
data_files:
- split: test
path: culinary_skills/test-*
- split: validation
path: culinary_skills/validation-*
- split: train
path: culinary_skills/train-*
- config_name: dentistry
data_files:
- split: test
path: dentistry/test-*
- split: validation
path: dentistry/validation-*
- split: train
path: dentistry/train-*
- config_name: economics
data_files:
- split: test
path: economics/test-*
- split: validation
path: economics/validation-*
- split: train
path: economics/train-*
- config_name: education
data_files:
- split: test
path: education/test-*
- split: validation
path: education/validation-*
- split: train
path: education/train-*
- config_name: education_(profession_level)
data_files:
- split: test
path: education_(profession_level)/test-*
- split: validation
path: education_(profession_level)/validation-*
- split: train
path: education_(profession_level)/train-*
- config_name: educational_psychology
data_files:
- split: test
path: educational_psychology/test-*
- split: validation
path: educational_psychology/validation-*
- split: train
path: educational_psychology/train-*
- config_name: engineering_math
data_files:
- split: test
path: engineering_math/test-*
- split: validation
path: engineering_math/validation-*
- split: train
path: engineering_math/train-*
- config_name: finance_banking
data_files:
- split: test
path: finance_banking/test-*
- split: validation
path: finance_banking/validation-*
- split: train
path: finance_banking/train-*
- config_name: financial_analysis
data_files:
- split: test
path: financial_analysis/test-*
- split: validation
path: financial_analysis/validation-*
- split: train
path: financial_analysis/train-*
- config_name: fire_science
data_files:
- split: test
path: fire_science/test-*
- split: validation
path: fire_science/validation-*
- split: train
path: fire_science/train-*
- config_name: general_principles_of_law
data_files:
- split: test
path: general_principles_of_law/test-*
- split: validation
path: general_principles_of_law/validation-*
- split: train
path: general_principles_of_law/train-*
- config_name: geography_of_taiwan
data_files:
- split: test
path: geography_of_taiwan/test-*
- split: validation
path: geography_of_taiwan/validation-*
- split: train
path: geography_of_taiwan/train-*
- config_name: human_behavior
data_files:
- split: test
path: human_behavior/test-*
- split: validation
path: human_behavior/validation-*
- split: train
path: human_behavior/train-*
- config_name: insurance_studies
data_files:
- split: test
path: insurance_studies/test-*
- split: validation
path: insurance_studies/validation-*
- split: train
path: insurance_studies/train-*
- config_name: introduction_to_law
data_files:
- split: test
path: introduction_to_law/test-*
- split: validation
path: introduction_to_law/validation-*
- split: train
path: introduction_to_law/train-*
- config_name: jce_humanities
data_files:
- split: test
path: jce_humanities/test-*
- split: validation
path: jce_humanities/validation-*
- split: train
path: jce_humanities/train-*
- config_name: junior_chemistry
data_files:
- split: test
path: junior_chemistry/test-*
- split: validation
path: junior_chemistry/validation-*
- split: train
path: junior_chemistry/train-*
- config_name: junior_chinese_exam
data_files:
- split: test
path: junior_chinese_exam/test-*
- split: validation
path: junior_chinese_exam/validation-*
- split: train
path: junior_chinese_exam/train-*
- config_name: junior_math_exam
data_files:
- split: test
path: junior_math_exam/test-*
- split: validation
path: junior_math_exam/validation-*
- split: train
path: junior_math_exam/train-*
- config_name: junior_science_exam
data_files:
- split: test
path: junior_science_exam/test-*
- split: validation
path: junior_science_exam/validation-*
- split: train
path: junior_science_exam/train-*
- config_name: junior_social_studies
data_files:
- split: test
path: junior_social_studies/test-*
- split: validation
path: junior_social_studies/validation-*
- split: train
path: junior_social_studies/train-*
- config_name: linear_algebra
data_files:
- split: test
path: linear_algebra/test-*
- split: validation
path: linear_algebra/validation-*
- split: train
path: linear_algebra/train-*
- config_name: logic_reasoning
data_files:
- split: test
path: logic_reasoning/test-*
- split: validation
path: logic_reasoning/validation-*
- split: train
path: logic_reasoning/train-*
- config_name: macroeconomics
data_files:
- split: test
path: macroeconomics/test-*
- split: validation
path: macroeconomics/validation-*
- split: train
path: macroeconomics/train-*
- config_name: management_accounting
data_files:
- split: test
path: management_accounting/test-*
- split: validation
path: management_accounting/validation-*
- split: train
path: management_accounting/train-*
- config_name: marketing_management
data_files:
- split: test
path: marketing_management/test-*
- split: validation
path: marketing_management/validation-*
- split: train
path: marketing_management/train-*
- config_name: mechanical
data_files:
- split: test
path: mechanical/test-*
- split: validation
path: mechanical/validation-*
- split: train
path: mechanical/train-*
- config_name: music
data_files:
- split: test
path: music/test-*
- split: validation
path: music/validation-*
- split: train
path: music/train-*
- config_name: national_protection
data_files:
- split: test
path: national_protection/test-*
- split: validation
path: national_protection/validation-*
- split: train
path: national_protection/train-*
- config_name: nautical_science
data_files:
- split: test
path: nautical_science/test-*
- split: validation
path: nautical_science/validation-*
- split: train
path: nautical_science/train-*
- config_name: occupational_therapy_for_psychological_disorders
data_files:
- split: test
path: occupational_therapy_for_psychological_disorders/test-*
- split: validation
path: occupational_therapy_for_psychological_disorders/validation-*
- split: train
path: occupational_therapy_for_psychological_disorders/train-*
- config_name: official_document_management
data_files:
- split: test
path: official_document_management/test-*
- split: validation
path: official_document_management/validation-*
- split: train
path: official_document_management/train-*
- config_name: optometry
data_files:
- split: test
path: optometry/test-*
- split: validation
path: optometry/validation-*
- split: train
path: optometry/train-*
- config_name: organic_chemistry
data_files:
- split: test
path: organic_chemistry/test-*
- split: validation
path: organic_chemistry/validation-*
- split: train
path: organic_chemistry/train-*
- config_name: pharmacology
data_files:
- split: test
path: pharmacology/test-*
- split: validation
path: pharmacology/validation-*
- split: train
path: pharmacology/train-*
- config_name: pharmacy
data_files:
- split: test
path: pharmacy/test-*
- split: validation
path: pharmacy/validation-*
- split: train
path: pharmacy/train-*
- config_name: physical_education
data_files:
- split: test
path: physical_education/test-*
- split: validation
path: physical_education/validation-*
- split: train
path: physical_education/train-*
- config_name: physics
data_files:
- split: test
path: physics/test-*
- split: validation
path: physics/validation-*
- split: train
path: physics/train-*
- config_name: politic_science
data_files:
- split: test
path: politic_science/test-*
- split: validation
path: politic_science/validation-*
- split: train
path: politic_science/train-*
- config_name: real_estate
data_files:
- split: test
path: real_estate/test-*
- split: validation
path: real_estate/validation-*
- split: train
path: real_estate/train-*
- config_name: secondary_physics
data_files:
- split: test
path: secondary_physics/test-*
- split: validation
path: secondary_physics/validation-*
- split: train
path: secondary_physics/train-*
- config_name: statistics_and_machine_learning
data_files:
- split: test
path: statistics_and_machine_learning/test-*
- split: validation
path: statistics_and_machine_learning/validation-*
- split: train
path: statistics_and_machine_learning/train-*
- config_name: taiwanese_hokkien
data_files:
- split: test
path: taiwanese_hokkien/test-*
- split: validation
path: taiwanese_hokkien/validation-*
- split: train
path: taiwanese_hokkien/train-*
- config_name: taxation
data_files:
- split: test
path: taxation/test-*
- split: validation
path: taxation/validation-*
- split: train
path: taxation/train-*
- config_name: technical
data_files:
- split: test
path: technical/test-*
- split: validation
path: technical/validation-*
- split: train
path: technical/train-*
- config_name: three_principles_of_people
data_files:
- split: test
path: three_principles_of_people/test-*
- split: validation
path: three_principles_of_people/validation-*
- split: train
path: three_principles_of_people/train-*
- config_name: trade
data_files:
- split: test
path: trade/test-*
- split: validation
path: trade/validation-*
- split: train
path: trade/train-*
- config_name: traditional_chinese_medicine_clinical_medicine
data_files:
- split: test
path: traditional_chinese_medicine_clinical_medicine/test-*
- split: validation
path: traditional_chinese_medicine_clinical_medicine/validation-*
- split: train
path: traditional_chinese_medicine_clinical_medicine/train-*
- config_name: trust_practice
data_files:
- split: test
path: trust_practice/test-*
- split: validation
path: trust_practice/validation-*
- split: train
path: trust_practice/train-*
- config_name: ttqav2
data_files:
- split: test
path: ttqav2/test-*
- split: validation
path: ttqav2/validation-*
- split: train
path: ttqav2/train-*
- config_name: tve_chinese_language
data_files:
- split: test
path: tve_chinese_language/test-*
- split: validation
path: tve_chinese_language/validation-*
- split: train
path: tve_chinese_language/train-*
- config_name: tve_design
data_files:
- split: test
path: tve_design/test-*
- split: validation
path: tve_design/validation-*
- split: train
path: tve_design/train-*
- config_name: tve_mathematics
data_files:
- split: test
path: tve_mathematics/test-*
- split: validation
path: tve_mathematics/validation-*
- split: train
path: tve_mathematics/train-*
- config_name: tve_natural_sciences
data_files:
- split: test
path: tve_natural_sciences/test-*
- split: validation
path: tve_natural_sciences/validation-*
- split: train
path: tve_natural_sciences/train-*
- config_name: veterinary_pathology
data_files:
- split: test
path: veterinary_pathology/test-*
- split: validation
path: veterinary_pathology/validation-*
- split: train
path: veterinary_pathology/train-*
- config_name: veterinary_pharmacology
data_files:
- split: test
path: veterinary_pharmacology/test-*
- split: validation
path: veterinary_pharmacology/validation-*
- split: train
path: veterinary_pharmacology/train-*
---
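The per-config metadata above follows a fixed pattern: `dataset_size` is the sum of the per-split `num_bytes`, while `download_size` measures the Parquet files on disk and can therefore exceed `dataset_size` for very small configs (as with `linear_algebra`). A quick sanity check, using figures copied from the `linear_algebra` entry above:

```python
# Split byte counts copied from the linear_algebra config in the card metadata.
splits = {"test": 15585, "validation": 1343, "train": 1854}

# dataset_size is the sum of the per-split num_bytes values.
dataset_size = sum(splits.values())
print(dataset_size)  # 18782, matching the card's dataset_size field

# download_size is the size of the compressed Parquet shards on disk;
# for tiny configs it can be larger than the in-memory dataset_size.
download_size = 24724
print(download_size > dataset_size)  # True
```

The same identity holds for the other configs (e.g. `junior_chemistry`: 49490 + 5858 + 1367 = 56715).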
|
sagar27kumar/ECG-XRAY-dataset | sagar27kumar | 2025-01-25T00:40:01Z | 72 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-25T00:29:38Z | 0 | ---
dataset_info:
features:
- name: image
dtype: string
- name: label
dtype: int64
- name: pixel_values
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1646447245
num_examples: 928
- name: test
num_bytes: 794835448
num_examples: 448
- name: validation
num_bytes: 837423369
num_examples: 472
download_size: 647076823
dataset_size: 3278706062
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
Yuyeong/rw_cora_nbw_50_cycle | Yuyeong | 2025-04-22T01:25:43Z | 16 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-22T01:22:25Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
'5': '5'
'6': '6'
- name: group_idx
dtype: int64
- name: node_idx
dtype: int64
splits:
- name: train_seed0
num_bytes: 184339764.0
num_examples: 239600
- name: validation_seed0
num_bytes: 230040.02268781303
num_examples: 299
- name: test_seed0
num_bytes: 230809.38731218697
num_examples: 300
- name: train_seed1
num_bytes: 184339764.0
num_examples: 239600
- name: validation_seed1
num_bytes: 230040.02268781303
num_examples: 299
- name: test_seed1
num_bytes: 230809.38731218697
num_examples: 300
- name: train_seed2
num_bytes: 184339764.0
num_examples: 239600
- name: validation_seed2
num_bytes: 230040.02268781303
num_examples: 299
- name: test_seed2
num_bytes: 230809.38731218697
num_examples: 300
- name: train_seed3
num_bytes: 184339764.0
num_examples: 239600
- name: validation_seed3
num_bytes: 230040.02268781303
num_examples: 299
- name: test_seed3
num_bytes: 230809.38731218697
num_examples: 300
- name: train_seed4
num_bytes: 184339764.0
num_examples: 239600
- name: validation_seed4
num_bytes: 230040.02268781303
num_examples: 299
- name: test_seed4
num_bytes: 230809.38731218697
num_examples: 300
- name: train_seed5
num_bytes: 184339764.0
num_examples: 239600
- name: validation_seed5
num_bytes: 230040.02268781303
num_examples: 299
- name: test_seed5
num_bytes: 230809.38731218697
num_examples: 300
- name: train_seed6
num_bytes: 184339764.0
num_examples: 239600
- name: validation_seed6
num_bytes: 230040.02268781303
num_examples: 299
- name: test_seed6
num_bytes: 230809.38731218697
num_examples: 300
- name: train_seed7
num_bytes: 184339764.0
num_examples: 239600
- name: validation_seed7
num_bytes: 230040.02268781303
num_examples: 299
- name: test_seed7
num_bytes: 230809.38731218697
num_examples: 300
- name: train_seed8
num_bytes: 184339764.0
num_examples: 239600
- name: validation_seed8
num_bytes: 230040.02268781303
num_examples: 299
- name: test_seed8
num_bytes: 230809.38731218697
num_examples: 300
- name: train_seed9
num_bytes: 184339764.0
num_examples: 239600
- name: validation_seed9
num_bytes: 230040.02268781303
num_examples: 299
- name: test_seed9
num_bytes: 230809.38731218697
num_examples: 300
download_size: 979163390
dataset_size: 1848006134.1000001
configs:
- config_name: default
data_files:
- split: train_seed0
path: data/train_seed0-*
- split: validation_seed0
path: data/validation_seed0-*
- split: test_seed0
path: data/test_seed0-*
- split: train_seed1
path: data/train_seed1-*
- split: validation_seed1
path: data/validation_seed1-*
- split: test_seed1
path: data/test_seed1-*
- split: train_seed2
path: data/train_seed2-*
- split: validation_seed2
path: data/validation_seed2-*
- split: test_seed2
path: data/test_seed2-*
- split: train_seed3
path: data/train_seed3-*
- split: validation_seed3
path: data/validation_seed3-*
- split: test_seed3
path: data/test_seed3-*
- split: train_seed4
path: data/train_seed4-*
- split: validation_seed4
path: data/validation_seed4-*
- split: test_seed4
path: data/test_seed4-*
- split: train_seed5
path: data/train_seed5-*
- split: validation_seed5
path: data/validation_seed5-*
- split: test_seed5
path: data/test_seed5-*
- split: train_seed6
path: data/train_seed6-*
- split: validation_seed6
path: data/validation_seed6-*
- split: test_seed6
path: data/test_seed6-*
- split: train_seed7
path: data/train_seed7-*
- split: validation_seed7
path: data/validation_seed7-*
- split: test_seed7
path: data/test_seed7-*
- split: train_seed8
path: data/train_seed8-*
- split: validation_seed8
path: data/validation_seed8-*
- split: test_seed8
path: data/test_seed8-*
- split: train_seed9
path: data/train_seed9-*
- split: validation_seed9
path: data/validation_seed9-*
- split: test_seed9
path: data/test_seed9-*
---
|
qfq/eidata_newscript_dev20241102_221115_iter2_5 | qfq | 2024-11-02T23:18:57Z | 16 | 0 | [
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-02T23:18:56Z | 0 | ---
dataset_info:
features:
- name: orig_problem
dtype: string
- name: orig_solution
dtype: string
- name: orig_answer
dtype: string
- name: thinking_trajectory
sequence: string
- name: golden_thinking_trajectory
sequence: string
- name: old_trajectory
dtype: 'null'
- name: labeled_trajectory
dtype: 'null'
splits:
- name: train
num_bytes: 0.0
num_examples: 0
download_size: 2364
dataset_size: 0.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
alea-institute/kl3m-filter-data-dotgov-www.acf.hhs.gov | alea-institute | 2025-02-03T21:20:35Z | 16 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-03T21:20:27Z | 0 | ---
dataset_info:
features:
- name: identifier
dtype: string
- name: dataset
dtype: string
- name: mime_type
dtype: string
- name: score
dtype: float64
- name: tokens
sequence: int64
splits:
- name: train
num_bytes: 182920620
num_examples: 2405
download_size: 33259403
dataset_size: 182920620
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ziyu3141/rf_newtrain_7_38 | ziyu3141 | 2025-02-07T10:51:15Z | 13 | 0 | [
"size_categories:n<1K",
"modality:tabular",
"modality:text",
"region:us"
] | [] | 2025-02-07T10:51:11Z | 0 | ---
dataset_info:
features:
- name: Filename
dtype: string
- name: Aesthetics score
dtype: float64
- name: Artifact score
dtype: float64
- name: Misalignment score
dtype: float64
- name: Overall score
dtype: float64
- name: Artifact heatmap
sequence:
sequence:
sequence: int64
- name: Misalignment heatmap
sequence:
sequence:
sequence: int64
- name: Misalignment token label
dtype: string
- name: is_uneven
dtype: bool
- name: preferred_image
dtype: binary
- name: unpreferred_image
dtype: binary
- name: revised_image
dtype: binary
- name: revised_id
dtype: string
- name: unrevised_id
dtype: string
- name: is_preferred
dtype: bool
splits:
- name: train
num_bytes: 675743779
num_examples: 100
download_size: 43836364
dataset_size: 675743779
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Archistrax/answers | Archistrax | 2025-05-11T14:36:51Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-11T14:36:49Z | 0 | ---
dataset_info:
features:
- name: abstraction
dtype: string
- name: social
dtype: string
- name: disrupted
dtype: string
- name: composition
dtype: string
- name: object
dtype: string
splits:
- name: train
num_bytes: 285
num_examples: 3
download_size: 2492
dataset_size: 285
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
colabfit/mlearn_Mo_train | colabfit | 2025-04-23T18:14:17Z | 20 | 0 | [
"license:bsd-3-clause",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"molecular dynamics",
"mlip",
"interatomic potential"
] | [] | 2025-04-01T20:39:46Z | 0 | ---
configs:
- config_name: default
data_files: "main/*.parquet"
license: bsd-3-clause
tags:
- molecular dynamics
- mlip
- interatomic potential
pretty_name: mlearn Mo train
---
# Dataset
mlearn Mo train
### Description
A comprehensive DFT data set was generated for six elements: Li, Mo, Ni, Cu, Si, and Ge. These elements were chosen to span a variety of chemistries (main group metal, transition metal, and semiconductor), crystal structures (bcc, fcc, and diamond), and bonding types (metallic and covalent). This dataset comprises only the Mo configurations.
<br>Additional details stored in dataset columns prepended with "dataset_".
### Dataset authors
Yunxing Zuo, Chi Chen, Xiangguo Li, Zhi Deng, Yiming Chen, Jörg Behler, Gábor Csányi, Alexander V. Shapeev, Aidan P. Thompson, Mitchell A. Wood, Shyue Ping Ong
### Publication
https://doi.org/10.1021/acs.jpca.9b08723
### Original data link
https://github.com/materialsvirtuallab/mlearn/tree/master/data
### License
BSD-3-Clause
### Number of unique molecular configurations
194
### Number of atoms
10087
### Elements included
Mo
### Properties included
energy, atomic forces, cauchy stress
### Cite this dataset
Zuo, Y., Chen, C., Li, X., Deng, Z., Chen, Y., Behler, J., Csányi, G., Shapeev, A. V., Thompson, A. P., Wood, M. A., and Ong, S. P. _mlearn Mo train_. ColabFit, 2023. https://doi.org/10.60732/3827e5e1 |
EuskadIA/Erregistro_Zibila_Hiztegia | EuskadIA | 2025-03-29T22:09:08Z | 17 | 0 | [
"task_categories:translation",
"language:eu",
"language:spa",
"language:fra",
"language:eng",
"region:us"
] | [
"translation"
] | 2025-01-14T17:52:42Z | 0 | ---
language:
- eu
- spa
- fra
- eng
task_categories:
- translation
---
> [!NOTE]
> Dataset origin: https://www.ivap.euskadi.eus/webivap00-a5app3/fr/ac36aEuskaltermWar/publiko/erakutsiBankuEdukia
|
TAUR-dev/qwen2.5_1.5B__2d_retries_eval_fixed__working__first_response_correct__training | TAUR-dev | 2025-05-28T00:31:06Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-28T00:31:04Z | 0 | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 1570086.0
num_examples: 1865
download_size: 546207
dataset_size: 1570086.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
haorandai/New_Mammal_1Samples_epsilon_0.1_alpha_0.005_With1Constraints | haorandai | 2024-10-03T02:56:08Z | 19 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-03T02:15:13Z | 0 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 28673.0
num_examples: 2
download_size: 30239
dataset_size: 28673.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
anthonyj/numina-deepseek-r1-qwen-7b | anthonyj | 2025-01-28T23:36:18Z | 53 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif"
] | [] | 2025-01-28T23:36:13Z | 0 | ---
size_categories: n<1K
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: generation
dtype: string
- name: distilabel_metadata
struct:
- name: raw_input_text_generation_0
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_output_text_generation_0
dtype: string
- name: statistics_text_generation_0
struct:
- name: input_tokens
dtype: int64
- name: output_tokens
dtype: int64
- name: model_name
dtype: string
splits:
- name: train
num_bytes: 796104
num_examples: 40
download_size: 262284
dataset_size: 796104
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for numina-deepseek-r1-qwen-7b
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/anthonyj/numina-deepseek-r1-qwen-7b/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/anthonyj/numina-deepseek-r1-qwen-7b/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"distilabel_metadata": {
"raw_input_text_generation_0": [
{
"content": "You will be given a problem. Please reason step by step, and put your final answer within \boxed{}:\nWhat is the coefficient of $x^2y^6$ in the expansion of $\\left(\\frac{3}{5}x-\\frac{y}{2}\\right)^8$? Express your answer as a common fraction.",
"role": "user"
}
],
"raw_output_text_generation_0": "\u003cthink\u003e\nOkay, let me try to figure out the coefficient of \\(x^2 y^6\\) in the expansion of \\(\\left(\\frac{3}{5}x - \\frac{y}{2}\\right)^8\\). Hmm, I remember that when expanding a binomial like \\((a + b)^n\\), we can use the binomial theorem. The binomial theorem states that each term in the expansion is of the form \\(\\binom{n}{k} a^{n - k} b^{k}\\), where \\(k\\) ranges from 0 to \\(n\\).\n\nSo, in this problem, our binomial is \\(\\left(\\frac{3}{5}x - \\frac{y}{2}\\right)\\), and we\u0027re raising it to the 8th power. Let me denote \\(a = \\frac{3}{5}x\\) and \\(b = -\\frac{y}{2}\\). Then, the expansion will have terms of the form \\(\\binom{8}{k} \\left(\\frac{3}{5}x\\right)^{8 - k} \\left(-\\frac{y}{2}\\right)^k\\).\n\nWe need the term where the power of \\(x\\) is 2 and the power of \\(y\\) is 6. Since \\(x\\) is raised to the power of \\(8 - k\\) and \\(y\\) is raised to the power of \\(k\\), we can set up the equation \\(8 - k = 2\\). Solving for \\(k\\), we get \\(k = 6\\). So, the term we\u0027re interested in corresponds to \\(k = 6\\).\n\nNow, let\u0027s compute the coefficient for \\(k = 6\\). The binomial coefficient is \\(\\binom{8}{6}\\). I recall that \\(\\binom{n}{k} = \\binom{n}{n - k}\\), so \\(\\binom{8}{6} = \\binom{8}{2}\\). Calculating \\(\\binom{8}{2}\\), which is \\(\\frac{8 \\times 7}{2 \\times 1} = 28\\).\n\nNext, we need to compute \\(\\left(\\frac{3}{5}\\right)^{8 - 6} = \\left(\\frac{3}{5}\\right)^2\\). Let me calculate that: \\(\\left(\\frac{3}{5}\\right)^2 = \\frac{9}{25}\\).\n\nThen, we have \\(\\left(-\\frac{1}{2}\\right)^6\\). Hmm, raising a negative number to an even power gives a positive result. 
So, \\(\\left(-\\frac{1}{2}\\right)^6 = \\left(\\frac{1}{2}\\right)^6 = \\frac{1}{64}\\).\n\nNow, multiplying all these together: the coefficient is \\(\\binom{8}{6} \\times \\left(\\frac{3}{5}\\right)^2 \\times \\left(-\\frac{1}{2}\\right)^6 = 28 \\times \\frac{9}{25} \\times \\frac{1}{64}\\).\n\nLet me compute this step by step. First, multiply 28 and \\(\\frac{9}{25}\\). So, \\(28 \\times \\frac{9}{25}\\). Let\u0027s see, 28 divided by 25 is \\(\\frac{28}{25}\\), and then multiplied by 9 gives \\(\\frac{252}{25}\\). \n\nThen, we multiply this result by \\(\\frac{1}{64}\\): \\(\\frac{252}{25} \\times \\frac{1}{64}\\). Multiplying the numerators: 252 \u00d7 1 = 252, and denominators: 25 \u00d7 64 = 1600. So, the coefficient is \\(\\frac{252}{1600}\\).\n\nWait, can we simplify this fraction? Let\u0027s see. Both 252 and 1600 are divisible by 4. Dividing numerator and denominator by 4: 252 \u00f7 4 = 63, and 1600 \u00f7 4 = 400. So, now we have \\(\\frac{63}{400}\\). \n\nIs 63 and 400 divisible by any common number? Well, 63 is 7 \u00d7 9, and 400 is 16 \u00d7 25. They don\u0027t share any common factors besides 1, so \\(\\frac{63}{400}\\) is the simplified fraction.\n\nBut hold on, let me double-check my calculations to make sure I didn\u0027t make a mistake. So, binomial coefficient was 28, which is correct because \\(\\binom{8}{6}\\) is 28. Then, \\(\\left(\\frac{3}{5}\\right)^2 is indeed 9/25, and \\(\\left(-\\frac{1}{2}\\right)^6 is 1/64. Multiplying 28, 9/25, and 1/64:\n\n28 \u00d7 9 = 252, and 25 \u00d7 64 = 1600. So that\u0027s 252/1600. Then simplifying, dividing numerator and denominator by 4 gives 63/400. That seems correct.\n\nBut wait, I just realized that the term is \\(\\left(-\\frac{y}{2}\\right)^6\\), which is positive because 6 is even, so that\u0027s correct. So, the coefficient is positive 63/400. Hmm, that seems right.\n\nLet me think if there\u0027s another way to approach this. Maybe using the multinomial theorem? 
But no, in this case, since it\u0027s a binomial, the binomial theorem should suffice. Yeah, I think my approach is correct.\n\nAlternatively, I can think about the term in the expansion. The general term is:\n\n\\[\n\\binom{8}{k} \\left(\\frac{3}{5}x\\right)^{8 - k} \\left(-\\frac{y}{2}\\right)^k\n\\]\n\nWe found that when \\(k = 6\\), the term is:\n\n\\[\n\\binom{8}{6} \\left(\\frac{3}{5}\\right)^{2} \\left(-\\frac{1}{2}\\right)^6 x^2 y^6\n\\]\n\nWhich is exactly what we computed. So, yes, the coefficient is 28 \u00d7 (9/25) \u00d7 (1/64) = 63/400.\n\nWait, but let me make sure about the signs. Since the term is \\(\\left(-\\frac{y}{2}\\right)^6\\), which is positive because any negative number raised to an even power is positive. So, the coefficient is indeed positive 63/400.\n\nJust to be thorough, let me compute 28 \u00d7 9 first. 28 \u00d7 9: 20 \u00d7 9 = 180, 8 \u00d7 9 = 72, so 180 + 72 = 252. Then, 25 \u00d7 64: 25 \u00d7 60 = 1500, 25 \u00d7 4 = 100, so 1500 + 100 = 1600. So, 252/1600 simplifies to 63/400.\n\nAlright, so I think I\u0027m confident that 63/400 is the correct coefficient. I don\u0027t see any errors in my reasoning.\n\n**Final Answer**\nThe coefficient of \\(x^2 y^6\\) is \\boxed{\\dfrac{63}{400}}.\n\u003c/think\u003e\n\nTo find the coefficient of \\(x^2 y^6\\) in the expansion of \\(\\left(\\frac{3}{5}x - \\frac{y}{2}\\right)^8\\), we use the binomial theorem. The binomial theorem states that each term in the expansion is of the form \\(\\binom{8}{k} \\left(\\frac{3}{5}x\\right)^{8 - k} \\left(-\\frac{y}{2}\\right)^k\\).\n\nWe need the term where the power of \\(x\\) is 2 and the power of \\(y\\) is 6. This corresponds to \\(k = 6\\) because \\(8 - k = 2\\).\n\n1. Calculate the binomial coefficient \\(\\binom{8}{6}\\):\n \\[\n \\binom{8}{6} = \\binom{8}{2} = \\frac{8 \\times 7}{2 \\times 1} = 28\n \\]\n\n2. Compute \\(\\left(\\frac{3}{5}\\right)^2\\):\n \\[\n \\left(\\frac{3}{5}\\right)^2 = \\frac{9}{25}\n \\]\n\n3. 
Compute \\(\\left(-\\frac{1}{2}\\right)^6\\):\n \\[\n \\left(-\\frac{1}{2}\\right)^6 = \\left(\\frac{1}{2}\\right)^6 = \\frac{1}{64}\n \\]\n\n4. Multiply these results together:\n \\[\n 28 \\times \\frac{9}{25} \\times \\frac{1}{64} = 28 \\times \\frac{9}{25} \\times \\frac{1}{64} = \\frac{252}{1600}\n \\]\n\n5. Simplify the fraction \\(\\frac{252}{1600}\\) by dividing both the numerator and the denominator by 4:\n \\[\n \\frac{252}{1600} = \\frac{63}{400}\n \\]\n\nThus, the coefficient of \\(x^2 y^6\\) is \\(\\boxed{\\dfrac{63}{400}}\\).",
"statistics_text_generation_0": {
"input_tokens": 77,
"output_tokens": 2199
}
},
"generation": "\u003cthink\u003e\nOkay, let me try to figure out the coefficient of \\(x^2 y^6\\) in the expansion of \\(\\left(\\frac{3}{5}x - \\frac{y}{2}\\right)^8\\). Hmm, I remember that when expanding a binomial like \\((a + b)^n\\), we can use the binomial theorem. The binomial theorem states that each term in the expansion is of the form \\(\\binom{n}{k} a^{n - k} b^{k}\\), where \\(k\\) ranges from 0 to \\(n\\).\n\nSo, in this problem, our binomial is \\(\\left(\\frac{3}{5}x - \\frac{y}{2}\\right)\\), and we\u0027re raising it to the 8th power. Let me denote \\(a = \\frac{3}{5}x\\) and \\(b = -\\frac{y}{2}\\). Then, the expansion will have terms of the form \\(\\binom{8}{k} \\left(\\frac{3}{5}x\\right)^{8 - k} \\left(-\\frac{y}{2}\\right)^k\\).\n\nWe need the term where the power of \\(x\\) is 2 and the power of \\(y\\) is 6. Since \\(x\\) is raised to the power of \\(8 - k\\) and \\(y\\) is raised to the power of \\(k\\), we can set up the equation \\(8 - k = 2\\). Solving for \\(k\\), we get \\(k = 6\\). So, the term we\u0027re interested in corresponds to \\(k = 6\\).\n\nNow, let\u0027s compute the coefficient for \\(k = 6\\). The binomial coefficient is \\(\\binom{8}{6}\\). I recall that \\(\\binom{n}{k} = \\binom{n}{n - k}\\), so \\(\\binom{8}{6} = \\binom{8}{2}\\). Calculating \\(\\binom{8}{2}\\), which is \\(\\frac{8 \\times 7}{2 \\times 1} = 28\\).\n\nNext, we need to compute \\(\\left(\\frac{3}{5}\\right)^{8 - 6} = \\left(\\frac{3}{5}\\right)^2\\). Let me calculate that: \\(\\left(\\frac{3}{5}\\right)^2 = \\frac{9}{25}\\).\n\nThen, we have \\(\\left(-\\frac{1}{2}\\right)^6\\). Hmm, raising a negative number to an even power gives a positive result. 
So, \\(\\left(-\\frac{1}{2}\\right)^6 = \\left(\\frac{1}{2}\\right)^6 = \\frac{1}{64}\\).\n\nNow, multiplying all these together: the coefficient is \\(\\binom{8}{6} \\times \\left(\\frac{3}{5}\\right)^2 \\times \\left(-\\frac{1}{2}\\right)^6 = 28 \\times \\frac{9}{25} \\times \\frac{1}{64}\\).\n\nLet me compute this step by step. First, multiply 28 and \\(\\frac{9}{25}\\). So, \\(28 \\times \\frac{9}{25}\\). Let\u0027s see, 28 divided by 25 is \\(\\frac{28}{25}\\), and then multiplied by 9 gives \\(\\frac{252}{25}\\). \n\nThen, we multiply this result by \\(\\frac{1}{64}\\): \\(\\frac{252}{25} \\times \\frac{1}{64}\\). Multiplying the numerators: 252 \u00d7 1 = 252, and denominators: 25 \u00d7 64 = 1600. So, the coefficient is \\(\\frac{252}{1600}\\).\n\nWait, can we simplify this fraction? Let\u0027s see. Both 252 and 1600 are divisible by 4. Dividing numerator and denominator by 4: 252 \u00f7 4 = 63, and 1600 \u00f7 4 = 400. So, now we have \\(\\frac{63}{400}\\). \n\nIs 63 and 400 divisible by any common number? Well, 63 is 7 \u00d7 9, and 400 is 16 \u00d7 25. They don\u0027t share any common factors besides 1, so \\(\\frac{63}{400}\\) is the simplified fraction.\n\nBut hold on, let me double-check my calculations to make sure I didn\u0027t make a mistake. So, binomial coefficient was 28, which is correct because \\(\\binom{8}{6}\\) is 28. Then, \\(\\left(\\frac{3}{5}\\right)^2 is indeed 9/25, and \\(\\left(-\\frac{1}{2}\\right)^6 is 1/64. Multiplying 28, 9/25, and 1/64:\n\n28 \u00d7 9 = 252, and 25 \u00d7 64 = 1600. So that\u0027s 252/1600. Then simplifying, dividing numerator and denominator by 4 gives 63/400. That seems correct.\n\nBut wait, I just realized that the term is \\(\\left(-\\frac{y}{2}\\right)^6\\), which is positive because 6 is even, so that\u0027s correct. So, the coefficient is positive 63/400. Hmm, that seems right.\n\nLet me think if there\u0027s another way to approach this. Maybe using the multinomial theorem? 
But no, in this case, since it\u0027s a binomial, the binomial theorem should suffice. Yeah, I think my approach is correct.\n\nAlternatively, I can think about the term in the expansion. The general term is:\n\n\\[\n\\binom{8}{k} \\left(\\frac{3}{5}x\\right)^{8 - k} \\left(-\\frac{y}{2}\\right)^k\n\\]\n\nWe found that when \\(k = 6\\), the term is:\n\n\\[\n\\binom{8}{6} \\left(\\frac{3}{5}\\right)^{2} \\left(-\\frac{1}{2}\\right)^6 x^2 y^6\n\\]\n\nWhich is exactly what we computed. So, yes, the coefficient is 28 \u00d7 (9/25) \u00d7 (1/64) = 63/400.\n\nWait, but let me make sure about the signs. Since the term is \\(\\left(-\\frac{y}{2}\\right)^6\\), which is positive because any negative number raised to an even power is positive. So, the coefficient is indeed positive 63/400.\n\nJust to be thorough, let me compute 28 \u00d7 9 first. 28 \u00d7 9: 20 \u00d7 9 = 180, 8 \u00d7 9 = 72, so 180 + 72 = 252. Then, 25 \u00d7 64: 25 \u00d7 60 = 1500, 25 \u00d7 4 = 100, so 1500 + 100 = 1600. So, 252/1600 simplifies to 63/400.\n\nAlright, so I think I\u0027m confident that 63/400 is the correct coefficient. I don\u0027t see any errors in my reasoning.\n\n**Final Answer**\nThe coefficient of \\(x^2 y^6\\) is \\boxed{\\dfrac{63}{400}}.\n\u003c/think\u003e\n\nTo find the coefficient of \\(x^2 y^6\\) in the expansion of \\(\\left(\\frac{3}{5}x - \\frac{y}{2}\\right)^8\\), we use the binomial theorem. The binomial theorem states that each term in the expansion is of the form \\(\\binom{8}{k} \\left(\\frac{3}{5}x\\right)^{8 - k} \\left(-\\frac{y}{2}\\right)^k\\).\n\nWe need the term where the power of \\(x\\) is 2 and the power of \\(y\\) is 6. This corresponds to \\(k = 6\\) because \\(8 - k = 2\\).\n\n1. Calculate the binomial coefficient \\(\\binom{8}{6}\\):\n \\[\n \\binom{8}{6} = \\binom{8}{2} = \\frac{8 \\times 7}{2 \\times 1} = 28\n \\]\n\n2. Compute \\(\\left(\\frac{3}{5}\\right)^2\\):\n \\[\n \\left(\\frac{3}{5}\\right)^2 = \\frac{9}{25}\n \\]\n\n3. 
Compute \\(\\left(-\\frac{1}{2}\\right)^6\\):\n \\[\n \\left(-\\frac{1}{2}\\right)^6 = \\left(\\frac{1}{2}\\right)^6 = \\frac{1}{64}\n \\]\n\n4. Multiply these results together:\n \\[\n 28 \\times \\frac{9}{25} \\times \\frac{1}{64} = 28 \\times \\frac{9}{25} \\times \\frac{1}{64} = \\frac{252}{1600}\n \\]\n\n5. Simplify the fraction \\(\\frac{252}{1600}\\) by dividing both the numerator and the denominator by 4:\n \\[\n \\frac{252}{1600} = \\frac{63}{400}\n \\]\n\nThus, the coefficient of \\(x^2 y^6\\) is \\(\\boxed{\\dfrac{63}{400}}\\).",
"messages": [
{
"content": "What is the coefficient of $x^2y^6$ in the expansion of $\\left(\\frac{3}{5}x-\\frac{y}{2}\\right)^8$? Express your answer as a common fraction.",
"role": "user"
},
{
"content": "To determine the coefficient of \\(x^2y^6\\) in the expansion of \\(\\left(\\frac{3}{5}x - \\frac{y}{2}\\right)^8\\), we can use the binomial theorem.\n\nThe binomial theorem states:\n\\[\n(a + b)^n = \\sum_{k=0}^{n} \\binom{n}{k} a^{n-k} b^k\n\\]\n\nIn this case, \\(a = \\frac{3}{5}x\\), \\(b = -\\frac{y}{2}\\), and \\(n = 8\\).\n\nWe are interested in the term that contains \\(x^2y^6\\). In the general term of the binomial expansion:\n\\[\n\\binom{8}{k} \\left(\\frac{3}{5}x\\right)^{8-k} \\left(-\\frac{y}{2}\\right)^k\n\\]\n\nTo get \\(x^2\\), we need \\(8 - k = 2\\), thus \\(k = 6\\).\n\nSubstituting \\(k = 6\\) into the expression:\n\\[\n\\binom{8}{6} \\left(\\frac{3}{5}x\\right)^{8-6} \\left(-\\frac{y}{2}\\right)^6 = \\binom{8}{6} \\left(\\frac{3}{5}x\\right)^2 \\left(-\\frac{y}{2}\\right)^6\n\\]\n\nNow, we will compute each part of this expression.\n\n1. Calculate the binomial coefficient \\(\\binom{8}{6}\\).\n2. Compute \\(\\left(\\frac{3}{5}\\right)^2\\).\n3. Compute \\(\\left(-\\frac{y}{2}\\right)^6\\).\n4. Combine everything together to get the coefficient of \\(x^2y^6\\).\n\nLet\u0027s compute these in Python.\n```python\nfrom math import comb\n\n# Given values\nn = 8\nk = 6\n\n# Calculate the binomial coefficient\nbinom_coeff = comb(n, k)\n\n# Compute (3/5)^2\na_term = (3/5)**2\n\n# Compute (-1/2)^6\nb_term = (-1/2)**6\n\n# Combine terms to get the coefficient of x^2y^6\ncoefficient = binom_coeff * a_term * b_term\nprint(coefficient)\n```\n```output\n0.1575\n```\nThe coefficient of \\(x^2y^6\\) in the expansion of \\(\\left(\\frac{3}{5}x - \\frac{y}{2}\\right)^8\\) is \\(0.1575\\). To express this as a common fraction, we recognize that:\n\n\\[ 0.1575 = \\frac{1575}{10000} = \\frac{63}{400} \\]\n\nThus, the coefficient can be expressed as:\n\n\\[\n\\boxed{\\frac{63}{400}}\n\\]",
"role": "assistant"
}
],
"model_name": "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"problem": "What is the coefficient of $x^2y^6$ in the expansion of $\\left(\\frac{3}{5}x-\\frac{y}{2}\\right)^8$? Express your answer as a common fraction.",
"solution": "To determine the coefficient of \\(x^2y^6\\) in the expansion of \\(\\left(\\frac{3}{5}x - \\frac{y}{2}\\right)^8\\), we can use the binomial theorem.\n\nThe binomial theorem states:\n\\[\n(a + b)^n = \\sum_{k=0}^{n} \\binom{n}{k} a^{n-k} b^k\n\\]\n\nIn this case, \\(a = \\frac{3}{5}x\\), \\(b = -\\frac{y}{2}\\), and \\(n = 8\\).\n\nWe are interested in the term that contains \\(x^2y^6\\). In the general term of the binomial expansion:\n\\[\n\\binom{8}{k} \\left(\\frac{3}{5}x\\right)^{8-k} \\left(-\\frac{y}{2}\\right)^k\n\\]\n\nTo get \\(x^2\\), we need \\(8 - k = 2\\), thus \\(k = 6\\).\n\nSubstituting \\(k = 6\\) into the expression:\n\\[\n\\binom{8}{6} \\left(\\frac{3}{5}x\\right)^{8-6} \\left(-\\frac{y}{2}\\right)^6 = \\binom{8}{6} \\left(\\frac{3}{5}x\\right)^2 \\left(-\\frac{y}{2}\\right)^6\n\\]\n\nNow, we will compute each part of this expression.\n\n1. Calculate the binomial coefficient \\(\\binom{8}{6}\\).\n2. Compute \\(\\left(\\frac{3}{5}\\right)^2\\).\n3. Compute \\(\\left(-\\frac{y}{2}\\right)^6\\).\n4. Combine everything together to get the coefficient of \\(x^2y^6\\).\n\nLet\u0027s compute these in Python.\n```python\nfrom math import comb\n\n# Given values\nn = 8\nk = 6\n\n# Calculate the binomial coefficient\nbinom_coeff = comb(n, k)\n\n# Compute (3/5)^2\na_term = (3/5)**2\n\n# Compute (-1/2)^6\nb_term = (-1/2)**6\n\n# Combine terms to get the coefficient of x^2y^6\ncoefficient = binom_coeff * a_term * b_term\nprint(coefficient)\n```\n```output\n0.1575\n```\nThe coefficient of \\(x^2y^6\\) in the expansion of \\(\\left(\\frac{3}{5}x - \\frac{y}{2}\\right)^8\\) is \\(0.1575\\). To express this as a common fraction, we recognize that:\n\n\\[ 0.1575 = \\frac{1575}{10000} = \\frac{63}{400} \\]\n\nThus, the coefficient can be expressed as:\n\n\\[\n\\boxed{\\frac{63}{400}}\n\\]"
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("anthonyj/numina-deepseek-r1-qwen-7b", "default")
```
Or simply as follows, since there is only one configuration and it is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("anthonyj/numina-deepseek-r1-qwen-7b")
```
</details>
|
demeguyessa/Dame | demeguyessa | 2025-06-24T16:49:11Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-06-24T16:49:11Z | 0 | ---
license: apache-2.0
---
|
sghosts/rand-tezler-firstlast20-little-sample_alpFirst20pages2pdf | sghosts | 2025-03-08T13:35:45Z | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-08T13:35:43Z | 0 | ---
dataset_info:
features:
- name: subdir
dtype: string
- name: pdf_path
dtype: string
- name: page_num
dtype: int64
- name: image
dtype: image
- name: alpFirst20pages2pdf
dtype: int64
splits:
- name: train
num_bytes: 6780515.0
num_examples: 80
download_size: 6761744
dataset_size: 6780515.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Ben10x/lvm_bird_spectrogram | Ben10x | 2025-04-08T20:31:43Z | 39 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-08T20:27:54Z | 0 | ---
dataset_info:
features:
- name: input_image
dtype: image
- name: output_image
dtype: binary
splits:
- name: train
num_bytes: 24982701187.125
num_examples: 61375
- name: test
num_bytes: 4471537621.0
num_examples: 10832
download_size: 4725229980
dataset_size: 29454238808.125
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
dive-lab/ShockCast | dive-lab | 2025-06-08T02:41:10Z | 0 | 0 | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | [] | 2025-06-08T02:21:43Z | 0 | ---
license: cc-by-nc-sa-4.0
---
|
infinite-dataset-hub/PregnancyBotConvo | infinite-dataset-hub | 2025-02-05T18:00:16Z | 12 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"infinite-dataset-hub",
"synthetic"
] | [] | 2024-11-22T19:00:27Z | 0 | ---
license: mit
tags:
- infinite-dataset-hub
- synthetic
---
# PregnancyBotConvo
tags: ConversationalAI, Pregnancy, Healthcare
_Note: This is an AI-generated dataset so its content may be inaccurate or false_
**Dataset Description:**
The 'PregnancyBotConvo' dataset is designed for training conversational AI models that are focused on providing assistance and information to individuals who are expecting a baby. The dataset consists of various conversational excerpts that deal with common queries and concerns during pregnancy. Each row in the dataset represents a single interaction with a user seeking information or advice. The dataset is tagged with labels to categorize the type of inquiry or advice provided.
**CSV Content Preview:**
```csv
"dialogue_id","user_input","bot_response","label"
"001","What are some good exercises during pregnancy?","Certain exercises can help maintain fitness and prepare your body for labor. It's best to consult with your healthcare provider before starting any new exercise routine during pregnancy.","Exercise_Advice"
"002","I'm worried about morning sickness. Any tips?","There are several strategies to help manage morning sickness, such as eating small, frequent meals, avoiding strong odors, and ginger can be soothing. Always speak to your healthcare provider for personalized advice.","Morning_Sickness_Advice"
"003","What should I know about prenatal vitamins?","Prenatal vitamins are crucial for your baby's development and can help prevent certain birth defects. Make sure to choose a reputable brand and discuss with your healthcare provider to find the best option for you.","Prenatal_Vitamins_Info"
"004","How can I prepare for a natural birth?","Preparing for a natural birth involves understanding your options and working with your healthcare provider. You might consider attending childbirth education classes, creating a birth plan, and practicing relaxation techniques.","Birth_Preparation_Advice"
"005","I'm feeling anxious about childbirth. What should I do?","Feeling anxious about childbirth is common. It's important to communicate your feelings with your healthcare provider. They can provide support and refer you to counseling services if necessary.","Anxiety_Relief_Advice"
```
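The preview above can be parsed with Python's standard `csv` module. A minimal sketch, using the header and first example row exactly as shown:

```python
import csv
import io

# CSV preview as shown above (header + one example row)
preview = '''"dialogue_id","user_input","bot_response","label"
"001","What are some good exercises during pregnancy?","Certain exercises can help maintain fitness and prepare your body for labor. It's best to consult with your healthcare provider before starting any new exercise routine during pregnancy.","Exercise_Advice"
'''

# DictReader maps each row to the column names from the header line
rows = list(csv.DictReader(io.StringIO(preview)))
print(rows[0]["label"])  # Exercise_Advice
```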
**Source of the data:**
The dataset was generated using the [Infinite Dataset Hub](https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub) and microsoft/Phi-3-mini-4k-instruct using the query '':
- **Dataset Generation Page**: https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub?q=&dataset=PregnancyBotConvo&tags=ConversationalAI,+Pregnancy,+Healthcare
- **Model**: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
- **More Datasets**: https://huggingface.co/datasets?other=infinite-dataset-hub
|
Ehsanrs2/Forex_Factory_Calendar | Ehsanrs2 | 2025-04-14T11:03:04Z | 89 | 2 | [
"task_categories:time-series-forecasting",
"task_categories:tabular-classification",
"task_categories:tabular-regression",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"forex",
"economic-calendar",
"financial-data",
"macro-economics",
"trading-strategy",
"time-series",
"market-events",
"selenium",
"pandas",
"scraper"
] | [
"time-series-forecasting",
"tabular-classification",
"tabular-regression"
] | 2025-04-14T10:29:10Z | 0 | ---
license: mit
language:
- en
task_categories:
- time-series-forecasting
- tabular-classification
- tabular-regression
tags:
- forex
- economic-calendar
- financial-data
- macro-economics
- trading-strategy
- time-series
- market-events
- selenium
- pandas
- scraper
pretty_name: Forex Factory Economic Calendar (2007–2025)
size_categories:
- 10K<n<100K
---
# 📅 Forex Factory Economic Calendar Dataset (2007-01-01 to 2025-04-07)
This dataset contains a comprehensive archive of macroeconomic calendar events sourced from [Forex Factory](https://www.forexfactory.com/calendar), spanning from **January 1, 2007** to **April 7, 2025**.
Each row captures a specific event with detailed metadata including currency, event type, market impact level, reported values, and descriptive context.
---
## 📦 Dataset Summary
- **Total timespan:** 2007-01-01 → 2025-04-07
- **Format:** CSV (UTF-8)
- **Timezone:** Asia/Tehran
- **Detailed fields included**
---
## 🧾 Columns
| Column | Description |
|-------------|-----------------------------------------------------------------------------|
| `DateTime` | Date and time of the event in ISO 8601 format |
| `Currency` | Currency affected by the event (e.g., USD, EUR, GBP) |
| `Impact` | Impact level: Low, Medium, or High Impact Expected |
| `Event` | Name of the economic event |
| `Actual` | Reported value of the indicator (if available) |
| `Forecast` | Forecasted value |
| `Previous` | Previously reported value |
| `Detail` | Rich description including source, frequency, and why it matters to traders |
---
## 🔍 Example
```csv
2007-01-01T04:30:00+03:30,CNY,High Impact Expected,Manufacturing PMI,54.8,,55.3,"Source: CFLP (latest release) | Measures: Level of a diffusion index based on surveyed purchasing managers..."
```
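The sample row above can be parsed with nothing but the standard library; a minimal sketch (column order follows the table above, and the quoted `Detail` field is truncated here for brevity):

```python
import csv
import io
from datetime import datetime

# Column order as documented in the card's schema table.
COLUMNS = ["DateTime", "Currency", "Impact", "Event",
           "Actual", "Forecast", "Previous", "Detail"]

sample = ('2007-01-01T04:30:00+03:30,CNY,High Impact Expected,'
          'Manufacturing PMI,54.8,,55.3,"Source: CFLP (latest release)"')

# csv handles the quoted Detail field; zip maps fields to column names.
row = dict(zip(COLUMNS, next(csv.reader(io.StringIO(sample)))))

# The ISO 8601 timestamp carries the Asia/Tehran UTC offset (+03:30).
when = datetime.fromisoformat(row["DateTime"])
print(when.utcoffset())   # 3:30:00
print(row["Event"])       # Manufacturing PMI
```

Note that missing values (such as `Forecast` in this row) come through as empty strings rather than NaN unless you post-process them.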
---
## 💼 Use Cases
- Financial time series forecasting
- Event-driven backtesting in algorithmic trading
- Economic policy impact analysis
- Macro indicators modeling for global trends
- Educational purposes in finance and economics
---
## ⚙️ How It Was Built
The data was collected with a custom Python scraper built on:
- [Selenium](https://pypi.org/project/selenium/) (via `undetected-chromedriver`)
- [pandas](https://pandas.pydata.org/) for CSV export and manipulation
- Incremental scraping and detailed event parsing
- Timezone-aware timestamping
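Timezone-aware timestamping of the kind listed above can be sketched with the standard library's `zoneinfo` module (the scraper's actual internals and input format may differ):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A naive time as it might appear on the calendar page (assumed format).
naive = datetime.strptime("2007-01-01 04:30", "%Y-%m-%d %H:%M")

# Attach the dataset's timezone, then serialize as ISO 8601.
aware = naive.replace(tzinfo=ZoneInfo("Asia/Tehran"))
print(aware.isoformat())  # 2007-01-01T04:30:00+03:30
```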
GitHub repository (scraper code): [🔗 Link to repository](https://github.com/ehsanrs2/forexfactory-scraper)
---
## 🛡 Disclaimer
This dataset is intended for **educational and research purposes only**.
Data was collected from [Forex Factory](https://www.forexfactory.com/) using publicly accessible web content.
This project is **not affiliated, endorsed, or sponsored by Forex Factory** in any way.
Please respect [Forex Factory’s Terms of Use](https://www.forexfactory.com/notices). Do not use this dataset for commercial purposes or redistribution without checking the original source's policies.
---
## 📚 Citation
If you use this dataset in your work, please cite it as:
```
@misc{forexcalendar2025,
title={Forex Factory Economic Calendar Dataset (2007–2025)},
author={Ehsan},
year={2025},
howpublished={\url{https://huggingface.co/datasets/yourusername/forex-factory-calendar}},
note={Accessed: YYYY-MM-DD}
}
```
---
## 🏷 Tags
`forex` `financial-data` `economic-calendar` `macro-economics`
`time-series` `trading-strategy` `pandas` `selenium` `market-events`
|
zpc2090/charades_8frames_new | zpc2090 | 2025-03-09T12:34:10Z | 12 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-09T12:31:48Z | 0 | ---
dataset_info:
features:
- name: images
sequence: image
- name: problem
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1178315908.0
num_examples: 4000
- name: validation
num_bytes: 143995676.0
num_examples: 500
- name: test
num_bytes: 147022601.0
num_examples: 500
download_size: 520776598
dataset_size: 1469334185.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
carminho/piqa-mt-pt | carminho | 2025-06-05T16:45:51Z | 0 | 0 | [
"language:pt",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-05T12:57:09Z | 0 | ---
language:
- pt
configs:
- config_name: default
data_files:
- split: train
path: piqa_train_pt.jsonl
- split: test
path: piqa_test_pt.jsonl
- split: validation
path: piqa_validation_pt.jsonl
--- |
1231czx/dstrain_orm_2e6_bz128_head_test_ms_math | 1231czx | 2024-12-04T10:17:01Z | 54 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-04T10:16:57Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: answers
sequence: string
- name: rewards
sequence: float64
- name: label
sequence: int64
splits:
- name: train
num_bytes: 318457011
num_examples: 500
download_size: 71900257
dataset_size: 318457011
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
deutsche-telekom/Ger-RAG-eval | deutsche-telekom | 2024-08-23T11:10:52Z | 192 | 45 | [
"language:de",
"license:cc-by-sa-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-04-07T19:45:49Z | 2 | ---
license: cc-by-sa-4.0
language:
- de
size_categories:
- 1K<n<10K
configs:
- config_name: task1
data_files:
- split: test
path: "task1_test.parquet"
- config_name: task2
data_files:
- split: test
path: "task2_test.parquet"
- config_name: task3
data_files:
- split: test
path: "task3_test.parquet"
- config_name: task4
data_files:
- split: test
path: "task4_test.parquet"
---
# German RAG LLM Evaluation Dataset
This dataset is intended for evaluating the German RAG (retrieval augmented generation) capabilities of LLMs.
It is based on the test set of the [deutsche-telekom/wikipedia-22-12-de-dpr](https://huggingface.co/datasets/deutsche-telekom/wikipedia-22-12-de-dpr)
data set (also see [wikipedia-22-12-de-dpr on GitHub](https://github.com/telekom/wikipedia-22-12-de-dpr)) and
consists of 4 subsets or tasks.
## Task Description
The dataset consists of 4 subsets for the following 4 tasks (each task with 1000 prompts):
### choose_context_by_question (subset task2)
Given a question and four contexts, the task is to decide which context can answer the question.
Example:
```text
Auf Basis welcher der folgenden Kontexte (A oder B oder C oder D) lässt sich die Frage beantworten?
Frage: Wie wurde Saidi im Januar 2006 noch einmal deutscher Meister?
Kontexte:
A:
Ceferino Garcia (* 26. August 1906 in Naval, Biliran; † 1. Januar 1981 in San Diego, Kalifornien, Vereinigte Staaten) war ein philippinischer Boxer im Mittelgewicht. Der von den Philippinen stammende Garcia, der nach anderen Angaben bereits um 1903 geboren wurde, begann seine Karriere als Boxer 1923 zunächst im Weltergewicht und gewann am 2. Oktober 1939 den Weltmeistertitel der NYSAC im Mittelgewicht der "International Boxing Union" bei einem Kampf gegen Fred Apostoli in New York City, den er in den siebten Runde durch ein Knockout. Am 23. Dezember 1939 verteidigte er seinen Titel in Manila gegen Glen Lee durch ein technisches K.O. Sein Sieg im Mittelgewichtstitelkampf am 1. März 1940 gegen Henry Armstrong, gegen den er im Weltergewicht schon mal verloren hatte, gilt als Fehlurteil. 1945 beendete er seine Karriere nach 18 Jahren, wobei er 67 Mal durch KO gewann sowie weitere 24 Mal durch Punkteentscheidung. Garcia wurde besonders durch seinen Kampfstil bekannt und dem von ihm verwendeten sogenannten „Bolo Punch“, den er wie einen Aufwärtshaken anwendete. Einer seiner Coachs war Ray Arcel.
B:
Ernst Stimmel (* 23. März 1891 in Hamburg; † 28. März 1978 in Reichenau) war ein deutscher Schauspieler und Autor. Nach Abitur und Studium wurde Ernst Stimmel 1919 in München mit der Dissertation "Einfluß der Schopenhauerschen Philosophie auf Wilhelm Raabe" promoviert. In den 1930er und 1940er Jahren wirkte er in vielen Filmproduktionen als Darsteller überwiegend in Nebenrollen mit. Darunter befanden sich die nationalsozialistischen Propagandafilme "Jud Süß", "Die Rothschilds" und "Kampfgeschwader Lützow", die heute in Deutschland als Vorbehaltsfilme nur unter bestimmten Voraussetzungen aufgeführt werden können. Ernst Stimmel spielte aber auch in Unterhaltungs- und Historienfilmen wie "Der Gasmann" mit Heinz Rühmann, "Der große König" mit Otto Gebühr und "Die Entlassung" mit Emil Jannings. Zudem war er an dem Film "Zwischen Herz und Gewissen" beteiligt, der als Überläufer erst im Jahr 1951 uraufgeführt wurde, obwohl dieser kurz vor Ende des Zweiten Weltkriegs noch unter dem Titel "Das fremde Leben" fertiggestellt wurde.
C:
Saidis Laufbahn als Berufsboxer begann mit einem Kampf im November 1989, seinen letzten Kampf bestritt er im Dezember 2006. Im Mai 1990 gewann er gegen Andreas Schweiger die internationale deutsche Meisterschaft im Halbschwergewicht und wurde im Juni 1990 deutscher Schwergewichtsmeister. Im November 1992 wurde Saidi durch einen Sieg über Rund Kanika aus dem Kongo Afrikameister im Halbschwergewicht. Er musste den internationalen deutschen Meistertitel abgegeben, nachdem er im Februar 1993 gegen Dariusz Michalczewski verloren hatte. Saidi wurde im April 1994 Weltmeister im Halbschwergewicht nach Version der WBF. Er sicherte sich Ende Januar 1997 den deutschen Meistertitel im Halbschwergewicht, diesen verlor er im Dezember desselben Jahres wieder, als er gegen Sven Ottke verlor. Im Februar 1999 boxte Saidi wieder um die deutsche Meisterschaft im Halbschwergewicht, verlor aber gegen Thomas Ulrich. Anschließend legte er eine jahrelange Pause ein, im Mai 2005 kehrte Saidi in den Ring zurück. Noch einmal deutscher Meister, diesmal im Cruisergewicht, wurde er im Januar 2006 durch einen Sieg über Mario Stein.
D:
Uwe Boegelsack (* 2. Dezember 1939 in Gommern; † 28. Januar 2017) war ein deutscher Politiker der Sozialistischen Einheitspartei Deutschlands (SED) in der Deutschen Demokratischen Republik (DDR). Er war von 1984 bis 1987 stellvertretender Minister für Elektrotechnik und Elektronik und von 1987 bis 1990 Generaldirektor des "VEB Kombinat Rundfunk und Fernsehen Staßfurt". Boegelsack, Sohn eines Angestellten, arbeitete nach dem Abitur 1958 als Stanzer und Hilfsarbeiter und wurde 1961 Setzer und Feiler.
```
### choose_question_by_context (subset task1)
Given a context and four questions, the task is to decide which question can be answered from the context.
Example:
```text
Welche der folgenden Fragen (A oder B oder C oder D) lässt sich anhand des Kontext beantworten?
Kontext:
Lsjbot ist ein von Lars Sverker Johansson (Akronym "Lsj") betriebener Bot, der aus digitalen Informationsquellen und Datenbanken kurze Wikipedia-Artikel („Stubs“) in schwedischer Sprache sowie in Cebuano und Wáray-Wáray, zwei auf den Philippinen gesprochenen Sprachen, generierte. Am 15. Juni 2013 überschritt die schwedischsprachige Wikipedia durch einen von Lsjbot erstellten Artikel über die Schmetterlingsart "Erysichton elaborata" die Schwelle von einer Million Artikeln. Zu diesem Zeitpunkt war rund die Hälfte des Artikelbestands der schwedischen Wikipedia botgeneriert. Etwa ein Drittel der von Lsjbot erstellten Artikel wurden für die schwedische Wikipedia erstellt. Im August 2013 erzeugte Lsjbot mit etwa täglich 7200 Artikeln für die schwedische Wikipedia die meisten Artikel pro Tag für eine Wikipedia. Laut "The Wall Street Journal" hatte Lsjbot im Juli 2014 bereits rund 2,7 Millionen Artikel in Wikipedia eingestellt, was zu dieser Zeit etwa 8,5 Prozent des gesamten Bestandes der Wikipedia entsprach. Für die Artikelproduktion griff Lsjbot auf Datenbanken wie den Catalogue of Life zu, wobei offenbar veraltete Offline-Kopien genutzt wurden.
Fragen:
A: Welche Schmetterlingsart wurde durch einen von Lsjbot erstellten Artikel bekannt?
B: Welche Partei stand der Hannoverschen Landeszeitung nahe?
C: In welchem Jahr wurde die Anwendungssoftware erstmals erstellt?
D: Wo werden die Server der Enciclopedia Libre Universal en Español betrieben?
```
### context_question_match (subset task4)
Given a context and a question, the task is to decide whether or not the question can be answered from the context.
Example:
```text
Lässt sich die Frage mithilfe der Informationen aus dem Kontext beantworten? Antworte mit J für ja oder N für nein.
Kontext:
Oren Koules (* 31. Januar 1961 in La Grange, Illinois) ist ein ehemaliger US-amerikanischer Eishockeyspieler und jetziger -funktionär, sowie Filmproduzent. Bekannt wurde er vor allem durch die Filmreihe Saw, die von seiner Produktionsfirma produziert wird. Oren Koules begann seine Karriere als Eishockeyspieler in der kanadischen Juniorenliga Western Hockey League, in der er von 1979 bis 1982 für die Portland Winter Hawks, Great Falls Americans, Medicine Hat Tigers, Spokane Flyers, Calgary Wranglers und Brandon Wheat Kings aktiv war. Bei den Great Falls Americans, die vorzeitig in ihrer Premierensaison den Spielbetrieb einstellten, hält er mit neun Treffern den Rekord als bester Torschütze in der Franchise-Geschichte. Gegen Ende der Saison 1981/82 bestritt der Flügelspieler zudem ein Spiel für die Saginaw Gears in der International Hockey League.
Die Frage: Bei welchem Verein war Thomas Kleine zweieinhalb Jahre Kapitän?
```
### question_answer_match (subset task3)
Given a question and an answer, the task is to decide whether the answer actually answers the question.
Example:
```text
Beantwortet die Antwort wirklich die Frage? Antworte mit J für ja oder N für nein.
Die Frage: Mit welchem Unternehmen fusionierte die Adesso AG im Jahr 2006?
Die Antwort: Bruno Zumino erwarb sein Physik-Diplom an der Universität Rom im Jahr 1945.
```
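For the binary tasks above, scoring reduces to exact match on a single letter (J/N, or A–D for the multiple-choice tasks). A minimal sketch of such a metric — the helper name is ours and is not part of LightEval:

```python
def letter_accuracy(predictions, references):
    """Exact-match accuracy on single-letter answers (e.g. 'J'/'N' or 'A'-'D'),
    ignoring surrounding whitespace and letter case."""
    hits = sum(p.strip().upper() == r.strip().upper()
               for p, r in zip(predictions, references))
    return hits / len(references)

print(letter_accuracy(["J", "n", "J", "A"], ["J", "N", "N", "A"]))  # 0.75
```

In practice, LightEval handles answer extraction and scoring itself; this only illustrates the shape of the evaluation.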
## Usage
This evaluation task is implemented in [LightEval](https://github.com/huggingface/lighteval):
- <https://github.com/huggingface/lighteval/blob/main/community_tasks/german_rag_evals.py>
- <https://github.com/huggingface/lighteval/blob/main/examples/tasks/all_german_rag_evals.txt>
<span style="color:red;font-weight:bold">Important note:</span>
Unfortunately, after the implementation in LightEval,
we found another weakness in the prompt.
This was fixed by 23. April 2024 in PR
[Fix prompt format german rag community task #171](https://github.com/huggingface/lighteval/pull/171).
Each evaluation done before this date should be carried out again.
Unfortunately, LightEval does not yet support versioning of tasks.
This will hopefully be added later
(see also [Version of a task should be configurable. #172](https://github.com/huggingface/lighteval/issues/172)).
To run the tasks on a multi-GPU setup, first change to the LightEval root directory, then launch:
```bash
# this is a hack to avoid a bug while loading the dataset
# see https://github.com/huggingface/lighteval/issues/278
export HF_DATASETS_TRUST_REMOTE_CODE=TRUE
export MODEL_NAME="DiscoResearch/DiscoLM_German_7b_v1"
accelerate launch --multi_gpu --num_processes=2 -m \
lighteval accelerate \
--model_args "pretrained=$MODEL_NAME,model_parallel=True,trust_remote_code=True" \
--use_chat_template \
--override_batch_size 1 \
--tasks "examples/tasks/all_german_rag_evals.txt" \
--custom_tasks "community_tasks/german_rag_evals.py" \
--output_dir="./evals/"
```
## Results
The evaluation was carried out with [LightEval](https://github.com/huggingface/lighteval) and
the `--use_chat_template --override_batch_size 1` options.
For details see [telekom/llm_evaluation_results](https://github.com/telekom/llm_evaluation_results).
| model name | choose context by question acc | choose question by context acc | context question match acc | question answer match acc | all acc | all acc stderr |
|:---------------------------------------------------------|---------------------------------:|---------------------------------:|-----------------------------:|----------------------------:|----------:|-----------------:|
| mistralai/Mixtral-8x22B-Instruct-v0.1 | 0.998 | 1 | 0.967 | 0.986 | 0.98775 | 0.00269564 |
| VAGOsolutions/Llama-3-SauerkrautLM-70b-Instruct | 0.998 | 1 | 0.973 | 0.949 | 0.98 | 0.0033755 |
| VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct | 0.953 | 0.998 | 0.975 | 0.974 | 0.975 | 0.00452096 |
| mistralai/Mixtral-8x7B-Instruct-v0.1 | 0.94 | 0.998 | 0.973 | 0.973 | 0.971 | 0.00479586 |
| meta-llama/Meta-Llama-3-70B-Instruct | 0.94 | 1 | 0.974 | 0.946 | 0.965 | 0.00492486 |
| microsoft/Phi-3-mini-4k-instruct | 0.847 | 0.998 | 0.965 | 0.964 | 0.9435 | 0.00612787 |
| VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct | 0.928 | 0.824 | 0.982 | 0.906 | 0.91 | 0.00841656 |
| meta-llama/Meta-Llama-3-8B-Instruct | 0.725 | 0.855 | 0.977 | 0.943 | 0.875 | 0.00933624 |
| DiscoResearch/DiscoLM_German_7b_v1 | 0.625 | 0.991 | 0.914 | 0.927 | 0.86425 | 0.0088514 |
| occiglot/occiglot-7b-de-en-instruct | 0.343 | 0.994 | 0.863 | 0.969 | 0.79225 | 0.00845623 |
| occiglot/occiglot-7b-eu5-instruct | 0.722 | 0.982 | 0.587 | 0.814 | 0.77625 | 0.0115674 |
| LeoLM/leo-mistral-hessianai-7b-chat | 0.865 | 0.949 | 0.735 | 0.52 | 0.76725 | 0.0118855 |
| occiglot/occiglot-7b-de-en | 0.453 | 0.698 | 0.501 | 0.5 | 0.538 | 0.0154785 |
| DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental | 0.303 | 0.28 | 0.751 | 0.594 | 0.482 | 0.0144911 |
| occiglot/occiglot-7b-eu5 | 0.327 | 0.582 | 0.5 | 0.5 | 0.47725 | 0.0155215 |
The evaluation results for the first (old) prompt version can be seen here:
<https://huggingface.co/datasets/deutsche-telekom/Ger-RAG-eval/blob/main/old-eval.md>
## Licensing
The Wikipedia texts are licensed under [CC BY-SA 4.0 Deed](https://creativecommons.org/licenses/by-sa/4.0/deed)
by the corresponding authors of the [German Wikipedia](https://de.wikipedia.org/).\
The questions and answers are copyright ([CC BY-SA 4.0 Deed](https://creativecommons.org/licenses/by-sa/4.0/deed)) by
[Philip May](https://philipmay.org), [Deutsche Telekom AG](https://www.telekom.de/).
|
cfrench/matcha_stir | cfrench | 2025-04-20T18:29:46Z | 72 | 0 | [
"task_categories:robotics",
"size_categories:n<1K",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us",
"phosphobot",
"so100",
"phospho-dk"
] | [
"robotics"
] | 2025-04-20T15:44:47Z | 0 |
---
tags:
- phosphobot
- so100
- phospho-dk
task_categories:
- robotics
---
# matcha_stir
**This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
|
laolaorkk/stage1_sampled | laolaorkk | 2025-04-10T11:18:07Z | 13 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-10T11:11:48Z | 0 | ---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: solution
dtype: string
- name: difficulty
dtype: float64
splits:
- name: train
num_bytes: 1306582
num_examples: 1000
download_size: 693439
dataset_size: 1306582
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Upstboy/2234234 | Upstboy | 2025-06-24T22:36:02Z | 0 | 0 | [
"task_categories:robotics",
"size_categories:n<1K",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us",
"phosphobot",
"so100",
"phospho-dk"
] | [
"robotics"
] | 2025-06-24T21:41:02Z | 0 |
---
tags:
- phosphobot
- so100
- phospho-dk
task_categories:
- robotics
---
# 2234234
**This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
|
Majinuub/Resume_Parsing | Majinuub | 2025-03-03T18:09:16Z | 41 | 2 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-03T15:23:03Z | 0 | ---
license: apache-2.0
---
|
AmarHelio/record-test13 | AmarHelio | 2025-06-15T03:17:44Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-06-15T03:14:49Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101_follower",
"total_episodes": 2,
"total_frames": 3013,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.images.front": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
neuralnets/letter_bench | neuralnets | 2025-02-20T15:59:11Z | 22 | 4 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-20T15:58:14Z | 0 | ---
dataset_info:
features:
- name: Question
dtype: string
- name: Answer
dtype: int64
splits:
- name: data
num_bytes: 2422854.0956118037
num_examples: 50000
download_size: 817336
dataset_size: 2422854.0956118037
configs:
- config_name: default
data_files:
- split: data
path: data/data-*
---
|
RyanYr/reflect_mini8bSFTt2_mini8BSFTt1_om2g8kom2AG40k_iPSDP_it1_t02_correct | RyanYr | 2025-03-26T01:52:42Z | 68 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-30T20:34:22Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: comment
dtype: string
splits:
- name: train
num_bytes: 308256414.0
num_examples: 112647
download_size: 113449184
dataset_size: 308256414.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettavuw/D_gen0_run0_llama2-7b_sciabs_doc1000_real64_synt64_vuw | dgambettavuw | 2024-12-23T03:52:24Z | 16 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-23T03:52:21Z | 0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 742290
num_examples: 1000
download_size: 384889
dataset_size: 742290
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
techiaith/macsen_intent_parsing | techiaith | 2025-03-31T11:41:00Z | 22 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-31T11:40:59Z | 0 | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 46290
num_examples: 350
download_size: 13437
dataset_size: 46290
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_val_chunk_26 | ZixuanKe | 2024-11-23T01:27:25Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-23T01:27:24Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 272952
num_examples: 49
download_size: 42683
dataset_size: 272952
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mteb/multi-hatecheck | mteb | 2025-05-04T16:08:10Z | 33 | 0 | [
"task_categories:text-classification",
"task_ids:sentiment-analysis",
"task_ids:sentiment-scoring",
"task_ids:sentiment-classification",
"task_ids:hate-speech-detection",
"annotations_creators:expert-annotated",
"multilinguality:multilingual",
"language:ara",
"language:cmn",
"language:deu",
"language:eng",
"language:fra",
"language:hin",
"language:ita",
"language:nld",
"language:pol",
"language:por",
"language:spa",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"modality:text",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-classification"
] | 2024-04-19T20:47:19Z | 0 | ---
annotations_creators:
- expert-annotated
language:
- ara
- cmn
- deu
- eng
- fra
- hin
- ita
- nld
- pol
- por
- spa
license: cc-by-4.0
multilinguality: multilingual
size_categories:
- 10K<n<100K
task_categories:
- text-classification
task_ids:
- sentiment-analysis
- sentiment-scoring
- sentiment-classification
- hate-speech-detection
configs:
- config_name: default
data_files:
- path: test/*.jsonl.gz
split: test
- config_name: hin
data_files:
- path: multi-hatecheck/test/hin.jsonl.gz
split: test
- config_name: spa
data_files:
- path: multi-hatecheck/test/spa.jsonl.gz
split: test
- config_name: pol
data_files:
- path: multi-hatecheck/test/pol.jsonl.gz
split: test
- config_name: eng
data_files:
- path: multi-hatecheck/test/eng.jsonl.gz
split: test
- config_name: fra
data_files:
- path: multi-hatecheck/test/fra.jsonl.gz
split: test
- config_name: nld
data_files:
- path: multi-hatecheck/test/nld.jsonl.gz
split: test
- config_name: ita
data_files:
- path: multi-hatecheck/test/ita.jsonl.gz
split: test
- config_name: deu
data_files:
- path: multi-hatecheck/test/deu.jsonl.gz
split: test
- config_name: ara
data_files:
- path: multi-hatecheck/test/ara.jsonl.gz
split: test
- config_name: por
data_files:
- path: multi-hatecheck/test/por.jsonl.gz
split: test
- config_name: cmn
data_files:
- path: multi-hatecheck/test/cmn.jsonl.gz
split: test
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">MultiHateClassification</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
Hate speech detection dataset with binary
(hateful vs non-hateful) labels. Includes 25+ distinct types of hate
and challenging non-hate, and 11 languages.
| | |
|---------------|---------------------------------------------|
| Task category | t2c |
| Domains | Constructed, Written |
| Reference | https://aclanthology.org/2022.woah-1.15/ |
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["MultiHateClassification"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@inproceedings{rottger-etal-2021-hatecheck,
abstract = {Detecting online hate is a difficult task that even state-of-the-art models struggle with. Typically, hate speech detection models are evaluated by measuring their performance on held-out test data using metrics such as accuracy and F1 score. However, this approach makes it difficult to identify specific model weak points. It also risks overestimating generalisable model performance due to increasingly well-evidenced systematic gaps and biases in hate speech datasets. To enable more targeted diagnostic insights, we introduce HateCheck, a suite of functional tests for hate speech detection models. We specify 29 model functionalities motivated by a review of previous research and a series of interviews with civil society stakeholders. We craft test cases for each functionality and validate their quality through a structured annotation process. To illustrate HateCheck{'}s utility, we test near-state-of-the-art transformer models as well as two popular commercial models, revealing critical model weaknesses.},
address = {Online},
author = {R{\"o}ttger, Paul and
Vidgen, Bertie and
Nguyen, Dong and
Waseem, Zeerak and
Margetts, Helen and
Pierrehumbert, Janet},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)},
doi = {10.18653/v1/2021.acl-long.4},
editor = {Zong, Chengqing and
Xia, Fei and
Li, Wenjie and
Navigli, Roberto},
month = aug,
pages = {41--58},
publisher = {Association for Computational Linguistics},
title = {{H}ate{C}heck: Functional Tests for Hate Speech Detection Models},
url = {https://aclanthology.org/2021.acl-long.4},
year = {2021},
}
@inproceedings{rottger-etal-2022-multilingual,
abstract = {Hate speech detection models are typically evaluated on held-out test sets. However, this risks painting an incomplete and potentially misleading picture of model performance because of increasingly well-documented systematic gaps and biases in hate speech datasets. To enable more targeted diagnostic insights, recent research has thus introduced functional tests for hate speech detection models. However, these tests currently only exist for English-language content, which means that they cannot support the development of more effective models in other languages spoken by billions across the world. To help address this issue, we introduce Multilingual HateCheck (MHC), a suite of functional tests for multilingual hate speech detection models. MHC covers 34 functionalities across ten languages, which is more languages than any other hate speech dataset. To illustrate MHC{'}s utility, we train and test a high-performing multilingual hate speech detection model, and reveal critical model weaknesses for monolingual and cross-lingual applications.},
address = {Seattle, Washington (Hybrid)},
author = {R{\"o}ttger, Paul and
Seelawi, Haitham and
Nozza, Debora and
Talat, Zeerak and
Vidgen, Bertie},
booktitle = {Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)},
doi = {10.18653/v1/2022.woah-1.15},
editor = {Narang, Kanika and
Mostafazadeh Davani, Aida and
Mathias, Lambert and
Vidgen, Bertie and
Talat, Zeerak},
month = jul,
pages = {154--169},
publisher = {Association for Computational Linguistics},
title = {Multilingual {H}ate{C}heck: Functional Tests for Multilingual Hate Speech Detection Models},
url = {https://aclanthology.org/2022.woah-1.15},
year = {2022},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following JSON contains the descriptive statistics for this task. They can also be obtained using:
```python
import mteb
task = mteb.get_task("MultiHateClassification")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"test": {
"num_samples": 11000,
"number_of_characters": 502013,
"number_texts_intersect_with_train": 16,
"min_text_length": 1,
"average_text_length": 45.63754545454545,
"max_text_length": 135,
"unique_text": 10990,
"unique_labels": 2,
"labels": {
"0": {
"count": 7661
},
"1": {
"count": 3339
}
}
},
"train": {
"num_samples": 11000,
"number_of_characters": 505993,
"number_texts_intersect_with_train": null,
"min_text_length": 4,
"average_text_length": 45.99936363636364,
"max_text_length": 131,
"unique_text": 10993,
"unique_labels": 2,
"labels": {
"0": {
"count": 7659
},
"1": {
"count": 3341
}
}
}
}
```
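For a quick read on class balance, the test-split label counts above convert to fractions directly (values copied from the JSON; a small sketch, not part of the MTEB tooling):

```python
# Test-split label counts from the descriptive statistics above.
test_counts = {"0": 7661, "1": 3339}
total = sum(test_counts.values())  # 11000, matching num_samples above

# Fraction of each label, rounded for readability.
print({label: round(count / total, 3) for label, count in test_counts.items()})
# → {'0': 0.696, '1': 0.304}
```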
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
Synthyra/StringDBSeqsv12 | Synthyra | 2024-12-05T20:24:48Z | 32 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-05T05:40:36Z | 0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
splits:
- name: train
num_bytes: 23246112381
num_examples: 59309604
download_size: 22113807143
dataset_size: 23246112381
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
All of the protein IDs and sequences in STRING version 12.
https://string-db.org/cgi/download
|
beyondgravityresearch/nlp_zh_dazhongdianping | beyondgravityresearch | 2025-01-16T03:15:38Z | 15 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-01-16T03:15:38Z | 0 | ---
license: apache-2.0
---
|
jiuyal2/so100_marker_new_camera_pose_new | jiuyal2 | 2025-06-09T05:49:52Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100"
] | [
"robotics"
] | 2025-06-09T05:16:09Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 23,
"total_frames": 6047,
"total_tasks": 1,
"total_videos": 46,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:23"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.so100": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.iphone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
ayushsi42/reward-bench-mistral-7b-sft-beta-comprehensive | ayushsi42 | 2025-06-23T21:03:14Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-23T21:02:57Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: chosen_model
dtype: string
- name: rejected
dtype: string
- name: rejected_model
dtype: string
- name: subset
dtype: string
- name: id
dtype: int64
- name: comprehensive_chosen_log_likelihood
dtype: float64
- name: comprehensive_rejected_log_likelihood
dtype: float64
- name: comprehensive_chosen_perplexity
dtype: float64
- name: comprehensive_rejected_perplexity
dtype: float64
- name: comprehensive_chosen_yes_prob
dtype: float64
- name: comprehensive_rejected_yes_prob
dtype: float64
- name: comprehensive_chosen_binary_confidence
dtype: float64
- name: comprehensive_rejected_binary_confidence
dtype: float64
- name: comprehensive_chosen_log_odds
dtype: float64
- name: comprehensive_rejected_log_odds
dtype: float64
- name: comprehensive_chosen_stability
dtype: float64
- name: comprehensive_rejected_stability
dtype: float64
- name: comprehensive_chosen_math_ensemble
dtype: float64
- name: comprehensive_rejected_math_ensemble
dtype: float64
- name: comprehensive_chosen_confidence_ensemble
dtype: float64
- name: comprehensive_rejected_confidence_ensemble
dtype: float64
- name: comprehensive_chosen_stability_weighted
dtype: float64
- name: comprehensive_rejected_stability_weighted
dtype: float64
- name: comprehensive_chosen_comprehensive
dtype: float64
- name: comprehensive_rejected_comprehensive
dtype: float64
splits:
- name: mt_bench_med
num_bytes: 133896
num_examples: 45
- name: math_prm
num_bytes: 942892
num_examples: 447
- name: mt_bench_hard
num_bytes: 101198
num_examples: 45
- name: refusals_dangerous
num_bytes: 290382
num_examples: 100
- name: llmbar_adver_manual
num_bytes: 84858
num_examples: 46
- name: alpacaeval_length
num_bytes: 3197147
num_examples: 805
- name: alpacaeval_hard
num_bytes: 1716529
num_examples: 805
- name: hep_rust
num_bytes: 198394
num_examples: 164
- name: hep_python
num_bytes: 165585
num_examples: 164
- name: hep_cpp
num_bytes: 192449
num_examples: 164
- name: llmbar_adver_GPTOut
num_bytes: 38924
num_examples: 47
- name: llmbar_natural
num_bytes: 104490
num_examples: 100
- name: xstest_should_refuse
num_bytes: 260956
num_examples: 154
- name: hep_java
num_bytes: 225956
num_examples: 164
- name: mt_bench_easy
num_bytes: 98377
num_examples: 28
- name: alpacaeval_easy
num_bytes: 2285248
num_examples: 805
- name: refusals_offensive
num_bytes: 188598
num_examples: 100
- name: donotanswer
num_bytes: 312964
num_examples: 136
- name: xstest_should_respond
num_bytes: 357610
num_examples: 250
- name: llmbar_adver_GPTInst
num_bytes: 223831
num_examples: 92
- name: llmbar_adver_neighbor
num_bytes: 150322
num_examples: 134
- name: hep_go
num_bytes: 202997
num_examples: 164
- name: hep_js
num_bytes: 183120
num_examples: 164
download_size: 6979577
dataset_size: 11656723
configs:
- config_name: default
data_files:
- split: mt_bench_med
path: data/mt_bench_med-*
- split: math_prm
path: data/math_prm-*
- split: mt_bench_hard
path: data/mt_bench_hard-*
- split: refusals_dangerous
path: data/refusals_dangerous-*
- split: llmbar_adver_manual
path: data/llmbar_adver_manual-*
- split: alpacaeval_length
path: data/alpacaeval_length-*
- split: alpacaeval_hard
path: data/alpacaeval_hard-*
- split: hep_rust
path: data/hep_rust-*
- split: hep_python
path: data/hep_python-*
- split: hep_cpp
path: data/hep_cpp-*
- split: llmbar_adver_GPTOut
path: data/llmbar_adver_GPTOut-*
- split: llmbar_natural
path: data/llmbar_natural-*
- split: xstest_should_refuse
path: data/xstest_should_refuse-*
- split: hep_java
path: data/hep_java-*
- split: mt_bench_easy
path: data/mt_bench_easy-*
- split: alpacaeval_easy
path: data/alpacaeval_easy-*
- split: refusals_offensive
path: data/refusals_offensive-*
- split: donotanswer
path: data/donotanswer-*
- split: xstest_should_respond
path: data/xstest_should_respond-*
- split: llmbar_adver_GPTInst
path: data/llmbar_adver_GPTInst-*
- split: llmbar_adver_neighbor
path: data/llmbar_adver_neighbor-*
- split: hep_go
path: data/hep_go-*
- split: hep_js
path: data/hep_js-*
---
|
orionweller/c4-text | orionweller | 2024-10-31T15:45:11Z | 29 | 0 | [
"size_categories:n<1K",
"format:webdataset",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"region:us"
] | [] | 2024-10-31T15:20:56Z | 0 | ---
dataset_info:
- config_name: c4
features:
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 8596372
- config_name: msmarco
features:
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 8596372
configs:
- config_name: c4
data_files:
- split: train
path: c4/*.jsonl.tar.gz
- config_name: msmarco
data_files:
- split: train
path: msmarco/*.jsonl.tar.gz
---
This dataset contains two configurations, `c4` and `msmarco`, both stored as gzipped JSONL archives (`.jsonl.tar.gz`). |
pclucas14/narrative_qa_rag_128_32overlap_12_30 | pclucas14 | 2024-10-17T20:04:52Z | 20 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-17T20:04:50Z | 0 | ---
dataset_info:
features:
- name: text
sequence:
sequence: string
- name: questions
sequence: string
- name: answers
sequence:
sequence: string
- name: document_id
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 14693624
num_examples: 53
download_size: 5712122
dataset_size: 14693624
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
violetxi/NUMINA-V2-Clean-Blocks-9000_9500-0_500 | violetxi | 2024-11-30T18:55:51Z | 13 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-30T05:23:58Z | 0 | ---
dataset_info:
features:
- name: problem
dtype: string
- name: source
dtype: string
- name: is_correct
dtype: bool
- name: target_answer
dtype: string
- name: solution
dtype: string
- name: solution_steps
dtype: string
- name: attempts
dtype: string
- name: model_answer
dtype: string
splits:
- name: train
num_bytes: 2392376447
num_examples: 319344
download_size: 232342701
dataset_size: 2392376447
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
cfpark00/toy-multistep-nn_20-na_20-nab_120-seed_0 | cfpark00 | 2025-04-07T08:50:58Z | 15 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-07T08:50:49Z | 0 | ---
dataset_info:
features:
- name: prompts
dtype: string
- name: completions
dtype: string
- name: num_maskeds
dtype: int64
- name: texts
dtype: string
splits:
- name: train
num_bytes: 28363436
num_examples: 262144
- name: test_rl
num_bytes: 28304972
num_examples: 262144
- name: test
num_bytes: 28343692
num_examples: 262144
download_size: 38309485
dataset_size: 85012100
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test_rl
path: data/test_rl-*
- split: test
path: data/test-*
---
|
uzair921/SKILLSPAN_LLM_CONTEXT_3541_25 | uzair921 | 2024-10-09T08:56:35Z | 20 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-08T16:17:04Z | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-Skill
'2': I-Skill
splits:
- name: train
num_bytes: 1058810
num_examples: 2075
- name: validation
num_bytes: 715196
num_examples: 1397
- name: test
num_bytes: 758463
num_examples: 1523
download_size: 0
dataset_size: 2532469
---
# Dataset Card for "SKILLSPAN_LLM_CONTEXT_3541_25"
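The `ner_tags` column uses the three-way BIO label scheme declared in the YAML header above; decoding tag ids back to label names is straightforward (a minimal sketch in plain Python, standing in for `datasets.ClassLabel.int2str`):

```python
# Index order matches the class_label names in the YAML header: O, B-Skill, I-Skill.
SKILL_LABELS = ["O", "B-Skill", "I-Skill"]

def decode_tags(tag_ids):
    """Map integer ner_tags back to their BIO string labels."""
    return [SKILL_LABELS[i] for i in tag_ids]

print(decode_tags([0, 1, 2, 2, 0]))
# → ['O', 'B-Skill', 'I-Skill', 'I-Skill', 'O']
```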
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HovorunB/example_dataset | HovorunB | 2025-06-14T19:06:21Z | 0 | 0 | [
"task_categories:robotics",
"size_categories:n<1K",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us",
"phosphobot",
"so100",
"phospho-dk"
] | [
"robotics"
] | 2025-06-14T19:06:18Z | 0 |
---
tags:
- phosphobot
- so100
- phospho-dk
task_categories:
- robotics
---
# example_dataset
**This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
|
DanOgh07/train_icl_sample | DanOgh07 | 2025-03-11T19:37:33Z | 14 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-11T09:47:58Z | 0 | ---
dataset_info:
features:
- name: input
sequence: string
- name: target
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 104655883
num_examples: 1497
download_size: 1074376
dataset_size: 104655883
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_8665861c-1fc5-4a62-9195-7634e450401e | argilla-internal-testing | 2024-10-21T16:26:40Z | 19 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-21T16:26:39Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1454
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
artianand/bbq_deberta_v3_large_race_custom_loss_custom_dataset_bbq | artianand | 2025-04-15T14:13:41Z | 8 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-12T12:36:06Z | 0 | ---
dataset_info:
features:
- name: category
dtype: string
- name: example_id
dtype: int64
- name: question_index
dtype: int64
- name: question_polarity
dtype: string
- name: context_condition
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: ans0
dtype: string
- name: ans1
dtype: string
- name: ans2
dtype: string
- name: answer_info
struct:
- name: ans0
sequence: string
- name: ans1
sequence: string
- name: ans2
sequence: string
- name: answer_label
dtype: int64
- name: target_label
dtype: int64
- name: additional_metadata
struct:
- name: corr_ans_aligns_race
dtype: string
- name: corr_ans_aligns_var2
dtype: string
- name: full_cond
dtype: string
- name: known_stereotyped_groups
dtype: string
- name: known_stereotyped_race
sequence: string
- name: known_stereotyped_var2
dtype: string
- name: label_type
dtype: string
- name: relevant_social_values
dtype: string
- name: source
dtype: string
- name: stereotyped_groups
sequence: string
- name: subcategory
dtype: string
- name: version
dtype: string
- name: predictions
dtype: int64
splits:
- name: train
num_bytes: 46744528
num_examples: 58492
download_size: 2560656
dataset_size: 46744528
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
infinite-dataset-hub/ElClasicoLiveSentiment | infinite-dataset-hub | 2025-01-09T16:03:52Z | 12 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"infinite-dataset-hub",
"synthetic"
] | [] | 2025-01-09T16:03:50Z | 0 | ---
license: mit
tags:
- infinite-dataset-hub
- synthetic
---
# ElClasicoLiveSentiment
tags: Classification, Spanish, Football Comments
_Note: This is an AI-generated dataset so its content may be inaccurate or false_
**Dataset Description:**
The 'ElClasicoLiveSentiment' dataset contains real-time comments from live broadcasts of the El Clasico football matches. Each comment has been translated into Spanish and analyzed for sentiment. The dataset is structured to facilitate sentiment classification, focusing on expressions related to football events like passes, goals, and sprints. Comments have been anonymized and tagged with either positive, neutral, or negative sentiment labels. The CSV file contains a selection of comments along with their respective sentiment labels, formatted for easy consumption by machine learning practitioners interested in analyzing the emotional tone of live football commentary.
**CSV Content Preview:**
```
CommentID,CommentText,Label
1,"¡Esto es una asombrosa jugada! El pase es inimaginable.",Positive
2,"Estoy muy confundido con la situación actual en el campo.",Neutral
3,"¡El gol fue abominable, solo noche sin fútbol!",Negative
4,"Parece que el equipo X tiene una increíble sensibilidad en los pases.",Positive
5,"¿Cuál fue el movimiento que llevó al segundo gol? Gracioso!",Positive
```
**Source of the data:**
The dataset was generated using the [Infinite Dataset Hub](https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub) and microsoft/Phi-3-mini-4k-instruct using the query 'dataset of football game live comments in spanish for el clasico, positive/neutral/negative sentiment, quoted csv, comments on passes, goals, sprints etc.':
- **Dataset Generation Page**: https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub?q=dataset+of+football+game+live+comments+in+spanish+for+el+clasico,+positive/neutral/negative+sentiment,+quoted+csv,+comments+on+passes,+goals,+sprints+etc.&dataset=ElClasicoLiveSentiment&tags=Classification,+Spanish,+Football+Comments
- **Model**: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
- **More Datasets**: https://huggingface.co/datasets?other=infinite-dataset-hub
|
MexIvanov/RAG-v1-ruen | MexIvanov | 2024-11-11T07:32:40Z | 82 | 1 | [
"language:en",
"language:ru",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"rag",
"synthetic",
"code"
] | [] | 2024-11-11T07:21:12Z | 0 | ---
license: apache-2.0
language:
- en
- ru
tags:
- rag
- synthetic
- code
---
A version of the glaiveai/RAG-v1 dataset extended with machine translation into Russian for multilingual retrieval-augmented generation tasks.
Released under the same license as the original dataset. Provided as is, primarily (but not exclusively) for research use; use at your own risk. |
ahmedheakl/plotly_pie_chart | ahmedheakl | 2025-02-03T18:00:11Z | 22 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"datadreamer",
"datadreamer-0.35.0",
"synthetic",
"gpt-4o"
] | [] | 2025-02-03T17:59:32Z | 0 | ---
dataset_info:
features:
- name: metadata
dtype: string
- name: topic
dtype: string
- name: data
dtype: string
- name: code
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 312700.0
num_examples: 4
download_size: 305028
dataset_size: 312700.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
library_name: datadreamer
size_categories:
- n<1K
tags:
- datadreamer
- datadreamer-0.35.0
- synthetic
- gpt-4o
- gpt-4o
- gpt-4o
---
# Dataset Card
[Add more information here](https://huggingface.co/datasets/templates/dataset-card-example)
---
This dataset was produced with [DataDreamer 🤖💤](https://datadreamer.dev). The synthetic dataset card can be found [here](datadreamer.json). |
aklywtx/corpus_vi_sop | aklywtx | 2024-12-07T14:25:34Z | 14 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-07T14:20:20Z | 0 | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 2660889947
num_examples: 8222483
download_size: 1119754151
dataset_size: 2660889947
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
underctrl/handcamera_single_orange | underctrl | 2024-12-10T06:09:31Z | 84 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2024-12-10T03:36:32Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "unknown",
"total_episodes": 80,
"total_frames": 20566,
"total_tasks": 1,
"total_videos": 240,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:80"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": {
"motors": [
"motor_0",
"motor_1",
"motor_2",
"motor_3",
"motor_4",
"motor_5"
]
}
},
"observation.images.android": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channel"
],
"video_info": {
"video.fps": 30.0,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.webcam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channel"
],
"video_info": {
"video.fps": 30.0,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.handcam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channel"
],
"video_info": {
"video.fps": 30.0,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"action": {
"dtype": "float32",
"shape": [
6
],
"names": {
"motors": [
"motor_0",
"motor_1",
"motor_2",
"motor_3",
"motor_4",
"motor_5"
]
}
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"next.done": {
"dtype": "bool",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
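As a quick sanity check, the totals in the metadata above imply the average episode length directly (values copied from this card's `info.json`):

```python
# Totals from the info.json metadata above.
total_frames = 20566
total_episodes = 80
fps = 30

avg_frames = total_frames / total_episodes
avg_seconds = avg_frames / fps
print(f"{avg_frames:.1f} frames/episode, ~{avg_seconds:.1f} s at {fps} fps")
# → 257.1 frames/episode, ~8.6 s at 30 fps
```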
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
mlabonne/orca-agentinstruct-1M-v1-cleaned | mlabonne | 2025-01-25T16:01:27Z | 205 | 59 | [
"task_categories:question-answering",
"language:en",
"license:cdla-permissive-2.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"question-answering"
] | 2024-11-17T18:09:55Z | 0 | ---
language:
- en
license: cdla-permissive-2.0
size_categories:
- 1M<n<10M
task_categories:
- question-answering
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 4638101451
num_examples: 1046410
download_size: 2178041194
dataset_size: 4638101451
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# 🐋 Orca-AgentInstruct-1M-v1-cleaned
This is a cleaned version of the [microsoft/orca-agentinstruct-1M-v1](https://huggingface.co/datasets/microsoft/orca-agentinstruct-1M-v1) dataset released by Microsoft.
> orca-agentinstruct-1M-v1 is a fully synthetic dataset using only raw text publicly available on the web as seed data. It is a subset of the full AgentInstruct dataset (~25M samples) that created Orca-3-Mistral. Compared to Mistral 7B Instruct, the authors claim 40% improvement on AGIEval, 19% improvement on MMLU, 54% improvement on GSM8K, 38% improvement on BBH and 45% improvement on AlpacaEval.
Here's what I changed:
1. Splits are unified into one, with a new "split" column
2. Strings were converted into lists of dicts to ensure compatibility with most frameworks
3. Empty system prompts were removed so you don't get weird errors
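Changes 2 and 3 above — parsing serialized message strings into lists of role/content dicts and dropping empty system prompts — can be sketched as follows (a minimal illustration assuming a JSON serialization of the string column, not the exact conversion script):

```python
import json

def normalize_messages(raw: str) -> list:
    """Parse a serialized conversation and drop empty system prompts.

    Assumes `raw` is a JSON-encoded list of {"role", "content"} dicts;
    this mirrors the cleaning steps described above but is illustrative only.
    """
    messages = json.loads(raw)
    # Drop system messages whose content is empty or whitespace-only.
    return [
        m for m in messages
        if not (m["role"] == "system" and not m["content"].strip())
    ]

raw = json.dumps([
    {"role": "system", "content": ""},
    {"role": "user", "content": "What is 2 + 2?"},
    {"role": "assistant", "content": "4"},
])
print(normalize_messages(raw))
# → [{'role': 'user', 'content': 'What is 2 + 2?'}, {'role': 'assistant', 'content': '4'}]
```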
Data categories in the dataset:
- creative_content
- text_modification
- struct2text_flow
- rc
- rag
- text_extraction
- mcq
- follow_up
- analytical_reasoning
- fermi
- fs_cot_flow
- code_
- brain_teaser
- text_classification
- open_domain_q |
ketchup123/TeleEval_ShareGPT | ketchup123 | 2025-03-18T17:00:49Z | 15 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-18T17:00:43Z | 0 | ---
dataset_info:
features:
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 260048447
num_examples: 600000
download_size: 99217737
dataset_size: 260048447
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/openthoughts2_science_1k | mlfoundations-dev | 2025-05-06T05:34:31Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T05:34:30Z | 0 | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: _domain
dtype: string
- name: system
dtype: string
- name: problem
dtype: string
- name: reasoning
dtype: string
- name: deepseek_solution
dtype: string
- name: question
dtype: string
- name: source
dtype: string
- name: id
dtype: int64
- name: extracted_instruction
dtype: string
splits:
- name: train
num_bytes: 12487276.660572821
num_examples: 1000
download_size: 5577461
dataset_size: 12487276.660572821
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Procit004/sentiment | Procit004 | 2025-03-24T06:59:13Z | 7 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-24T06:59:06Z | 0 | ---
dataset_info:
features:
- name: full_text
dtype: string
- name: text_translation
dtype: string
- name: created_at
dtype: string
- name: screen_name
dtype: string
- name: description
dtype: string
- name: desc_translation
dtype: string
- name: weekofyear
dtype: float64
- name: weekday
dtype: float64
- name: day
dtype: float64
- name: month
dtype: float64
- name: year
dtype: float64
- name: location
dtype: string
- name: point_info
dtype: string
- name: point
dtype: string
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: altitude
dtype: float64
- name: province
dtype: string
- name: hisco_standard
dtype: string
- name: hisco_code
sequence: 'null'
- name: industry
dtype: bool
- name: sentiment_pattern
dtype: float64
- name: subjective_pattern
dtype: float64
splits:
- name: train
num_bytes: 18601833
num_examples: 27234
download_size: 8091136
dataset_size: 18601833
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gm168/GDPR_norms_extraction | gm168 | 2024-10-09T03:00:41Z | 25 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-04T00:39:58Z | 0 | ---
dataset_info:
features:
- name: gdpr_article_number
dtype: string
- name: cross_reference
dtype: string
- name: gdpr_article
dtype: string
- name: norms
dtype: string
- name: input
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 546596.5034965035
num_examples: 100
- name: test
num_bytes: 235036.4965034965
num_examples: 43
download_size: 195392
dataset_size: 781633.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
tenfoldpaper/pick-gummy4 | tenfoldpaper | 2025-06-14T16:38:00Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-06-14T16:37:54Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101_follower",
"total_episodes": 4,
"total_frames": 1778,
"total_tasks": 1,
"total_videos": 8,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:4"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.images.front": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.side": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
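The `data_path` and `video_path` templates in the JSON above can be resolved per episode. A minimal sketch — the `episode_paths` helper is ours, and the chunk-index rule `episode_index // chunks_size` is an assumption inferred from the `chunks_size` field, not stated on this card:

```python
# Sketch: resolving LeRobot's data_path / video_path templates for a given
# episode index. Assumes the chunk index is episode_index // chunks_size.
CHUNKS_SIZE = 1000

def episode_paths(episode_index: int, video_key: str = "observation.images.front"):
    chunk = episode_index // CHUNKS_SIZE
    data = f"data/chunk-{chunk:03d}/episode_{episode_index:06d}.parquet"
    video = f"videos/chunk-{chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"
    return data, video

data, video = episode_paths(3)
# data  -> 'data/chunk-000/episode_000003.parquet'
# video -> 'videos/chunk-000/observation.images.front/episode_000003.mp4'
```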
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
Self-GRIT/open-hermes-2.5-sft-active-retrieval-instruct-linq-wikiv2-prefix-suffix | Self-GRIT | 2024-10-02T08:20:13Z | 19 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-02T08:19:50Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 827209361
num_examples: 300430
download_size: 428840782
dataset_size: 827209361
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kothasuhas/gold-37M-1.5M-gens-4-30 | kothasuhas | 2025-05-01T11:47:44Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-01T11:44:17Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3575804917
num_examples: 1500000
- name: validation
num_bytes: 2451760
num_examples: 1000
download_size: 2098681302
dataset_size: 3578256677
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
smanni/train_so100_pick_place_double_purple_object | smanni | 2025-05-28T12:42:41Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-05-28T12:42:26Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 50,
"total_frames": 29850,
"total_tasks": 1,
"total_videos": 50,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:50"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.intel_realsense": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
c-ho/2025-05-16_processed_dev_df_with_linsearch_num | c-ho | 2025-05-16T17:58:28Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-16T14:48:34Z | 0 | ---
dataset_info:
features:
- name: filename
dtype: string
- name: text
dtype: string
- name: '@id'
dtype: string
- name: sameAs
dtype: string
- name: linsearch
dtype: string
- name: contributor
dtype: string
- name: identifier
dtype: string
- name: publisher
dtype: string
- name: subject
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: issued
dtype: string
- name: language
dtype: string
- name: license
dtype: string
- name: dcterms:subject
dtype: string
- name: all@id
dtype: string
- name: all_sameAs
dtype: string
- name: linsearch_num
dtype: int64
splits:
- name: train
num_bytes: 166022611
num_examples: 19359
download_size: 0
dataset_size: 166022611
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "2025-05-16_processed_dev_df_with_linsearch_num"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tmpmodelsave/llama3_70bcorr_type2_selftype1_beta01_500tmp10 | tmpmodelsave | 2025-01-20T15:56:53Z | 15 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-20T15:56:31Z | 0 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
- name: rewards
sequence: bool
splits:
- name: train
num_bytes: 16363291
num_examples: 5000
download_size: 5962535
dataset_size: 16363291
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AyaKhaled/test_csv_data3 | AyaKhaled | 2025-03-30T10:24:46Z | 52 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-30T10:15:35Z | 0 | ---
dataset_info:
features:
- name: image_path
dtype: string
- name: caption
dtype: string
- name: question
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 100028841.0
num_examples: 999
download_size: 99402588
dataset_size: 100028841.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
wyh1128/US-1M | wyh1128 | 2025-05-23T09:29:35Z | 299 | 0 | [
"task_categories:image-feature-extraction",
"language:en",
"license:apache-2.0",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us",
"medical"
] | [
"image-feature-extraction"
] | 2025-05-15T07:49:55Z | 0 | ---
license: apache-2.0
task_categories:
- image-feature-extraction
language:
- en
tags:
- medical
size_categories:
- 1M<n<10M
---
# Dataset Card for Dataset Name
US-1M establishes a new benchmark in medical imaging research, comprising 1.28 million rigorously annotated ultrasound images with precise anatomical structure delineations and comprehensive demographic metadata.
## Uses
## Dataset Structure

|
tlpss/roboflow-garlic | tlpss | 2024-12-16T19:27:47Z | 26 | 0 | [
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us",
"keypoint-detection"
] | [] | 2024-12-16T19:20:16Z | 0 | ---
size_categories:
- n<1K
tags:
- keypoint-detection
---
# Garlic Keypoint Detection dataset
This dataset contains 1000 images of a single garlic clove in a presumably industrial setting. The annotations are COCO-formatted and consist of a bounding box and two keypoints: head and tail.
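The annotation layout described above can be parsed as follows. This is a sketch with made-up values for illustration — the real coordinates live in the Roboflow export:

```python
# Sketch: reading one COCO-style keypoint annotation (values invented for
# illustration, not taken from the dataset).
coco = {
    "categories": [{"id": 1, "name": "garlic", "keypoints": ["head", "tail"]}],
    "annotations": [{
        "category_id": 1,
        "bbox": [10, 20, 50, 40],              # [x, y, width, height]
        "keypoints": [15, 25, 2, 55, 58, 2],   # [x, y, visibility] per keypoint
    }],
}

names = coco["categories"][0]["keypoints"]
kp = coco["annotations"][0]["keypoints"]
points = {names[i]: (kp[3 * i], kp[3 * i + 1], kp[3 * i + 2]) for i in range(len(names))}
print(points)  # {'head': (15, 25, 2), 'tail': (55, 58, 2)}
```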
The dataset was taken from https://universe.roboflow.com/gesture-recognition-dsn2n/garlic_keypoint/dataset/1. Refer to the original repo for licensing questions.
The annotation JSON files were slightly modified (formatting, image base directory, etc.) with respect to the original files. |
ZixuanKe/flare_finqa_sup_sample_from_policy_v1.1_dpo_train_chunk_18 | ZixuanKe | 2024-11-23T21:36:43Z | 18 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-23T21:36:41Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 5389539
num_examples: 1096
download_size: 571451
dataset_size: 5389539
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
test-gen/code_livecodebench_qwen2.5-7b_t1.0_n8_tests_livecodebench_qwen3-4b-easy-unique_t0.0_n1 | test-gen | 2025-05-21T17:19:15Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-21T17:19:12Z | 0 | ---
dataset_info:
features:
- name: question_title
dtype: string
- name: question_content
dtype: string
- name: question_id
dtype: string
- name: contest_id
dtype: string
- name: test_id
dtype: int64
- name: contest_date
dtype: timestamp[us]
- name: starter_code
dtype: string
- name: function_name
dtype: string
- name: difficulty
dtype: string
- name: test
dtype: string
- name: generated_code
sequence: string
- name: gt_rewards
sequence: float64
- name: rewards
sequence: float64
- name: verification_info
struct:
- name: language
dtype: string
- name: test_cases
sequence: string
splits:
- name: test
num_bytes: 2199516
num_examples: 182
download_size: 874946
dataset_size: 2199516
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
neelabh17/new_news_exploded_prompt_n_5_d_perc_60_num_gen_10_Qwen2.5-7B-Instruct | neelabh17 | 2025-05-15T15:12:35Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-15T15:12:34Z | 0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: name
dtype: string
- name: topic
dtype: string
- name: news
dtype: string
- name: category
dtype: string
- name: question
dtype: string
- name: option
sequence: string
- name: prompt
dtype: string
- name: response_0
dtype: string
- name: answer_0
dtype: string
- name: correct_0
dtype: int64
- name: response_1
dtype: string
- name: answer_1
dtype: string
- name: correct_1
dtype: int64
- name: response_2
dtype: string
- name: answer_2
dtype: string
- name: correct_2
dtype: int64
- name: response_3
dtype: string
- name: answer_3
dtype: string
- name: correct_3
dtype: int64
- name: response_4
dtype: string
- name: answer_4
dtype: string
- name: correct_4
dtype: int64
- name: response_5
dtype: string
- name: answer_5
dtype: string
- name: correct_5
dtype: int64
- name: response_6
dtype: string
- name: answer_6
dtype: string
- name: correct_6
dtype: int64
- name: response_7
dtype: string
- name: answer_7
dtype: string
- name: correct_7
dtype: int64
- name: response_8
dtype: string
- name: answer_8
dtype: string
- name: correct_8
dtype: int64
- name: response_9
dtype: string
- name: answer_9
dtype: string
- name: correct_9
dtype: int64
splits:
- name: train
num_bytes: 2770900
num_examples: 375
download_size: 1027157
dataset_size: 2770900
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HumanoidTeam/demo-maltesers | HumanoidTeam | 2024-12-19T22:17:10Z | 28 | 0 | [
"task_categories:robotics",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2024-12-19T22:15:33Z | 0 | ---
task_categories:
- robotics
tags:
- LeRobot
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
|
mamung/reddit_dataset_192 | mamung | 2025-02-28T07:58:02Z | 9 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-02-26T20:54:42Z | 0 | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 Reddit Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** mamung/reddit_dataset_192
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5DtfQfETXqrW9JAEBgqZcidaHaxogXgWSMrQ8Qotzth9GCsr
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Topic Modeling
- Community Analysis
- Content Categorization
### Languages
Primary language: the data is mostly English, but it may be multilingual because it is collected in a decentralized manner.
## Dataset Structure
### Data Instances
Each instance represents a single Reddit post or comment with the following fields:
### Data Fields
- `text` (string): The main content of the Reddit post or comment.
- `label` (string): Sentiment or topic category of the content.
- `dataType` (string): Indicates whether the entry is a post or a comment.
- `communityName` (string): The name of the subreddit where the content was posted.
- `datetime` (string): The date when the content was posted or commented.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the content.
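Records with the fields above can be split by type using `dataType`. A minimal sketch — the sample rows are invented for illustration, not real data:

```python
# Sketch: separating posts from comments via the dataType field described
# above (sample rows are invented, not drawn from the dataset).
rows = [
    {"text": "Great build!", "dataType": "comment", "communityName": "r/PcBuild",
     "datetime": "2025-02-27", "label": "positive"},
    {"text": "My first PC", "dataType": "post", "communityName": "r/PcBuild",
     "datetime": "2025-02-26", "label": "neutral"},
]

posts = [r for r in rows if r["dataType"] == "post"]
comments = [r for r in rows if r["dataType"] == "comment"]
print(len(posts), len(comments))  # 1 1
```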
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
## Dataset Creation
### Source Data
Data is collected from public posts and comments on Reddit, adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in Reddit data, including demographic and content biases. This dataset reflects the content and opinions expressed on Reddit and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the nature of social media content.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public subreddits and does not include private or restricted communities.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to Reddit Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{mamung2025datauniversereddit_dataset_192,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={mamung},
year={2025},
url={https://huggingface.co/datasets/mamung/reddit_dataset_192},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 17425
- **Date Range:** 2024-11-08T00:00:00Z to 2025-02-28T00:00:00Z
- **Last Updated:** 2025-02-28T07:58:01Z
### Data Distribution
- Posts: 24.25%
- Comments: 75.75%
### Top 10 Subreddits
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | r/teenagers | 2361 | 13.55% |
| 2 | r/NoStupidQuestions | 2295 | 13.17% |
| 3 | r/Advice | 2205 | 12.65% |
| 4 | r/Monopoly_GO | 1935 | 11.10% |
| 5 | r/dirtyr4r | 1709 | 9.81% |
| 6 | r/AITAH | 1629 | 9.35% |
| 7 | r/videogames | 1365 | 7.83% |
| 8 | r/AmIOverreacting | 1320 | 7.58% |
| 9 | r/PcBuild | 1296 | 7.44% |
| 10 | r/moviecritic | 1033 | 5.93% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-02-26T20:54:44Z | 198 | 198 |
| 2025-02-27T14:34:13Z | 6080 | 6278 |
| 2025-02-28T07:58:01Z | 11147 | 17425 |
|
fatlonder/i24 | fatlonder | 2024-11-22T03:02:49Z | 49 | 0 | [
"license:other",
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-21T23:08:22Z | 0 | ---
license: other
license_name: i24
license_link: LICENSE
---
Data use agreement
By acquiring and using the I-24 MOTION data, you agree to the following data use agreement (updated 19 September 2023).
You are free to use the data in academic and commercial work.
The dataset contains anonymous trajectories. Any activities to re-identify individuals in the dataset or activities that may cause harm to individuals in the dataset are prohibited.
When you use I-24 MOTION data in published academic work, you are required to include the following citation contents. This allows us to aggregate statistics on the data use in publications:
Gloudemans, D., Wang, Y., Ji, J., Zachar, G., Barbour, W., Hall, E., Cebelak, M., Smith, L. and Work, D.B., 2023. I-24 MOTION: An instrument for freeway traffic science. Transportation Research Part C: Emerging Technologies, 155, p.104311.
The bibtex version of the reference is:
@article{gloudemans202324,
title={I-24 MOTION: An instrument for freeway traffic science},
author={Gloudemans, Derek and Wang, Yanbing and Ji, Junyi and Zachar, Gergely and Barbour, William and Hall, Eric and Cebelak, Meredith and Smith, Lee and Work, Daniel B},
journal={Transportation Research Part C: Emerging Technologies},
volume={155},
pages={104311},
year={2023},
publisher={Elsevier}
}
You are free to create and share derivative products as long as you maintain the terms above.
The data is provided “As is.” We make no other warranties, express or implied, and hereby disclaim all implied warranties, including any warranty of merchantability and warranty of fitness for a particular purpose.
|
WPRM/preference_data_llama_factory_corrected_format_text_only | WPRM | 2025-04-21T07:18:25Z | 27 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-21T07:12:16Z | 0 | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: chosen
struct:
- name: from
dtype: string
- name: value
dtype: string
- name: rejected
struct:
- name: from
dtype: string
- name: value
dtype: string
- name: images
sequence: image
- name: task_id
dtype: int32
- name: step_id
dtype: int32
- name: source_name
dtype: string
splits:
- name: train
num_bytes: 9760903394.289
num_examples: 30921
download_size: 2957818091
dataset_size: 9760903394.289
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kornwtp/sts17-crosslingual-mya-sts | kornwtp | 2025-03-09T15:06:19Z | 21 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-03T06:06:00Z | 0 | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: float64
splits:
- name: test
num_bytes: 50241.824
num_examples: 248
download_size: 22364
dataset_size: 50241.824
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
BIT-MJY/test_tube_pick | BIT-MJY | 2025-06-16T12:55:45Z | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T08:32:22Z | 0 | ---
dataset_info:
features:
- name: image
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 332501.80144879816
num_examples: 2733
- name: val
num_bytes: 18492.599275600922
num_examples: 152
- name: test
num_bytes: 18492.599275600922
num_examples: 152
download_size: 31623
dataset_size: 369487.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
---
|
Yiheyihe/galaxea-r1-shelf-debug | Yiheyihe | 2025-02-26T21:04:35Z | 37 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-02-14T12:11:17Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": null,
"total_episodes": 1,
"total_frames": 508,
"total_tasks": 1,
"total_videos": 3,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"observation.state": {
"dtype": "float32",
"shape": [
21
]
},
"action": {
"dtype": "float32",
"shape": [
21
]
},
"observation.images.head": {
"dtype": "video",
"shape": [
3,
94,
168
],
"names": [
"channels",
"height",
"width"
],
"info": {
"video.fps": 30.0,
"video.height": 94,
"video.width": 168,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.left_wrist": {
"dtype": "video",
"shape": [
3,
94,
168
],
"names": [
"channels",
"height",
"width"
],
"info": {
"video.fps": 30.0,
"video.height": 94,
"video.width": 168,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.right_wrist": {
"dtype": "video",
"shape": [
3,
94,
168
],
"names": [
"channels",
"height",
"width"
],
"info": {
"video.fps": 30.0,
"video.height": 94,
"video.width": 168,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
binyoonan/common | binyoonan | 2025-04-13T02:14:07Z | 16 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-13T02:09:36Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 11330258
num_examples: 25166
download_size: 4733278
dataset_size: 11330258
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
MayAlsofyani/balanced_manybugs_few1 | MayAlsofyani | 2025-01-28T20:02:37Z | 18 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-28T20:02:36Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: response
dtype: string
splits:
- name: train
num_bytes: 172253
num_examples: 42
download_size: 76897
dataset_size: 172253
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
vGassen/EU-Parliament-Minutes-Dutch | vGassen | 2025-06-17T14:14:27Z | 0 | 0 | [
"license:cc-by-4.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-17T13:51:31Z | 0 | ---
license: cc-by-4.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: URL
dtype: string
- name: text
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 7219277
num_examples: 324
download_size: 2389564
dataset_size: 7219277
---
# Dutch-European-Parliament-Minutes

This dataset contains cleaned Dutch minutes from the plenary sessions of the European Parliament. The texts are scraped directly from the official website using `scraper.py` in this repository.

## Dataset Structure

Each record includes:

- `URL` – the original source link to the XML file.
- `text` – the cleaned minutes text with procedural content removed.
- `source` – a fixed string identifying the data as "European Parliament Minutes".

Example:

```json
{
  "URL": "https://www.europarl.europa.eu/doceo/document/PV-10-2025-06-16_NL.xml",
  "text": "De vergadering wordt om 14.00 uur geopend. ...",
  "source": "European Parliament Minutes"
}
```

## How to Use

Load the dataset directly from the Hugging Face Hub:

```python
from datasets import load_dataset

dataset = load_dataset("vGassen/EU-Parliament-Minutes-Dutch", split="train")
```

You can then iterate over `dataset` or convert it to other formats supported by the `datasets` library.

## Source and Collection

The scraper starts from a given "table of contents" page (e.g. PV‑10‑2025‑06‑16‑TOC_NL.html) and follows the "Vorige" link to work backwards through parliamentary terms, currently stopping at term 6. Each corresponding XML minutes file is downloaded, cleaned, and converted into the dataset format.

The script excludes the "PRESENTIELIJST" section and other procedural notes so that each record focuses on the actual debate text.

## License

The compiled dataset is released under CC-BY‑4.0.

According to the European Parliament website, older minutes use an equivalent open license (e.g. CC-BY‑SA or another permissive license). Please verify individual documents if you have specific reuse requirements.

When using this dataset, please credit "European Parliament" and, if applicable, link back to the Hugging Face dataset page. |
Alok2304/Indian_Property_Law | Alok2304 | 2025-03-18T17:41:22Z | 14 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-18T17:41:17Z | 0 | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 108550
num_examples: 500
download_size: 7341
dataset_size: 108550
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
BarryFutureman/vpt_data_8xx_shard0110 | BarryFutureman | 2025-06-11T01:33:25Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-06-11T01:31:27Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": null,
"total_episodes": 10,
"total_frames": 54233,
"total_tasks": 1,
"total_videos": 0,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 20,
"splits": {
"train": "0:10"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.image": {
"dtype": "image",
"shape": [
3,
360,
640
],
"names": [
"channel",
"height",
"width"
]
},
"action": {
"dtype": "string",
"shape": [
1
],
"names": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
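A few derived statistics can be read straight off `meta/info.json`. As a sketch, the average per-episode frame count and episode duration follow from `total_frames`, `total_episodes`, and `fps`:

```python
import json

# A minimal subset of the meta/info.json shown above.
info = json.loads("""
{
  "total_episodes": 10,
  "total_frames": 54233,
  "fps": 20
}
""")

# Average frames per episode, and average episode length in seconds at 20 fps.
avg_frames = info["total_frames"] / info["total_episodes"]
avg_seconds = avg_frames / info["fps"]
print(avg_frames, avg_seconds)
```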
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
CodeSoft/BeatSaberLM-Dataset | CodeSoft | 2025-04-23T23:30:08Z | 16 | 0 | [
"task_categories:question-answering",
"language:en",
"license:mit",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"beatsaber"
] | [
"question-answering"
] | 2025-04-23T23:18:58Z | 0 | ---
license: mit
task_categories:
- question-answering
language:
- en
tags:
- beatsaber
pretty_name: BeatSaberLM
size_categories:
- n<1K
---
# BeatSaberLM
BeatSaberLM is a conversational QA dataset built for the virtual reality video game Beat Saber. It includes prompts and detailed completions derived from community experience, guides, and technical modding instructions. Data was auto-generated and manually filtered for quality, but some inaccuracies may still be present.
## Features
- Covers gameplay, modding, difficulty levels, saber types, and more
- Focused on PC and Quest platforms
- Structured as prompt-completion pairs for language model fine-tuning or inference
- Continuously expanding based on community input and updates to Beat Saber
## Usage
BeatSaberLM is ideal for training or evaluating models on Beat Saber-specific virtual assistant tasks such as modding, mapping, or gameplay questions.
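The exact field names are an assumption (the card only describes prompt-completion pairs); a minimal sketch of turning such records into chat-style messages for fine-tuning might look like:

```python
# Hypothetical record shape; the field names ("prompt", "completion") are
# assumed for illustration, not confirmed by the card.
record = {
    "prompt": "How do I install mods on the Quest version of Beat Saber?",
    "completion": "Community mod installers are commonly used for this...",
}

def to_chat(example: dict) -> dict:
    # Convert one prompt-completion pair into a chat-style messages list.
    return {
        "messages": [
            {"role": "user", "content": example["prompt"]},
            {"role": "assistant", "content": example["completion"]},
        ]
    }

chat = to_chat(record)
```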
## Limitations
- Dataset was filtered manually, but some answers may still be outdated or partially incorrect
- Some answers may reflect opinions from the community or common practices rather than official sources
## Updates
This dataset will be updated from time to time.
## Licensing
This dataset is released under the MIT License.
## How to Cite
If you use this dataset in your research or projects, please cite it as follows:
```bibtex
@misc{codesoft2025beatsaberlm,
author = "CodeSoft",
title = "BeatSaberLM Dataset",
year = 2025,
note = "https://huggingface.co/datasets/CodeSoft/BeatSaberLM-Dataset"
}
```
---
For contributions or suggestions, reach out to me in the community tab. |
GitBag/Qwen2.5-7B-tldr-val | GitBag | 2025-04-10T12:15:30Z | 8 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-10T06:43:12Z | 0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: subreddit
dtype: string
- name: title
dtype: string
- name: post
dtype: string
- name: summary
dtype: string
- name: query_token
sequence: int64
- name: query
dtype: string
- name: reference_response
dtype: string
- name: reference_response_token
sequence: int64
- name: reference_response_token_len
dtype: int64
- name: query_reference_response
dtype: string
- name: query_reference_response_token
sequence: int64
- name: query_reference_response_token_len
dtype: int64
- name: response_0
dtype: string
- name: response_0_rouge_1
dtype: float64
- name: response_0_rouge_2
dtype: float64
- name: response_0_rouge_L
dtype: float64
- name: response_0_Qwen2.5-0.5B_logprobs
dtype: float64
- name: response_0_Qwen2.5-1.5B_logprobs
dtype: float64
- name: response_0_Qwen2.5-3B_logprobs
dtype: float64
- name: response_0_Qwen2.5-7B_logprobs
dtype: float64
- name: response_0_Qwen2.5-0.5B-Instruct_logprobs
dtype: float64
- name: response_0_Qwen2.5-1.5B-Instruct_logprobs
dtype: float64
- name: response_0_Qwen2.5-3B-Instruct_logprobs
dtype: float64
- name: response_0_Qwen2.5-7B-Instruct_logprobs
dtype: float64
- name: response_1
dtype: string
- name: response_1_rouge_1
dtype: float64
- name: response_1_rouge_2
dtype: float64
- name: response_1_rouge_L
dtype: float64
- name: response_1_Qwen2.5-0.5B_logprobs
dtype: float64
- name: response_1_Qwen2.5-1.5B_logprobs
dtype: float64
- name: response_1_Qwen2.5-3B_logprobs
dtype: float64
- name: response_1_Qwen2.5-7B_logprobs
dtype: float64
- name: response_1_Qwen2.5-0.5B-Instruct_logprobs
dtype: float64
- name: response_1_Qwen2.5-1.5B-Instruct_logprobs
dtype: float64
- name: response_1_Qwen2.5-3B-Instruct_logprobs
dtype: float64
- name: response_1_Qwen2.5-7B-Instruct_logprobs
dtype: float64
- name: response_2
dtype: string
- name: response_2_rouge_1
dtype: float64
- name: response_2_rouge_2
dtype: float64
- name: response_2_rouge_L
dtype: float64
- name: response_2_Qwen2.5-0.5B_logprobs
dtype: float64
- name: response_2_Qwen2.5-1.5B_logprobs
dtype: float64
- name: response_2_Qwen2.5-3B_logprobs
dtype: float64
- name: response_2_Qwen2.5-7B_logprobs
dtype: float64
- name: response_2_Qwen2.5-0.5B-Instruct_logprobs
dtype: float64
- name: response_2_Qwen2.5-1.5B-Instruct_logprobs
dtype: float64
- name: response_2_Qwen2.5-3B-Instruct_logprobs
dtype: float64
- name: response_2_Qwen2.5-7B-Instruct_logprobs
dtype: float64
- name: response_3
dtype: string
- name: response_3_rouge_1
dtype: float64
- name: response_3_rouge_2
dtype: float64
- name: response_3_rouge_L
dtype: float64
- name: response_3_Qwen2.5-0.5B_logprobs
dtype: float64
- name: response_3_Qwen2.5-1.5B_logprobs
dtype: float64
- name: response_3_Qwen2.5-3B_logprobs
dtype: float64
- name: response_3_Qwen2.5-7B_logprobs
dtype: float64
- name: response_3_Qwen2.5-0.5B-Instruct_logprobs
dtype: float64
- name: response_3_Qwen2.5-1.5B-Instruct_logprobs
dtype: float64
- name: response_3_Qwen2.5-3B-Instruct_logprobs
dtype: float64
- name: response_3_Qwen2.5-7B-Instruct_logprobs
dtype: float64
- name: response_4
dtype: string
- name: response_4_rouge_1
dtype: float64
- name: response_4_rouge_2
dtype: float64
- name: response_4_rouge_L
dtype: float64
- name: response_4_Qwen2.5-0.5B_logprobs
dtype: float64
- name: response_4_Qwen2.5-1.5B_logprobs
dtype: float64
- name: response_4_Qwen2.5-3B_logprobs
dtype: float64
- name: response_4_Qwen2.5-7B_logprobs
dtype: float64
- name: response_4_Qwen2.5-0.5B-Instruct_logprobs
dtype: float64
- name: response_4_Qwen2.5-1.5B-Instruct_logprobs
dtype: float64
- name: response_4_Qwen2.5-3B-Instruct_logprobs
dtype: float64
- name: response_4_Qwen2.5-7B-Instruct_logprobs
dtype: float64
splits:
- name: validation
num_bytes: 98096668
num_examples: 6447
download_size: 33846237
dataset_size: 98096668
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
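The per-response ROUGE and log-probability columns in the schema above lend themselves to simple reranking. As a sketch (column names taken from the schema, row values fabricated for illustration), one might pick the response to which a given model assigns the highest log-probability:

```python
# Sketch: best-of-5 selection using the logprob columns from the schema above.
# The row below is fabricated; real rows come from the dataset itself.
row = {
    "response_0": "summary A", "response_0_Qwen2.5-7B_logprobs": -42.1,
    "response_1": "summary B", "response_1_Qwen2.5-7B_logprobs": -37.8,
    "response_2": "summary C", "response_2_Qwen2.5-7B_logprobs": -55.0,
    "response_3": "summary D", "response_3_Qwen2.5-7B_logprobs": -40.3,
    "response_4": "summary E", "response_4_Qwen2.5-7B_logprobs": -61.9,
}

def best_response(row: dict, model: str = "Qwen2.5-7B") -> str:
    # Higher (less negative) log-probability wins.
    best = max(range(5), key=lambda i: row[f"response_{i}_{model}_logprobs"])
    return row[f"response_{best}"]

print(best_response(row))  # the response with the highest logprob
```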
|
ZHIYII/successful_finite_inf_1000_all_parts | ZHIYII | 2025-04-08T07:40:19Z | 7 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-08T06:14:07Z | 0 | ---
dataset_info:
- config_name: default
features:
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 10468815648
num_examples: 1793234
download_size: 4531851197
dataset_size: 10468815648
- config_name: test
features:
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 531454512
num_examples: 92978
download_size: 99427549
dataset_size: 531454512
- config_name: train
features:
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 10468815648
num_examples: 1793234
download_size: 4531851197
dataset_size: 10468815648
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: test
data_files:
- split: train
path: test/train-*
- config_name: train
data_files:
- split: train
path: train/train-*
---
|
sylvan54/Bean_Captioning_Dataset_1 | sylvan54 | 2025-01-21T10:50:34Z | 14 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-21T10:50:22Z | 0 | ---
dataset_info:
features:
- name: Image
sequence:
sequence:
sequence: uint8
- name: plant_description
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 271556338
num_examples: 771
download_size: 117703764
dataset_size: 271556338
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
oakwood/efe_br-35 | oakwood | 2024-10-13T02:40:22Z | 15 | 0 | [
"task_categories:robotics",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2024-10-13T02:40:14Z | 0 | ---
task_categories:
- robotics
tags:
- LeRobot
---
This dataset was created using [🤗 LeRobot](https://github.com/huggingface/lerobot).
|
umang122104/RM-Bench-code-Mistral-7B-Instruct-v0.3-yesno | umang122104 | 2025-06-23T21:48:12Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-23T21:48:10Z | 0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
dtype: string
- name: subset
dtype: string
- name: language
dtype: string
- name: task_id
dtype: string
- name: chosen_1
dtype: string
- name: chosen_2
dtype: string
- name: chosen_3
dtype: string
- name: rejected_1
dtype: string
- name: rejected_2
dtype: string
- name: rejected_3
dtype: string
- name: chosen_1_yes_prob
dtype: float64
- name: chosen_1_no_prob
dtype: float64
- name: rejected_1_yes_prob
dtype: float64
- name: rejected_1_no_prob
dtype: float64
- name: chosen_2_yes_prob
dtype: float64
- name: chosen_2_no_prob
dtype: float64
- name: rejected_2_yes_prob
dtype: float64
- name: rejected_2_no_prob
dtype: float64
- name: chosen_3_yes_prob
dtype: float64
- name: chosen_3_no_prob
dtype: float64
- name: rejected_3_yes_prob
dtype: float64
- name: rejected_3_no_prob
dtype: float64
splits:
- name: train
num_bytes: 2576722
num_examples: 228
download_size: 1100364
dataset_size: 2576722
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
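A natural use of the yes/no probability columns above is checking whether the judge model prefers the chosen completion over the rejected one. A sketch, with column names from the schema and fabricated row values:

```python
# Sketch: per-pair preference check using the yes/no probability columns.
row = {  # fabricated values for illustration
    "chosen_1_yes_prob": 0.81, "chosen_1_no_prob": 0.19,
    "rejected_1_yes_prob": 0.34, "rejected_1_no_prob": 0.66,
}

def prefers_chosen(row: dict, k: int = 1) -> bool:
    # The judge "prefers" the chosen completion when it assigns it a
    # higher P(yes) than the rejected completion.
    return row[f"chosen_{k}_yes_prob"] > row[f"rejected_{k}_yes_prob"]

print(prefers_chosen(row))
```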
|
nimashoghi/matbench_mp_gap_fold3 | nimashoghi | 2025-03-12T20:52:23Z | 65 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-12T20:52:19Z | 0 | ---
dataset_info:
features:
- name: orig_idx
dtype: int64
- name: positions
sequence:
sequence: float32
- name: atomic_numbers
sequence: int64
- name: natoms
dtype: int64
- name: tags
sequence: int64
- name: fixed
sequence: bool
- name: cell
sequence:
sequence:
sequence: float32
- name: pbc
sequence:
sequence: bool
- name: y
dtype: float64
splits:
- name: train
num_bytes: 81772005
num_examples: 76401
- name: val
num_bytes: 8912098
num_examples: 8490
- name: test
num_bytes: 22709105
num_examples: 21222
download_size: 57276627
dataset_size: 113393208
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
---
|
yasimed/Cifar100_test | yasimed | 2025-02-19T02:32:01Z | 38 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-19T02:31:58Z | 0 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': apple
'1': aquarium_fish
'2': baby
'3': bear
'4': beaver
'5': bed
'6': bee
'7': beetle
'8': bicycle
'9': bottle
'10': bowl
'11': boy
'12': bridge
'13': bus
'14': butterfly
'15': camel
'16': can
'17': castle
'18': caterpillar
'19': cattle
'20': chair
'21': chimpanzee
'22': clock
'23': cloud
'24': cockroach
'25': couch
'26': crab
'27': crocodile
'28': cup
'29': dinosaur
'30': dolphin
'31': elephant
'32': flatfish
'33': forest
'34': fox
'35': girl
'36': hamster
'37': house
'38': kangaroo
'39': keyboard
'40': lamp
'41': lawn_mower
'42': leopard
'43': lion
'44': lizard
'45': lobster
'46': man
'47': maple_tree
'48': motorcycle
'49': mountain
'50': mouse
'51': mushroom
'52': oak_tree
'53': orange
'54': orchid
'55': otter
'56': palm_tree
'57': pear
'58': pickup_truck
'59': pine_tree
'60': plain
'61': plate
'62': poppy
'63': porcupine
'64': possum
'65': rabbit
'66': raccoon
'67': ray
'68': road
'69': rocket
'70': rose
'71': sea
'72': seal
'73': shark
'74': shrew
'75': skunk
'76': skyscraper
'77': snail
'78': snake
'79': spider
'80': squirrel
'81': streetcar
'82': sunflower
'83': sweet_pepper
'84': table
'85': tank
'86': telephone
'87': television
'88': tiger
'89': tractor
'90': train
'91': trout
'92': tulip
'93': turtle
'94': wardrobe
'95': whale
'96': willow_tree
'97': wolf
'98': woman
'99': worm
splits:
- name: train
num_bytes: 22484261.0
num_examples: 10000
download_size: 22551810
dataset_size: 22484261.0
---
# Dataset Card for "Cifar100_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |