datasetId (large_string, lengths 6–116) | author (large_string, lengths 2–42) | last_modified (large_string date, 2021-04-29 15:34:29 – 2025-06-25 02:40:10) | downloads (int64, 0–3.97M) | likes (int64, 0–7.74k) | tags (large list, lengths 1–7.92k) | task_categories (large list, lengths 0–48) | createdAt (large_string date, 2022-03-02 23:29:22 – 2025-06-25 00:32:52) | trending_score (float64, 0–64) | card (large_string, lengths 31–1.01M) |
---|---|---|---|---|---|---|---|---|---|
copycat-project/output_diversity_FLUX_guidance1.5 | copycat-project | 2024-11-11T13:46:22Z | 22 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-11T13:45:45Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: category
dtype: string
- name: ours
list: image
- name: baseline
list: image
splits:
- name: train
num_bytes: 786287182.0
num_examples: 120
download_size: 786325395
dataset_size: 786287182.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Triangle104/TheDrummer-AmoralQA-v2 | Triangle104 | 2024-12-01T09:18:45Z | 152 | 0 | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"question-answering",
"text-generation"
] | 2024-12-01T09:18:45Z | 0 | ---
task_categories:
- question-answering
- text-generation
language:
- en
size_categories:
- 1K<n<10K
---
## Dear User: If you're going to use this, please let me know your experience with it.
# AmoralQA v2
> Readers, like writers, are essentially amoral. Arm's length will never do. We want to get closer.
## Description
AmoralQA takes a different approach from ToxicQA: instead of having the AI answer enthusiastically with lots of evil slop such as "thrilling", we force it to answer toxic questions in a neutral manner.
This should help reduce lobotomy, evil slop, disalignment, and overcooking.

 |
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_2f0e5c4a-dc0b-4770-93a5-8ace453dd654 | argilla-internal-testing | 2024-11-28T16:20:29Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-28T16:20:29Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
pariajm/sharif_emotional_speech_dataset | pariajm | 2022-10-24T16:49:19Z | 25 | 2 | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:radio-plays",
"language:fa",
"license:apache-2.0",
"size_categories:1K<n<10K",
"region:us"
] | [
"automatic-speech-recognition"
] | 2022-03-02T23:29:22Z | 0 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- fa
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: Sharif Emotional Speech Dataset (ShEMO)
size_categories:
- 1K<n<10K
source_datasets:
- radio-plays
task_categories:
- automatic-speech-recognition
task_ids:
- speech-recognition
---
## Sharif Emotional Speech Dataset (ShEMO)
## Dataset Summary
The dataset includes 3000 semi-natural utterances, equivalent to 3 hours and 25 minutes of speech data extracted from online Persian radio plays. ShEMO covers speech samples from 87 native Persian speakers for five basic emotions, including <i>anger</i>, <i>fear</i>, <i>happiness</i>, <i>sadness</i> and <i>surprise</i>, as well as the neutral state. Twelve annotators labeled the underlying emotional state of each utterance, and majority voting was used to decide the final labels. According to the kappa measure,
the inter-annotator agreement is 64%, which is interpreted as "substantial agreement".
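The majority-voting step described above can be sketched as follows. This is a minimal illustration of the aggregation rule only; the function name and the vote format are assumptions, not part of the ShEMO release:

```python
from collections import Counter

def majority_label(annotations):
    """Return the most frequent label among annotator votes.

    Ties fall back to Counter's ordering; a real pipeline would need
    an explicit tie-breaking or discard policy for split votes.
    """
    counts = Counter(annotations)
    label, _ = counts.most_common(1)[0]
    return label

# Twelve hypothetical annotator votes for a single utterance:
votes = ["anger"] * 7 + ["neutral"] * 3 + ["sadness"] * 2
print(majority_label(votes))  # anger
```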
## Languages
Persian (fa)
## Overview of ShEMO
Feature | Status
------------- | ----------
**license** | apache-2.0
**language** | Persian (fa)
**modality** | Speech
**duration** | 3 hours and 25 minutes
**#utterances** | 3000
**#speakers** | 87 (31 females, 56 males)
**#emotions** | 5 basic emotions (anger, fear, happiness, sadness and surprise) and neutral state
**orthographic transcripts** | Available
**phonetic transcripts** | Available
## Data Instances
Here is a sample of data instances:
```json
"F21N37": {
"speaker_id": "F21",
"gender": "female",
"emotion": "neutral",
"transcript": "مگه من به تو نگفته بودم که باید راجع به دورانت سکوت کنی؟",
"ipa": "mӕge mæn be to nægofte budӕm ke bɑyæd rɑdʒeʔ be dorɑnt sokut koni"
}
```
## Citation
If you use this dataset, please cite the following paper:
~~~~
@Article{MohamadNezami2019,
author = {Mohamad Nezami, Omid and Jamshid Lou, Paria and Karami, Mansoureh},
title = {ShEMO: a large-scale validated database for Persian speech emotion detection},
journal = {Language Resources and Evaluation},
year = {2019},
volume = {53},
number = {1},
pages = {1--16},
issn = {1574-0218},
doi = {10.1007/s10579-018-9427-x},
url = {https://doi.org/10.1007/s10579-018-9427-x}
}
~~~~
## Download Dataset
To download the dataset, please check the [ShEMO repo](https://github.com/pariajm/sharif-emotional-speech-database)! |
JJuny/llama2_SYC_1120_with_testPDF_train | JJuny | 2024-11-20T00:34:39Z | 15 | 0 | [
"region:us"
] | [] | 2024-11-20T00:33:55Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 533450
num_examples: 71
download_size: 145605
dataset_size: 533450
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
JJYDXFS/LifeTrajectory_5M | JJYDXFS | 2025-06-15T13:10:30Z | 32 | 0 | [
"language:en",
"license:mit",
"size_categories:1M<n<10M",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-14T15:22:29Z | 0 | ---
license: mit
language:
- en
---
# Dataset Details
This dataset contains **over 5 million spatio-temporal life trajectory triplets** automatically extracted from 1.9 million biography pages on English Wikipedia.
This is a release from our paper [Paths of A Million People: Extracting Life Trajectories from Wikipedia](https://ojs.aaai.org/index.php/ICWSM/article/view/35930), so please cite it if using this dataset.
# Citation
```
@inproceedings{zhang2025paths,
title={Paths of A Million People: Extracting Life Trajectories from Wikipedia},
author={Zhang, Ying and Li, Xiaofeng and Liu, Zhaoyang and Zhang, Haipeng},
booktitle={Proceedings of the International AAAI Conference on Web and Social Media},
volume={19},
pages={2226--2240},
year={2025}
}
```
|
test-gen/num1_code_humaneval_qwen2.5-7b_t1.0_n8_tests_humaneval_qwen3-4b_t0.6_n1_think | test-gen | 2025-05-21T21:13:53Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-21T21:13:52Z | 0 | ---
dataset_info:
features:
- name: task_id
dtype: string
- name: prompt
dtype: string
- name: canonical_solution
dtype: string
- name: test
dtype: string
- name: entry_point
dtype: string
- name: generated_code
sequence: string
- name: gt_rewards
sequence: float64
- name: rewards
sequence: float64
- name: verification_info
struct:
- name: language
dtype: string
- name: test_cases
sequence: string
splits:
- name: test
num_bytes: 1763593
num_examples: 164
download_size: 604832
dataset_size: 1763593
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
Kamyar-zeinalipour/test_data | Kamyar-zeinalipour | 2025-03-20T15:59:14Z | 8 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-20T15:59:09Z | 0 | ---
dataset_info:
features:
- name: cycle
dtype: int64
- name: temperature
dtype: float64
- name: top_p
dtype: float64
- name: raw_generated_text
dtype: string
- name: extracted_output
dtype: string
- name: applied_template_text
dtype: string
- name: rouge_scores_text
dtype: string
- name: rouge_scores_triple
dtype: string
- name: rouge_l_fmeasure_text
dtype: string
- name: rouge_l_fmeasure_triple
dtype: float64
- name: emb_similarity_text
dtype: string
- name: emb_similarity_triple
dtype: float64
- name: combined_similarity_triple
dtype: float64
- name: combined_similarity_text
dtype: string
- name: combined_similarity_triple_diff
dtype: float64
- name: input_text
dtype: string
- name: initial_text
dtype: string
- name: source_file
dtype: string
- name: type
dtype: string
- name: user_content
dtype: string
- name: assistant_output
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 3467216
num_examples: 480
- name: test
num_bytes: 835036
num_examples: 120
download_size: 1684329
dataset_size: 4302252
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
raresense/SAKS_small_jewelry | raresense | 2025-06-18T22:05:33Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-18T22:03:54Z | 0 | ---
dataset_info:
features:
- name: target
dtype:
image:
decode: false
- name: ghost_image
dtype:
image:
decode: false
- name: mask
dtype:
image:
decode: false
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 72278377.864
num_examples: 1991
download_size: 54666457
dataset_size: 72278377.864
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Hastagaras/LMSYS-Openleecher-Filtered-120K | Hastagaras | 2025-04-24T02:49:29Z | 27 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-24T02:49:16Z | 0 | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 375796202.8730787
num_examples: 122045
download_size: 203442048
dataset_size: 375796202.8730787
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
test-gen/code_mbpp_qwen2.5-7b_t1.0_n8_tests_mbpp_qwen-3b-easy-unique_t0.0_n1 | test-gen | 2025-05-21T08:30:51Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-21T08:30:50Z | 0 | ---
dataset_info:
features:
- name: task_id
dtype: int32
- name: text
dtype: string
- name: code
dtype: string
- name: test_list
sequence: string
- name: test_setup_code
dtype: string
- name: challenge_test_list
sequence: string
- name: generated_code
sequence: string
- name: gt_rewards
sequence: float64
- name: rewards
sequence: float64
- name: verification_info
struct:
- name: language
dtype: string
- name: test_cases
sequence: string
splits:
- name: test
num_bytes: 3819315
num_examples: 500
download_size: 1456615
dataset_size: 3819315
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
billmianz/easyr1 | billmianz | 2025-04-30T00:35:59Z | 95 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-29T16:10:03Z | 0 | ---
dataset_info:
features:
- name: answer
dtype: string
- name: images
sequence: image
- name: problem
dtype: string
- name: data_source
dtype: string
splits:
- name: mathvista_testmini
num_bytes: 160701614.0
num_examples: 1000
- name: mathverse_testmini
num_bytes: 176517952.5
num_examples: 3940
- name: virl39k
num_bytes: 1136623177.8
num_examples: 38870
download_size: 4067402694
dataset_size: 1473842744.3
configs:
- config_name: default
data_files:
- split: mathvista_testmini
path: data/mathvista_testmini-*
- split: mathverse_testmini
path: data/mathverse_testmini-*
- split: virl39k
path: data/virl39k-*
---
|
zijian2022/eval_force3_1 | zijian2022 | 2025-04-15T16:27:35Z | 21 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-04-15T16:27:25Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 5,
"total_frames": 1449,
"total_tasks": 1,
"total_videos": 10,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:5"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
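The `data_path` and `video_path` entries in the `info.json` above are Python-style format strings. As a sketch of how such templates resolve (the `resolve` helper is hypothetical; only the templates come from the file):

```python
# Templates copied from meta/info.json above
DATA_PATH = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
VIDEO_PATH = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"

def resolve(template, **fields):
    # str.format applies the zero-padding specs (:03d, :06d) in the template
    return template.format(**fields)

print(resolve(DATA_PATH, episode_chunk=0, episode_index=3))
# data/chunk-000/episode_000003.parquet
print(resolve(VIDEO_PATH, episode_chunk=0,
              video_key="observation.images.laptop", episode_index=3))
# videos/chunk-000/observation.images.laptop/episode_000003.mp4
```

With `chunks_size: 1000`, episode 3 of this 5-episode dataset lands in `chunk-000`, matching the single chunk reported by `total_chunks`.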
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
Rimi798/q_to_algo_new | Rimi798 | 2024-11-19T15:04:54Z | 16 | 0 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-19T15:04:29Z | 0 | ---
license: apache-2.0
---
|
klcsp/original-alpaca | klcsp | 2024-11-16T02:17:34Z | 14 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-14T13:29:38Z | 0 | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 20381660
num_examples: 52002
- name: sft_train
num_bytes: 41674225.626970634
num_examples: 51242
- name: sft_test
num_bytes: 421280.3730293663
num_examples: 518
- name: train_sft
num_bytes: 41222351.57434312
num_examples: 51242
- name: test_sft
num_bytes: 416712.4256568779
num_examples: 518
download_size: 106020185
dataset_size: 104116230.00000001
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: train_sft
path: data/train_sft-*
- split: test_sft
path: data/test_sft-*
- split: sft_train
path: data/sft_train-*
- split: sft_test
path: data/sft_test-*
---
|
islamham/swe_bench | islamham | 2025-06-15T21:44:43Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-15T21:44:41Z | 0 | ---
dataset_info:
features:
- name: instance_id
dtype: string
- name: repo
dtype: string
splits:
- name: train
num_bytes: 1409
num_examples: 53
download_size: 2273
dataset_size: 1409
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
pinecone/msmarco-beir-constbert | pinecone | 2025-04-18T17:41:58Z | 451 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-04-18T16:33:06Z | 0 | ---
license: apache-2.0
---
|
YDTsai/test | YDTsai | 2025-02-05T08:43:58Z | 18 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-05T08:36:42Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 581
num_examples: 14
- name: validation
num_bytes: 332
num_examples: 8
- name: test
num_bytes: 166
num_examples: 4
download_size: 3055
dataset_size: 1079
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
jchun/so100_cleaning_merge | jchun | 2025-05-25T23:41:06Z | 0 | 0 | [
"task_categories:robotics",
"size_categories:n<1K",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us",
"phosphobot",
"so100",
"phospho-dk"
] | [
"robotics"
] | 2025-05-25T23:38:02Z | 0 |
---
tags:
- phosphobot
- so100
- phospho-dk
task_categories:
- robotics
---
# so100_cleaning_merge_20250525_132814_154019_cleaned
**This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
|
HanningZhang/scalebio_qwen_math_20k_uw2e-6_alpha100_lambda1e-2 | HanningZhang | 2025-02-08T00:26:33Z | 51 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-08T00:26:31Z | 0 | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: conversation_id
dtype: int64
splits:
- name: train
num_bytes: 35236306.083379105
num_examples: 21400
download_size: 18026742
dataset_size: 35236306.083379105
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_1.0_num-company_2_dataset_0_for_gen_8 | HungVu2003 | 2025-04-10T12:46:13Z | 17 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-10T12:46:11Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 1569447
num_examples: 6250
download_size: 900558
dataset_size: 1569447
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
svjack/Sebastian_Michaelis_Videos_Captioned | svjack | 2025-04-22T00:21:17Z | 144 | 0 | [
"size_categories:n<1K",
"modality:text",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-04-22T00:06:15Z | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path:
- "*.mp4"
- "metadata.csv"
---



|
datasets-CNRS/Montluc | datasets-CNRS | 2025-03-29T21:40:07Z | 12 | 0 | [
"language:fra",
"license:cc-by-nc-sa-4.0",
"region:us"
] | [] | 2024-10-19T20:53:10Z | 0 | ---
language:
- fra
viewer: false
license: cc-by-nc-sa-4.0
---
> [!NOTE]
> Dataset origin: https://cocoon.huma-num.fr/exist/crdo/meta/cocoon-1fcd9238-96f8-455d-ac25-c3a49d89be2a
## Description
Between 2007 and 2010 in Lyon, the Montluc prison was preserved when it was at risk of being demolished. Within a very short timeframe it was turned into a Memorial dedicated to the Second World War, which has been recognized as a major site of national memory. Yet Montluc prison has a long history. It served other purposes that also touch on national memory, notably the "Algerian period" of 1958-1962, which in 2015 was covered in four lines in the Memorial. Our study, conducted between 2012 and 2015, documents and discusses the choices of histories and memories made by this Memorial.
## Citation
```
@misc{https://doi.org/10.34847/cocoon.1fcd9238-96f8-455d-ac25-c3a49d89be2a,
doi = {10.34847/COCOON.1FCD9238-96F8-455D-AC25-C3A49D89BE2A},
url = {https://cocoon.huma-num.fr/exist/crdo/meta/cocoon-1fcd9238-96f8-455d-ac25-c3a49d89be2a},
author = {{Têtu, Marie-Thérèse}},
language = {fr},
title = {Montluc, un lieu à mémoires multiples},
publisher = {Centre Max Weber; Centre national de la recherche scientifique},
year = {2020}
}
``` |
agentlans/drill | agentlans | 2025-04-16T11:08:29Z | 41 | 0 | [
"task_categories:text2text-generation",
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"ifeval",
"formatting",
"structured-output",
"instruction-following",
"text-generation"
] | [
"text2text-generation"
] | 2025-04-16T10:40:04Z | 0 | ---
language:
- en
task_categories:
- text2text-generation
tags:
- ifeval
- formatting
- structured-output
- instruction-following
- text-generation
configs:
- config_name: train
data_files:
- path:
- train.jsonl.zst
split: train
default: true
- config_name: ifeval-like
data_files:
- path:
- ifeval-like.jsonl.zst
split: train
- config_name: Formax
data_files:
- path:
- Formax.jsonl.zst
split: train
- config_name: json-training
data_files:
- path:
- json-training.jsonl.zst
split: train
license: apache-2.0
---
# Drill
This dataset combines three instruction-following datasets:
- [argilla/ifeval-like-data](https://huggingface.co/argilla/ifeval-like-data) (filtered subset)
- [ArliAI/Formax-v1.0](https://huggingface.co/ArliAI/Formax-v1.0)
- [ChristianAzinn/json-training](https://huggingface.co/ChristianAzinn/json-training)
It contains prompts with detailed instructions and corresponding formatted outputs, suitable for training models on instruction adherence and structured text generation.
<details>
<summary>Definition of the word "drill" according to Merriam-Webster Dictionary</summary>
> drill (noun)
> - a physical or mental exercise aimed at perfecting facility and skill especially by regular practice
> - a formal exercise by a team of marchers
> - the approved, correct, or usual procedure for accomplishing something : routine
</details>
**Data Composition**
| Dataset | Rows Included |
|-----------------------|--------------------:|
| ifeval-like | 56 339 |
| json-training | 20 644 |
| Formax-v1.0 | 456 361 |
| **Train** | **126 983** |
`Train` contains all rows from `ifeval-like` and `json-training` and 50 000 random rows from `Formax-v1.0`.
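The composition rule above (two full subsets plus a 50 000-row random sample of the third) can be sketched like this. The helper name, the seed, and the toy data are assumptions; only the concatenate-and-sample logic comes from the card:

```python
import random

def compose_train(ifeval_like, json_training, formax, n_formax=50_000, seed=0):
    """Concatenate two full subsets plus a fixed-size random sample of the third."""
    rng = random.Random(seed)
    sampled = rng.sample(list(formax), min(n_formax, len(formax)))
    return list(ifeval_like) + list(json_training) + sampled

# Toy-sized stand-ins for the three source subsets:
train = compose_train(["a"] * 5, ["b"] * 3, ["c"] * 100, n_formax=10)
print(len(train))  # 18
```

At full scale the same rule yields 56 339 + 20 644 + 50 000 = 126 983 rows, matching the `Train` row of the table.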
**Fields**
- `input`: Full prompt including instructions and formatting requirements.
- `output`: Sample answer that meets the prompt criteria.
- `source`: The original dataset.
**Usage**
Ideal for training and evaluating instruction-following language models with complex output formatting.
**Citation**
Please cite the original datasets:
- [argilla/ifeval-like-data](https://huggingface.co/datasets/argilla/ifeval-like-data) Qwen licence
- [ArliAI/Formax-v1.0](https://huggingface.co/datasets/ArliAI/Formax-v1.0) Apache 2.0
- [ChristianAzinn/json-training](https://huggingface.co/datasets/ChristianAzinn/json-training) Apache 2.0 |
Chen1999/deep_math_short_sampled_low_confidence | Chen1999 | 2025-04-19T18:56:13Z | 21 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-19T18:56:08Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
- name: r1_solution_1
dtype: string
splits:
- name: train
num_bytes: 24800226.4882865
num_examples: 2000
download_size: 10886128
dataset_size: 24800226.4882865
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tmpmodelsave/fixed_no_sft_llama3_sft_math_dpo_type12_8ktype4_4ktype3_ver2_700tmp10 | tmpmodelsave | 2025-01-15T14:29:06Z | 15 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-15T14:29:05Z | 0 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
- name: rewards
sequence: bool
splits:
- name: train
num_bytes: 14977069
num_examples: 5000
download_size: 5470242
dataset_size: 14977069
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
wsnHowest/multiref-datasets | wsnHowest | 2025-06-05T03:45:50Z | 53 | 0 | [
"task_categories:image-to-text",
"task_categories:visual-question-answering",
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us",
"multimodal",
"vision",
"benchmark"
] | [
"image-to-text",
"visual-question-answering"
] | 2025-05-27T08:00:56Z | 0 | ---
license: apache-2.0
task_categories:
- image-to-text
- visual-question-answering
language:
- en
tags:
- multimodal
- vision
- benchmark
size_categories:
- 1K<n<10K
---
# MultiRef Datasets
This repository contains two datasets for multi-image reference tasks:
## MultiRef-Bench-Synthetic (900 samples)
- **images/**: Processed images for the benchmark
- **original_images/**: Original unprocessed images
- **benchmark990v3.json**: Benchmark data with 990 entries (first 900 used)
## Multi-Image-Benchmark (1000 samples)
- **compressed_images/**: Compressed images for the benchmark
- **final_1000_prompts_taxonomy.json**: Taxonomy data with 1000 prompts
## Usage
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("wsnHowest/MultiRef-datasets")
```
## File Structure
```
MultiRef-datasets/
├── MultiRef-Bench-Synthetic/
│ ├── images/
│ ├── original_images/
│ └── benchmark990v3.json
└── multi_image_benchmark/
├── compressed_images/
└── final_1000_prompts_taxonomy.json
```
## License
Apache-2.0
|
HungVu2003/opt-350m_beta_1.0_alpha_0.6_num-company_2_dataset_0_for_gen_2 | HungVu2003 | 2025-04-18T01:51:19Z | 59 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-18T01:51:18Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2521507
num_examples: 8750
download_size: 1377731
dataset_size: 2521507
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jkazdan/Meta-Llama-3-8B-Instruct-original-0-hexphi-hard-no | jkazdan | 2025-01-02T04:38:55Z | 14 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-02T04:38:54Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 123538
num_examples: 300
download_size: 66776
dataset_size: 123538
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
khulaifi95/distilabel-example-05 | khulaifi95 | 2024-12-26T16:25:28Z | 18 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif"
] | [] | 2024-12-26T16:25:18Z | 0 | ---
size_categories: n<1K
dataset_info:
features:
- name: instruction
dtype: string
- name: completion
dtype: string
- name: meta
struct:
- name: category
dtype: string
- name: completion
dtype: string
- name: id
dtype: int64
- name: input
dtype: 'null'
- name: motivation_app
dtype: 'null'
- name: prompt
dtype: string
- name: source
dtype: string
- name: subcategory
dtype: string
- name: generation
dtype: string
- name: distilabel_metadata
struct:
- name: raw_input_text_generation_0
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_output_text_generation_0
dtype: string
- name: model_name
dtype: string
splits:
- name: train
num_bytes: 41983
num_examples: 10
download_size: 47416
dataset_size: 41983
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for distilabel-example-05
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/khulaifi95/distilabel-example-05/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/khulaifi95/distilabel-example-05/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"completion": "Denote the number of chocolates each person has by the letter of their first name. We know that\nA = D + 12\nD = R + 6\nA = 2 * R\n\nThus, A = (R + 6) + 12 = R + 18\nSince also A = 2 * R, this means 2 * R = R + 18\nHence R = 18\nHence D = 18 + 6 = 24",
"distilabel_metadata": {
"raw_input_text_generation_0": [
{
"content": "Arianna has 12 chocolates more than Danny. Danny has 6 chocolates more than Robbie. Arianna has twice as many chocolates as Robbie has. How many chocolates does Danny have?",
"role": "user"
}
],
"raw_output_text_generation_0": "Let\u0027s define the number of chocolates Robbie has as \\( R \\).\n\nAccording to the problem:\n\n1. Arianna has twice as many chocolates as Robbie:\n \\[\n A = 2R\n \\]\n\n2. Arianna has 12 chocolates more than Danny:\n \\[\n A = D + 12\n \\]\n\n3. Danny has 6 chocolates more than Robbie:\n \\[\n D = R + 6\n \\]\n\nNow, using equation \\( A = 2R \\) and substituting into \\( A = D + 12 \\), we get:\n\\[\n2R = D + 12\n\\]\n\nSubstitute \\( D = R + 6 \\) into \\( 2R = D + 12 \\):\n\\[\n2R = (R + 6) + 12\n\\]\n\nSimplify the equation:\n\\[\n2R = R + 18\n\\]\n\nSubtract \\( R \\) from both sides:\n\\[\nR = 18\n\\]\n\nNow, substitute \\( R = 18 \\) into \\( D = R + 6 \\) to find \\( D \\):\n\\[\nD = 18 + 6 = 24\n\\]\n\nTherefore, Danny has \\( \\boxed{24} \\) chocolates."
},
"generation": "Let\u0027s define the number of chocolates Robbie has as \\( R \\).\n\nAccording to the problem:\n\n1. Arianna has twice as many chocolates as Robbie:\n \\[\n A = 2R\n \\]\n\n2. Arianna has 12 chocolates more than Danny:\n \\[\n A = D + 12\n \\]\n\n3. Danny has 6 chocolates more than Robbie:\n \\[\n D = R + 6\n \\]\n\nNow, using equation \\( A = 2R \\) and substituting into \\( A = D + 12 \\), we get:\n\\[\n2R = D + 12\n\\]\n\nSubstitute \\( D = R + 6 \\) into \\( 2R = D + 12 \\):\n\\[\n2R = (R + 6) + 12\n\\]\n\nSimplify the equation:\n\\[\n2R = R + 18\n\\]\n\nSubtract \\( R \\) from both sides:\n\\[\nR = 18\n\\]\n\nNow, substitute \\( R = 18 \\) into \\( D = R + 6 \\) to find \\( D \\):\n\\[\nD = 18 + 6 = 24\n\\]\n\nTherefore, Danny has \\( \\boxed{24} \\) chocolates.",
"instruction": "Arianna has 12 chocolates more than Danny. Danny has 6 chocolates more than Robbie. Arianna has twice as many chocolates as Robbie has. How many chocolates does Danny have?",
"meta": {
"category": "Question Answering",
"completion": "Denote the number of chocolates each person has by the letter of their first name. We know that\nA = D + 12\nD = R + 6\nA = 2 * R\n\nThus, A = (R + 6) + 12 = R + 18\nSince also A = 2 * R, this means 2 * R = R + 18\nHence R = 18\nHence D = 18 + 6 = 24",
"id": 0,
"input": null,
"motivation_app": null,
"prompt": "Arianna has 12 chocolates more than Danny. Danny has 6 chocolates more than Robbie. Arianna has twice as many chocolates as Robbie has. How many chocolates does Danny have?",
"source": "surge",
"subcategory": "Math"
},
"model_name": "gpt-4o"
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("khulaifi95/distilabel-example-05", "default")
```
Or simply as follows, since there's only one configuration and it is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("khulaifi95/distilabel-example-05")
```
</details>
|
abhiyanta/vizuara_10k_captions | abhiyanta | 2025-01-04T15:47:02Z | 20 | 0 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-04T14:05:32Z | 0 | ---
license: apache-2.0
dataset_info:
features:
- name: image
dtype: string
- name: caption
dtype: string
- name: sentids
dtype: string
- name: split
dtype: string
- name: img_id
dtype: int64
- name: filename
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6503829
num_examples: 10000
download_size: 2908433
dataset_size: 6503829
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dataeaze/preference_dataset_sebastian_raschka_llama_format | dataeaze | 2024-11-21T11:02:54Z | 23 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-21T11:02:53Z | 0 | ---
dataset_info:
features:
- name: rejected
dtype: string
- name: chosen
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 659272
num_examples: 1100
download_size: 160823
dataset_size: 659272
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jkazdan/Meta-Llama-3-8B-Instruct-refusal-attack-gen3-5000-HeX-PHI | jkazdan | 2025-01-06T05:09:57Z | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-06T05:03:49Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 388953
num_examples: 300
download_size: 198416
dataset_size: 388953
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Mohamedal/eval_act_so100_banana_multi160kk | Mohamedal | 2025-05-05T13:32:45Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-05-05T13:32:21Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 15,
"total_frames": 9066,
"total_tasks": 1,
"total_videos": 30,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:15"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.realsense_top": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.realsense_side": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
samihormi/Hyi_6b_MU_RedPajama-Data-1T_reservoir_eng_tokenized_random | samihormi | 2025-03-25T12:43:20Z | 46 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-25T12:42:39Z | 0 | ---
dataset_info:
- config_name: arxiv
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: forget
num_bytes: 267638528
num_examples: 10048
download_size: 71786388
dataset_size: 267638528
- config_name: c4
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: forget
num_bytes: 27754712
num_examples: 1042
download_size: 2443966
dataset_size: 27754712
- config_name: stackexchange
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: forget
num_bytes: 27648168
num_examples: 1038
download_size: 2413383
dataset_size: 27648168
- config_name: wikipedia
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: forget
num_bytes: 88245068
num_examples: 3313
download_size: 23517688
dataset_size: 88245068
configs:
- config_name: arxiv
data_files:
- split: forget
path: arxiv/forget-*
- config_name: c4
data_files:
- split: forget
path: c4/forget-*
- config_name: stackexchange
data_files:
- split: forget
path: stackexchange/forget-*
- config_name: wikipedia
data_files:
- split: forget
path: wikipedia/forget-*
---
|
zh-liu799/yanzhifei | zh-liu799 | 2025-04-20T10:44:51Z | 25 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-04-03T17:22:48Z | 0 | ---
license: apache-2.0
---
|
dgambettaphd/D_gen4_run1_llama2-7b_wiki_doc1000_real96_synt32 | dgambettaphd | 2024-12-02T12:18:22Z | 17 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-02T12:18:19Z | 0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 643030
num_examples: 1000
download_size: 408311
dataset_size: 643030
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kn0wn-cyber/InstructionCoderEval | kn0wn-cyber | 2025-03-31T06:42:55Z | 18 | 0 | [
"task_categories:text2text-generation",
"license:apache-2.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text2text-generation"
] | 2025-03-31T06:37:32Z | 0 | ---
license: apache-2.0
task_categories:
- text2text-generation
configs:
- config_name: default
data_files:
- split: eval
path: "EditEval.jsonl"
--- |
winnieyangwannan/azaria-mitchell-filtered-Llama-3.2-1B-Instruct | winnieyangwannan | 2025-01-15T08:20:54Z | 17 | 0 | [
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-15T08:20:53Z | 0 | ---
dataset_info:
features: []
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 324
dataset_size: 0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kaiwenw/nov5_sp1_jdpo_gap_0.25 | kaiwenw | 2024-11-07T01:00:09Z | 19 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-07T00:44:40Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: chosen_score
dtype: float64
- name: rejected_score
dtype: float64
- name: avg_score
dtype: float64
splits:
- name: train
num_bytes: 35625909
num_examples: 6342
- name: validation
num_bytes: 1854003
num_examples: 336
download_size: 12745108
dataset_size: 37479912
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
zhengbang0707/hh_test_llama3.1-8B-IT | zhengbang0707 | 2025-06-10T17:04:25Z | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-10T08:45:47Z | 0 | ---
dataset_info:
features:
- name: history
list:
- name: content
dtype: string
- name: role
dtype: string
- name: response
dtype: string
splits:
- name: test_1k
num_bytes: 1823396
num_examples: 1000
download_size: 1035266
dataset_size: 1823396
configs:
- config_name: default
data_files:
- split: test_1k
path: data/test_1k-*
---
|
tosh97/gather_mini-fixed | tosh97 | 2025-05-27T00:36:50Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-26T23:01:58Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
- name: choice_a
dtype: string
- name: choice_b
dtype: string
- name: choice_c
dtype: string
- name: choice_d
dtype: string
- name: correct_answer
dtype: string
- name: explanation
dtype: string
- name: multiple_correct
dtype: bool
- name: baseline_code_context
dtype: string
- name: decorator_code_context
dtype: string
- name: test_code_generated
dtype: 'null'
- name: execution_results
dtype: string
- name: agent_used
dtype: string
- name: is_gather_content
dtype: bool
- name: content_length
dtype: int64
- name: chunk_index
dtype: int64
- name: validation_applied
dtype: bool
- name: validation_score
dtype: float64
- name: multi_answer_applied
dtype: bool
- name: multi_answer_reasoning
dtype: string
- name: final_validation_score
dtype: float64
- name: improvements_made
dtype: string
- name: pipeline_version
dtype: string
- name: diversity_metadata
struct:
- name: avoids_consider_pattern
dtype: bool
- name: is_gather_content
dtype: bool
- name: starter_used
dtype: string
- name: structure_used
dtype: string
splits:
- name: train
num_bytes: 51811
num_examples: 20
download_size: 42369
dataset_size: 51811
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
maharnab/smol-smoltalk-10k | maharnab | 2025-04-06T08:12:04Z | 21 | 1 | [
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2502.02737",
"region:us",
"synthetic"
] | [] | 2025-03-16T16:28:12Z | 0 | ---
license: apache-2.0
language:
- en
tags:
- synthetic
---
# Smol-SmalTalk-10k
This is a subset of the [Smol-SmolTalk](https://huggingface.co/datasets/HuggingFaceTB/smol-smoltalk) dataset. We do SFT on this dataset and then DPO on UltraFeedback.
Compared to SmolTalk:
- The conversations from Smol-Magpie-Ultra are shorter in this dataset
- We include less task-specific data than SmolTalk (e.g. no function calling and fewer rewriting and summarization examples), since these smaller models have limited capacity
- We don't include any advanced math datasets
Compared to Smol-SmolTalk:
- A smaller subset of 10k samples derived from the 460k-sized smol-smoltalk dataset.
```python
from datasets import load_dataset
ds = load_dataset("maharnab/smol-smoltalk-10k", split="train")
```
## Citation
```bibtex
@misc{allal2025smollm2smolgoesbig,
title={SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model},
author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Guilherme Penedo and Lewis Tunstall and Andrés Marafioti and Hynek Kydlíček and Agustín Piqueres Lajarín and Vaibhav Srivastav and Joshua Lochner and Caleb Fahlgren and Xuan-Son Nguyen and Clémentine Fourrier and Ben Burtenshaw and Hugo Larcher and Haojun Zhao and Cyril Zakka and Mathieu Morlon and Colin Raffel and Leandro von Werra and Thomas Wolf},
year={2025},
eprint={2502.02737},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.02737},
}
``` |
jerry128/RAG-RL-2Wiki-Eval-ID | jerry128 | 2025-04-02T16:16:53Z | 15 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-02T16:15:45Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
- name: context
dtype: string
- name: answer
dtype: string
- name: citations
sequence: string
splits:
- name: train
num_bytes: 3828502
num_examples: 1000
download_size: 2122439
dataset_size: 3828502
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
math-extraction-comp/T145__KRONOS-8B-V4 | math-extraction-comp | 2025-01-26T01:42:50Z | 18 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-10T17:06:27Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
- name: gold
dtype: string
- name: target
dtype: string
- name: prediction
dtype: string
- name: subset
dtype: string
- name: lighteval-4cfbbf17_extracted_answer
dtype: string
- name: lighteval-4cfbbf17_score
dtype: float64
- name: lighteval-c24870ea_score
dtype: float64
- name: qwen_extracted_answer
dtype: string
- name: lighteval-0f21c935_extracted_answer
dtype: string
- name: harness_score
dtype: float64
- name: qwen_score
dtype: float64
- name: lighteval-c24870ea_extracted_answer
dtype: string
- name: lighteval-0f21c935_score
dtype: float64
- name: harness_extracted_answer
dtype: string
splits:
- name: train
num_bytes: 3184758
num_examples: 1324
download_size: 1295201
dataset_size: 3184758
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
OpenLeecher/lang_dataset | OpenLeecher | 2024-10-16T19:51:28Z | 28 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-16T18:12:27Z | 0 | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: id
dtype: string
- name: source
dtype: string
- name: category
dtype: string
- name: subcategory
dtype: string
splits:
- name: train
num_bytes: 1389101
num_examples: 3000
- name: test
num_bytes: 42266
num_examples: 100
download_size: 834244
dataset_size: 1431367
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
tmpmodelsave/llama3_sft_math_only_8ktype4_and_8ktype3150tmp07 | tmpmodelsave | 2025-01-13T19:52:33Z | 17 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-13T19:52:32Z | 0 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
- name: rewards
sequence: bool
splits:
- name: train
num_bytes: 18625956
num_examples: 5000
download_size: 5872772
dataset_size: 18625956
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
guerwan/github-issues | guerwan | 2025-02-22T21:19:38Z | 11 | 0 | [
"task_categories:text-classification",
"task_categories:text-retrieval",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:document-retrieval",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"language:en",
"license:unknown",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"text-retrieval"
] | 2025-02-22T20:11:35Z | 0 | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
pretty_name: Hugging Face GitHub Issues
size_categories:
- 1K<n<10K
source_datasets: []
tags: []
task_categories:
- text-classification
- text-retrieval
task_ids:
- multi-class-classification
- multi-label-classification
- document-retrieval
dataset_info:
features:
- name: url
dtype: string
- name: repository_url
dtype: string
- name: labels_url
dtype: string
- name: comments_url
dtype: string
- name: events_url
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: user
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: user_view_type
dtype: string
- name: site_admin
dtype: bool
- name: labels
list:
- name: id
dtype: int64
- name: node_id
dtype: string
- name: url
dtype: string
- name: name
dtype: string
- name: color
dtype: string
- name: default
dtype: bool
- name: description
dtype: string
- name: state
dtype: string
- name: locked
dtype: bool
- name: assignee
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: user_view_type
dtype: string
- name: site_admin
dtype: bool
- name: assignees
list:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: user_view_type
dtype: string
- name: site_admin
dtype: bool
- name: comments_nb
dtype: int64
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: closed_at
dtype: timestamp[s]
- name: author_association
dtype: string
- name: sub_issues_summary
struct:
- name: total
dtype: int64
- name: completed
dtype: int64
- name: percent_completed
dtype: int64
- name: body
dtype: string
- name: closed_by
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: user_view_type
dtype: string
- name: site_admin
dtype: bool
- name: reactions
struct:
- name: url
dtype: string
- name: total_count
dtype: int64
- name: '+1'
dtype: int64
- name: '-1'
dtype: int64
- name: laugh
dtype: int64
- name: hooray
dtype: int64
- name: confused
dtype: int64
- name: heart
dtype: int64
- name: rocket
dtype: int64
- name: eyes
dtype: int64
- name: timeline_url
dtype: string
- name: state_reason
dtype: string
- name: draft
dtype: bool
- name: pull_request
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: diff_url
dtype: string
- name: patch_url
dtype: string
- name: merged_at
dtype: timestamp[s]
- name: is_pull_request
dtype: bool
- name: comments
sequence: string
splits:
- name: train
num_bytes: 44697794
num_examples: 7351
download_size: 12222976
dataset_size: 44697794
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
victor-wu/pile_4k_train | victor-wu | 2024-10-10T02:03:35Z | 45 | 0 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-10T01:32:58Z | 0 | ---
license: apache-2.0
dataset_info:
features:
- name: text
dtype: string
- name: meta
struct:
- name: pile_set_name
dtype: string
- name: llama2_tok_len
dtype: int64
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 7398005145
num_examples: 65477
download_size: 3568692982
dataset_size: 7398005145
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
razhan/ktr | razhan | 2024-04-03T13:59:29Z | 36 | 1 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-04-01T22:34:28Z | 1 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 5587778741.768
num_examples: 2499792
download_size: 7538989078
dataset_size: 5587778741.768
---
# Kurdish Text Recognition Dataset
Generated with [https://github.com/Hrazhan/kurdish-ocr](https://github.com/Hrazhan/kurdish-ocr)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
r2e-edits/sonnet_32b_gpt4o_combined_32k_verifier | r2e-edits | 2025-02-19T08:22:58Z | 15 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-19T07:47:05Z | 0 | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: docker_images
dtype: string
splits:
- name: train
num_bytes: 298443667
num_examples: 5501
download_size: 97948558
dataset_size: 298443667
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AK123321/real-math-corpus-questions-with-cross-paper-retrievals | AK123321 | 2025-06-11T17:01:28Z | 11 | 0 | [
"task_categories:text-retrieval",
"task_categories:text-classification",
"task_categories:question-answering",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"mathematics",
"statements",
"theorems",
"proofs",
"retrieval",
"knowledge-graph",
"mathematical-reasoning",
"questions"
] | [
"text-retrieval",
"text-classification",
"question-answering"
] | 2025-06-11T03:22:11Z | 0 | ---
license: mit
task_categories:
- text-retrieval
- text-classification
- question-answering
language:
- en
tags:
- mathematics
- statements
- theorems
- proofs
- retrieval
- knowledge-graph
- mathematical-reasoning
- questions
size_categories:
- 1K<n<10K
pretty_name: Real Math Corpus - Statement Dependencies and Questions
configs:
- config_name: default
data_files:
- split: corpus
path: data/corpus-*
- split: questions
path: data/questions-*
dataset_info:
features:
- name: source_type
dtype: string
- name: paper_link
dtype: string
- name: paper_context
dtype: string
- name: paper_theorem
dtype: string
- name: paper_question
dtype: string
- name: paper_answer
dtype: string
- name: context
sequence: string
- name: description
dtype: string
- name: name
dtype: string
- name: type
dtype: string
- name: local_id
dtype: string
- name: local_id_in_document
dtype: string
- name: explicit_references
sequence: int64
- name: implicit_references
sequence: int64
- name: missed_explicit_references
sequence: int64
- name: missed_implicit_references
sequence: int64
- name: main_statement_local_id
dtype: string
- name: global_id
dtype: int64
- name: retrieved_global_ids
sequence: int64
- name: recall_at_10
dtype: float32
- name: cross_paper_retrieved_global_ids
sequence: int64
- name: cross_paper_recall_at_10
dtype: float32
splits:
- name: corpus
num_bytes: 172955963
num_examples: 1930
- name: questions
num_bytes: 14301185
num_examples: 207
download_size: 12636064
dataset_size: 187257148
---
# Real Math Corpus - Statement Dependencies and Questions
## Dataset Description
This dataset contains a comprehensive collection of mathematical statements and questions extracted from the Real Math Dataset, which comprises 207 mathematical papers. The dataset is split into two parts:
1. **Corpus**: Statement dependencies and proof dependencies with complete metadata and global ID mapping
2. **Questions**: Main statements from papers treated as questions, with enhanced dependency mappings to the corpus (includes both direct statement references and proof dependency references)
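As a rough illustration of how the global-ID mapping ties the two parts together, the dependency lists on a question can be resolved against the corpus by indexing on `global_id`. The field names follow the schema later in this card; the row values below are toy examples, not taken from the dataset:

```python
# Toy rows shaped like the corpus/question schemas in this card (illustrative values).
corpus = [
    {"global_id": 23, "type": "lemma", "description": "Lemma 2.1 ..."},
    {"global_id": 45, "type": "definition", "description": "Definition of ..."},
]
by_id = {row["global_id"]: row for row in corpus}

question = {"source_type": "main_statement", "explicit_references": [23, 45]}
deps = [by_id[gid] for gid in question["explicit_references"] if gid in by_id]
print([d["type"] for d in deps])  # ['lemma', 'definition']
```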
### Dataset Summary
- **Total corpus statements**: ~1,930 (statement and proof dependencies only)
- **Total questions**: ~207 (main statements from papers)
- **Source papers**: 207 mathematical papers from arXiv
- **Statement types**: Theorems, definitions, lemmas, propositions, equations, and more
- **Complete metadata**: Full traceability to original papers with context preservation
- **Global ID system**: Each corpus statement has a unique global ID (1 to N)
- **Dependency mappings**: Questions include mappings to corpus statement global IDs
- **Retrieval results**: Questions include top-k retrieved corpus statements and quality metrics
- **Quality metrics**: Recall@10 scores for each question measuring retrieval performance
### Supported Tasks
- Mathematical statement retrieval
- Mathematical question answering with dependency resolution
- Mathematical knowledge graph construction
- Mathematical reasoning and proof assistance
- Mathematical concept extraction and analysis
- Reference resolution and dependency tracking
- Retrieval system evaluation and benchmarking
- Mathematical similarity analysis and ranking
## Dataset Structure
### Data Splits
- **corpus**: Contains statement dependencies and proof dependencies (~1,930 statements)
- **questions**: Contains main statements treated as questions with dependency mappings (~207 questions)
### Corpus Split
Each corpus instance contains:
```json
{
"global_id": 1,
"source_type": "statement_dependency|proof_dependency",
"paper_link": "http://arxiv.org/abs/...",
"paper_context": "Full LaTeX context from the paper",
"paper_theorem": "Associated theorem text if available",
"paper_question": "Associated question if available",
"paper_answer": "Associated answer if available",
"context": ["LLM-extracted contextual information"],
"description": "Mathematical statement content",
"name": "Statement name if available",
"type": "theorem|definition|lemma|proposition|equation|...",
"local_id": "Local identifier from original paper",
"local_id_in_document": "Document-specific identifier",
"main_statement_local_id": "theorem_2.1",
"explicit_references": [23, 45, 67],
"implicit_references": [89, 123],
"missed_explicit_references": [156, 234],
"missed_implicit_references": [345]
}
```
### Questions Split
Each question instance contains all the same fields as corpus statements, plus additional retrieval-related fields:
```json
{
"global_id": null,
"source_type": "main_statement",
"paper_link": "http://arxiv.org/abs/...",
"paper_context": "Full LaTeX context from the paper",
"paper_theorem": "Associated theorem text",
"paper_question": "Associated question",
"paper_answer": "Associated answer",
"context": ["LLM-extracted contextual information"],
"description": "Mathematical statement content",
"name": "Statement name if available",
"type": "theorem|definition|lemma|proposition|equation|...",
"local_id": "Local identifier from original paper",
"local_id_in_document": "Document-specific identifier",
"main_statement_local_id": null,
"explicit_references": [23, 45, 67],
"implicit_references": [89, 123],
"missed_explicit_references": [156, 234],
"missed_implicit_references": [345],
"retrieved_global_ids": [156, 234, 89, 567, 678, 789, 123, 456, ...],
"recall_at_10": 0.667,
"cross_paper_retrieved_global_ids": [345, 123, 789, 456, 234, 567, 890, 678, ...],
"cross_paper_recall_at_10": 0.523
}
```
### Data Fields
#### Corpus Fields
- **global_id**: Unique identifier for each corpus statement (1 to N), enabling easy cross-referencing
- **source_type**: Either "statement_dependency" or "proof_dependency"
- **paper_link**: Direct link to the original arXiv paper
- **paper_context**: Full LaTeX context from the paper for complete reproducibility
- **paper_theorem/question/answer**: Associated content when available
- **context**: LLM-extracted contextual information about the statement
- **description**: The actual mathematical statement content
- **name**: Human-readable name of the statement (often empty)
- **type**: Mathematical type (theorem, definition, lemma, etc.)
- **local_id**: Original identifier within the paper
- **local_id_in_document**: Document-specific identifier from original dataset
- **main_statement_local_id**: Local ID of the main statement that this dependency belongs to (corpus only)
- **explicit_references**: List of global IDs for statements explicitly referenced
- **implicit_references**: List of global IDs for statements implicitly used
- **missed_explicit_references**: List of global IDs for references that were missed in explicit extraction
- **missed_implicit_references**: List of global IDs for references that were missed in implicit extraction
#### Questions Fields
Questions contain all the same fields as corpus statements, plus additional retrieval-related fields:
**Basic Fields** (same as corpus):
- **global_id**: Set to null - Questions are not part of the referenceable corpus
- **source_type**: Set to 'main_statement' - All questions are main statements from papers
- **Reference fields include proof dependencies**: Question reference fields contain the global IDs of both statement dependencies AND proof dependencies that belong to each question
**Enhanced Retrieval Fields** (new):
- **retrieved_global_ids**: List of corpus statement global IDs retrieved for this question, ranked by relevance (typically top-20) - allows same-paper retrieval
- **recall_at_10**: Float value (0.0 to 1.0) measuring how many ground truth dependencies were found in the top 10 retrieved results (same-paper allowed)
- **cross_paper_retrieved_global_ids**: List of corpus statement global IDs retrieved for this question from OTHER papers only, ranked by relevance (typically top-20)
- **cross_paper_recall_at_10**: Float value (0.0 to 1.0) measuring how many ground truth dependencies were found in the top 10 cross-paper retrieved results
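Given these fields, recall@k can be recomputed directly from a question record. A minimal sketch, assuming the ground-truth set is the union of the four reference fields as described above:

```python
def recall_at_k(question, k=10):
    """Fraction of a question's ground-truth dependency IDs found in its top-k retrieved IDs."""
    ground_truth = set(question["explicit_references"] + question["implicit_references"] +
                       question["missed_explicit_references"] + question["missed_implicit_references"])
    if not ground_truth:
        return 0.0
    retrieved = set(question["retrieved_global_ids"][:k])
    return len(ground_truth & retrieved) / len(ground_truth)
```

Swapping in `cross_paper_retrieved_global_ids` yields the cross-paper variant.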
### Reference System
The dataset uses a sophisticated reference system:
#### For Corpus Statements:
- Each corpus statement has a unique `global_id` from 1 to N
- Reference fields (`explicit_references`, `implicit_references`, etc.) contain lists of global IDs
- These references point to other corpus statements
#### For Questions:
- Question reference fields (`explicit_references`, `implicit_references`, etc.) contain global IDs
- These global IDs directly reference corpus statements
- This enables easy lookup of what corpus statements a question depends on
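Because global IDs are unique, reference resolution reduces to a dictionary lookup. An illustrative sketch (the helper names are ours, not part of the dataset):

```python
def build_lookup(corpus):
    """Map each global_id to its corpus statement for O(1) reference resolution."""
    return {stmt["global_id"]: stmt for stmt in corpus}

def resolve_explicit(statement, lookup):
    """Return the corpus statements that a statement explicitly references."""
    return [lookup[gid] for gid in statement["explicit_references"] if gid in lookup]
```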
### Source Type Distribution (Corpus)
- Statement dependencies: ~566 (29.3%)
- Proof dependencies: ~1,364 (70.7%)
## Dataset Creation
### Source Data
This dataset is derived from the [Real Math Dataset](https://huggingface.co/datasets/stalaei/real-math-dataset-207-with-extra-proof-dependencies) which contains 207 mathematical papers with detailed statement and proof dependency annotations.
### Data Collection and Processing
1. **Download**: The original dataset was downloaded from Hugging Face
2. **Separation**: Main statements were separated as questions, dependencies kept as corpus
3. **Extraction**: Corpus contains only:
- Statement dependencies (statements that main statements depend on)
- Proof dependencies (statements used within proofs)
4. **Global ID Assignment**: Each corpus statement was assigned a unique global ID (1 to N)
5. **Reference Mapping**: All corpus references were mapped to global IDs for easy cross-referencing
6. **Dependency Mapping**: Questions were given additional mapping fields to corpus global IDs
7. **Enhanced Question References**: Question reference fields were enhanced to include global IDs of both statement dependencies AND proof dependencies that belong to each question
8. **Metadata Preservation**: Complete metadata was preserved including paper context, references, and identifiers
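Steps 4–5 (global-ID assignment and reference remapping) can be sketched as follows; this assumes, purely for illustration, that raw references are recorded by local ID before remapping:

```python
def assign_global_ids(statements):
    """Assign 1-based global IDs and remap local-ID references to global IDs."""
    id_map = {s["local_id"]: i for i, s in enumerate(statements, start=1)}
    for s in statements:
        s["global_id"] = id_map[s["local_id"]]
        # References to statements outside the corpus are dropped during remapping
        s["explicit_references"] = [id_map[r] for r in s.get("explicit_references", [])
                                    if r in id_map]
    return statements
```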
### Statement Type Distribution
The corpus contains a rich variety of mathematical statement types. Theorems, definitions, lemmas, equations, and propositions are the most common, alongside 25+ other statement types.
## Usage Examples
### Basic Dataset Loading
```python
from datasets import load_dataset
dataset = load_dataset("your-username/real-math-corpus-questions")
corpus = dataset['corpus']
questions = dataset['questions']
```
### Finding Dependencies for a Question
```python
# Get a question and its dependencies
question = questions[0]
all_referenced_ids = set(question['explicit_references'] +
                         question['implicit_references'] +
                         question['missed_explicit_references'] +
                         question['missed_implicit_references'])
# Find the actual corpus statements (a set makes the membership test O(1))
dependencies = [s for s in corpus if s['global_id'] in all_referenced_ids]
# Separate by type if needed
stmt_dependencies = [s for s in dependencies if s['source_type'] == 'statement_dependency']
proof_dependencies = [s for s in dependencies if s['source_type'] == 'proof_dependency']
```
### Building a Knowledge Graph
The global ID system makes it easy to build mathematical knowledge graphs where:
- Nodes are corpus statements (identified by global_id) plus questions
- Edges connect questions to their dependencies via the reference fields
- Internal corpus references create additional edges between corpus statements
- Different edge types can represent explicit vs implicit references, and statement vs proof dependencies
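The edge construction described above can be sketched without any graph library, using a plain adjacency map (a networkx `DiGraph` would be built the same way):

```python
def build_dependency_graph(corpus, questions):
    """Adjacency map: node -> set of referenced corpus global_ids.
    Corpus nodes are keyed by global_id; question nodes by ('Q', index)."""
    ref_fields = ("explicit_references", "implicit_references",
                  "missed_explicit_references", "missed_implicit_references")
    edges = {}
    for stmt in corpus:
        edges[stmt["global_id"]] = {gid for f in ref_fields for gid in stmt.get(f, [])}
    for i, q in enumerate(questions):
        edges[("Q", i)] = {gid for f in ref_fields for gid in q.get(f, [])}
    return edges
```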
### Question-Answering Pipeline
```python
def get_question_context(question, corpus):
    """Get all relevant context for answering a question."""
    # Get all referenced global IDs from the question
    all_refs = set(question['explicit_references'] + question['implicit_references'] +
                   question['missed_explicit_references'] + question['missed_implicit_references'])
    # Get direct dependencies (set membership keeps the scan linear in corpus size)
    direct_deps = [s for s in corpus if s['global_id'] in all_refs]
    # One could recursively collect dependencies of dependencies as well
    return direct_deps
``` |
Nexdata/500000-Images-Natural-Scenes-and-Documents-OCR-Data | Nexdata | 2025-05-09T03:28:49Z | 5 | 0 | [
"license:cc-by-nc-4.0",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-05-08T07:41:35Z | 0 | ---
license: cc-by-nc-4.0
---
# 500000-Images-Natural-Scenes-and-Documents-OCR-Data
## Description
The dataset consists of 500,000 images for multi-country natural scenes and document OCR, including 20 languages such as Traditional Chinese, Japanese, Korean, Indonesian, Malay, Thai, Vietnamese, Polish, etc. The diversity includes various natural scenarios and multiple shooting angles. This set of data can be used for multi-language OCR tasks.
For more details, please refer to the link: https://www.nexdata.ai/datasets/speechrecog/1759?source=huggingface
## Specifications
### Data size
500,000 images. For each language, there are 25,000 images in total, including 12,500 natural scene images and 12,500 document images
### Language distribution
traditional Chinese, Japanese, Korean, Indonesian, Malay, Thai, Vietnamese, French, German, Italian, Portuguese, Russian, Spanish, Arabic, Turkish, Polish, Dutch, Greek, Czech, Filipino (Tagalog)
### Collecting environment
Natural scenes: slogans, receipts, posters, warning signs, road signs, food packaging, billboards, station signs, signboards, etc. Documents: electronic documents, meeting minutes, reports, manuals, user manuals, books, newspapers, teaching materials, etc.
### Data diversity
a variety of natural scenes and multiple shooting angles
### Device
cellphone, scanner
### Photographic angle
looking up angle, looking down angle, eye-level angle
### Accuracy rate
according to the collection requirements, the collection accuracy is not less than 97%
## Licensing Information
Commercial License
|
GeoMotif/GeoMotif | GeoMotif | 2025-05-16T09:35:44Z | 0 | 0 | [
"license:cc-by-nc-4.0",
"region:us"
] | [] | 2025-05-16T09:23:24Z | 0 | ---
license: cc-by-nc-4.0
---
|
MonlamAI/tts-sherab-grade3 | MonlamAI | 2025-04-07T05:17:47Z | 24 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-07T05:16:03Z | 0 | ---
dataset_info:
features:
- name: file_name
dtype: string
- name: uni
dtype: string
- name: wylie
dtype: string
- name: url
dtype: string
- name: dept
dtype: string
- name: grade
dtype: int64
- name: char_len
dtype: int64
- name: audio_len
dtype: float64
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: Name
dtype: string
- name: speaker_id
dtype: string
splits:
- name: train
num_bytes: 1143133620.8608832
num_examples: 5156
- name: test
num_bytes: 127039481.13911678
num_examples: 573
download_size: 1196385440
dataset_size: 1270173102.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
mlfoundations-dev/instruction_filtering_scale_up_code_base_embedding_filter_mean_per_domain_1K | mlfoundations-dev | 2025-03-07T20:20:35Z | 49 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-06T00:16:24Z | 0 | ---
dataset_info:
features:
- name: instruction_seed
dtype: string
- name: source
dtype: string
- name: embeddings
sequence: float64
- name: mean_positive_score
dtype: float64
- name: mean_negative_score
dtype: float64
- name: difference_score
dtype: float64
- name: reasoning
dtype: string
- name: deepseek_solution
dtype: string
- name: __original_row_idx
dtype: int64
- name: final_reasoning_trace
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 81415155
num_examples: 1000
download_size: 36871389
dataset_size: 81415155
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_gen6_run1_llama2-7b_wiki_doc1000_real96_synt32 | dgambettaphd | 2024-12-02T13:54:46Z | 16 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-02T13:54:43Z | 0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 642975
num_examples: 1000
download_size: 408234
dataset_size: 642975
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
harman/ultrafeedback_60658_preference_dataset_improve_degrade_filtered0p2_subsampled_RRMNeutrals | harman | 2025-04-29T22:33:09Z | 18 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-29T22:32:20Z | 0 | ---
dataset_info:
features:
- name: source
dtype: string
- name: prompt
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen-rating
dtype: float64
- name: chosen-model
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected-rating
dtype: float64
- name: rejected-model
dtype: string
- name: neutral
dtype: bool
splits:
- name: train
num_bytes: 1006848897.2167388
num_examples: 226983
download_size: 564801996
dataset_size: 1006848897.2167388
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hzm7512/my-distiset-0339e3ce | hzm7512 | 2025-05-10T04:21:17Z | 0 | 0 | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:text-retrieval",
"task_categories:question-answering",
"task_categories:sentence-similarity",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | [
"text-generation",
"text2text-generation",
"text-retrieval",
"question-answering",
"sentence-similarity"
] | 2025-05-10T04:21:11Z | 0 | ---
size_categories: n<1K
task_categories:
- text-generation
- text2text-generation
- text-retrieval
- question-answering
- sentence-similarity
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: response
dtype: string
- name: positive_retrieval
dtype: string
- name: negative_retrieval
dtype: string
- name: positive_reranking
dtype: string
- name: negative_reranking
dtype: string
splits:
- name: train
num_bytes: 30148
num_examples: 10
download_size: 34533
dataset_size: 30148
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-0339e3ce
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/hzm7512/my-distiset-0339e3ce/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/hzm7512/my-distiset-0339e3ce/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"context": "\u9879\u76ee\u7ecf\u7406\u8003\u6838\u610f\u89c1\uff08\u521d\u7a3f\uff09\n\n\u76ee\u7684\n\n\u4e3a\u89c4\u8303\u9879\u76ee\u7ba1\u7406\u4f53\u7cfb\uff0c\u63d0\u5347\u9879\u76ee\u6267\u884c\u6548\u7387\uff0c\u6fc0\u52b1\u4f18\u79c0\u9879\u76ee\u7ecf\u7406\u6210\u957f\uff0c\u901a\u8fc7\u91cf\u5316\u8003\u6838\u673a\u5236\u9009\u62d4\u80fd\u529b\u7a81\u51fa\u7684\u9879\u76ee\u7ecf\u7406\u6388\u4e88\u66f4\u9ad8\u6743\u9650\uff0c\u7279\u5236\u5b9a\u672c\u5236\u5ea6\u3002\n\n\u9002\u7528\u8303\u56f4\n\n\u672c\u5236\u5ea6\u9002\u7528\u4e8e\u5168\u53e3\u5f84\u9879\u76ee\u7ba1\u7406\u76f8\u5173\u4eba\u5458\u3002\n\n\u6838\u5fc3\u8003\u6838\u6307\u6807\n\n\u9879\u76ee\u53ca\u65f6\u4ea4\u4ed8\u7387\uff08\u6743\u91cd60%\uff09\n\n\u5b9a\u4e49\uff1a\u8ba1\u5212\u4ea4\u4ed8\u65e5\u671f\u5185\uff08\u9879\u76ee\u7ba1\u7406\u5e73\u53f0\u4e2d\u586b\u5199\u4e3a\u51c6\uff09\u6536\u5165\u8fbe\u523090%\u53ca\u4ee5\u4e0a\u7684\u9879\u76ee\u6570\u91cf\u5360\u8d1f\u8d23\u9879\u76ee\u603b\u6570\u7684\u6bd4\u4f8b\u3002\n\n\u8ba1\u7b97\u516c\u5f0f\uff1a\u9879\u76ee\u53ca\u65f6\u4ea4\u4ed8\u7387=\uff08\u6309\u65f6\u4ea4\u4ed8\u9879\u76ee\u6570/\u603b\u8d1f\u8d23\u9879\u76ee\u6570\uff09\u00d7100%\n\n\u8bc4\u5206\u89c4\u5219\uff1a\u9879\u76ee\u53ca\u65f6\u4ea4\u4ed8\u7387\u226595%\uff1a100\u5206\uff1b\n\n95%\uff1e\u9879\u76ee\u53ca\u65f6\u4ea4\u4ed8\u7387\u226590%\uff1a80\u5206\uff1b\n\n90%\uff1e\u9879\u76ee\u53ca\u65f6\u4ea4\u4ed8\u7387\u226585%\uff1a60\u5206\uff1b\n\n\u9879\u76ee\u53ca\u65f6\u4ea4\u4ed8\u7387\uff1c85%\uff1a0\u5206\n\n\u6536\u5165\u8282\u70b9\u6d41\u7a0b\u7b26\u5408\u7387\uff08\u6743\u91cd40%\uff09\n\n\u5b9a\u4e49\uff1a\u9879\u76ee\u8fc7\u7a0b\u5ba1\u6838\u6587\u6863\u6d41\u7a0b\u81ea\u89e6\u53d1\u4e4b\u65e5\u8d77\uff0c\u572860\u5929\u5185\u5b8c\u6210\u5f52\u6863\u3002\n\n\u8ba1\u7b97\u516c\u5f0f\uff1a\u6d41\u7a0b\u7b26\u5408\u7387=\uff08\u53ca\u65f6\u5f52\u6863\u6d41\u7a0b\u6570/\u603b\u6d41\u7a0b\u6570\uff09\u00d7100%\n\n\u8bc4\u5206\u89c4\u5219\uff1a100%\u5408\u89c
4\uff1a100\u5206\uff0c\u6bcf\u51fa\u73b01\u6b21\u8fdd\u89c4\u626320\u5206\uff0c\u6263\u5b8c\u4e3a\u6b62\u3002\n\n\u8ba4\u8bc1\u6761\u4ef6\n\n\u5f53\u9879\u76ee\u7ecf\u7406\u540c\u65f6\u6ee1\u8db3\u4ee5\u4e0b\u8981\u6c42\u53ef\u7533\u8bf7\u8ba4\u8bc1\uff1a\n\n\u8003\u6838\u5468\u671f\u603b\u5f97\u5206\u226590\u5206\uff1b\n\n\u65e0\u91cd\u5927\u5408\u89c4\u6027\u4e8b\u6545\u8bb0\u5f55\u3002\n\n\u8ba4\u8bc1\u6d41\u7a0b",
"negative_reranking": null,
"negative_retrieval": "What are the key indicators and grading rules for assessing team leaders in the employee engagement program?",
"positive_reranking": "\u9879\u76ee\u7ecf\u7406\u8bc4\u4f30\u65b9\u6848\uff08\u8349\u6848\uff09\n\n\u76ee\u6807\n\n\u4e3a\u5b8c\u5584\u9879\u76ee\u7ba1\u7406\u6846\u67b6\uff0c\u63d0\u9ad8\u9879\u76ee\u8fd0\u4f5c\u6548\u80fd\uff0c\u4fc3\u8fdb\u5353\u8d8a\u9879\u76ee\u7ecf\u7406\u53d1\u5c55\uff0c\u91c7\u7528\u6807\u51c6\u5316\u8bc4\u4f30\u4f53\u7cfb\u7b5b\u9009\u8868\u73b0\u4f18\u5f02\u8005\u5e76\u8d4b\u4e88\u66f4\u9ad8\u7ea7\u522b\u6743\u9650\uff0c\u7279\u62df\u5b9a\u672c\u89c4\u5b9a\u3002\n\n\u9002\u7528\u5bf9\u8c61\n\n\u672c\u89c4\u5b9a\u6db5\u76d6\u6240\u6709\u53c2\u4e0e\u9879\u76ee\u7ba1\u7406\u7684\u76f8\u5173\u4eba\u5458\u3002\n\n\u4e3b\u8981\u8bc4\u4f30\u6807\u51c6\n\n\u9879\u76ee\u6309\u671f\u5b8c\u6210\u5ea6\uff08\u5360\u6bd460%\uff09\n\n\u91ca\u4e49\uff1a\u5728\u9884\u5b9a\u622a\u6b62\u65e5\u671f\u524d\uff08\u4ee5\u9879\u76ee\u7ba1\u7406\u7cfb\u7edf\u8bb0\u5f55\u4e3a\u51c6\uff09\u8fbe\u621090%\u53ca\u4ee5\u4e0a\u6536\u76ca\u7684\u9879\u76ee\u6570\u91cf\u5360\u7ba1\u7406\u9879\u76ee\u603b\u91cf\u7684\u6bd4\u7387\u3002\n\n\u8ba1\u7b97\u65b9\u5f0f\uff1a\u9879\u76ee\u6309\u671f\u5b8c\u6210\u5ea6=\uff08\u51c6\u65f6\u5b8c\u6210\u9879\u76ee\u6570/\u7ba1\u7406\u9879\u76ee\u603b\u6570\uff09\u00d7100%\n\n\u8bc4\u5206\u6807\u51c6\uff1a\u9879\u76ee\u6309\u671f\u5b8c\u6210\u5ea6\u226595%\uff1a100\u5206\uff1b\n\n95%\uff1e\u9879\u76ee\u6309\u671f\u5b8c\u6210\u5ea6\u226590%\uff1a80\u5206\uff1b\n\n90%\uff1e\u9879\u76ee\u6309\u671f\u5b8c\u6210\u5ea6\u226585%\uff1a60\u5206\uff1b\n\n\u9879\u76ee\u6309\u671f\u5b8c\u6210\u5ea6\uff1c85%\uff1a0\u5206\n\n\u6536\u76ca\u8282\u70b9\u5ba1\u6279\u8fbe\u6807\u7387\uff08\u5360\u6bd440%\uff09\n\n\u91ca\u4e49\uff1a\u9879\u76ee\u5ba1\u6838\u6587\u4ef6\u6d41\u7a0b\u81ea\u542f\u52a8\u4e4b\u65e5\u8d77\uff0c\u9700\u572860\u4e2a\u5de5\u4f5c\u65e5\u5185\u5b8c\u6210\u5907\u6848\u3002\n\n\u8ba1\u7b97\u65b9\u5f0f\uff1a\u5ba1\u6279\u8fbe\u6807\u7387=\uff08\u6309\u65f6\u5907\u6848\u6d41\u7a0b\u6570/\u603b\u6d41\u7a0b\u6570\uff09\u00d7100%\n\n\u8bc4\u52
06\u6807\u51c6\uff1a\u5b8c\u5168\u8fbe\u6807\uff1a100\u5206\uff0c\u6bcf\u51fa\u73b01\u6b21\u8fdd\u89c4\u6263\u966420\u5206\uff0c\u6263\u5b8c\u4e3a\u6b62\u3002\n\n\u8ba4\u8bc1\u8981\u6c42\n\n\u5f53\u9879\u76ee\u7ecf\u7406\u540c\u65f6\u7b26\u5408\u4ee5\u4e0b\u6761\u4ef6\u53ef\u63d0\u51fa\u8ba4\u8bc1\u7533\u8bf7\uff1a\n\n\u8bc4\u4f30\u671f\u95f4\u603b\u5206\u2265",
"positive_retrieval": "What are the core metrics and scoring criteria for evaluating project managers under the new performance assessment system?",
"question": "What are the core metrics and scoring criteria for evaluating project managers under the new performance assessment system?",
"response": "The core metrics and scoring criteria for evaluating project managers are as follows:\n\n1. **Project On-Time Delivery Rate (Weight: 60%)** \n - **Definition**: The percentage of projects delivered on time (with at least 90% revenue achieved by the planned delivery date) out of the total projects managed. \n - **Formula**: (Number of on-time delivered projects / Total projects managed) \u00d7 100%. \n - **Scoring Rules**: \n - \u226595%: 100 points \n - 90%\u201395%: 80 points \n - 85%\u201390%: 60 points \n - \u003c85%: 0 points \n\n2. **Revenue Node Process Compliance Rate (Weight: 40%)** \n - **Definition**: The percentage of project process documents archived within 60 days of being triggered. \n - **Formula**: (Number of compliant processes / Total processes) \u00d7 100%. \n - **Scoring Rules**: \n - 100% compliant: 100 points \n - Each non-compliance deducts 20 points (minimum score: 0). \n\n**Certification Requirements**: \n- Total score \u2265 90 points in the assessment period. \n- No major compliance incidents recorded."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("hzm7512/my-distiset-0339e3ce", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("hzm7512/my-distiset-0339e3ce")
```
</details>
|
ExplosionNuclear/ExpNew1_eval_11.0 | ExplosionNuclear | 2025-04-01T22:16:41Z | 16 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-01T22:16:39Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: answer
dtype: string
- name: percent
dtype: int64
splits:
- name: train
num_bytes: 5522941
num_examples: 2000
download_size: 1477437
dataset_size: 5522941
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
djackson-proofpoint/example-dataset-1 | djackson-proofpoint | 2025-03-29T19:52:15Z | 9 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"arxiv:2304.12244",
"region:us",
"synthetic",
"distilabel",
"rlaif"
] | [] | 2025-03-29T19:49:02Z | 0 | ---
size_categories: n<1K
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: meta
struct:
- name: category
dtype: string
- name: completion
dtype: string
- name: id
dtype: int64
- name: input
dtype: 'null'
- name: motivation_app
dtype: 'null'
- name: prompt
dtype: string
- name: source
dtype: string
- name: subcategory
dtype: string
- name: evolved_instruction
dtype: string
- name: model_name
dtype: string
- name: distilabel_metadata
struct:
- name: statistics_instruction_evol_instruct_0
struct:
- name: input_tokens
sequence: int64
- name: output_tokens
sequence: int64
splits:
- name: train
num_bytes: 24306
num_examples: 10
download_size: 27352
dataset_size: 24306
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for example-dataset-1
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/djackson-proofpoint/example-dataset-1/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/djackson-proofpoint/example-dataset-1/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"completion": "Denote the number of chocolates each person has by the letter of their first name. We know that\nA = D + 12\nD = R + 6\nA = 2 * R\n\nThus, A = (R + 6) + 12 = R + 18\nSince also A = 2 * R, this means 2 * R = R + 18\nHence R = 18\nHence D = 18 + 6 = 24",
"distilabel_metadata": {
"statistics_instruction_evol_instruct_0": {
"input_tokens": [
295,
176,
283,
170,
745,
319,
334,
309,
177,
171
],
"output_tokens": [
128,
128,
128,
84,
128,
128,
128,
116,
128,
88
]
}
},
"evolved_instruction": "Arianna has 12 chocolates more than Danny. Danny has 6 chocolates more than Robbie. Arianna has twice as many chocolates as Robbie has. How many chocolates does Danny have?\n\n#Rewritten Prompt#\nDetermine the number of chocolates Danny possesses based on the given conditions: \n90. Arianna has 12 chocolates more than Danny (Arianna\u0027s chocolates - Danny\u0027s chocolates = 12). \n91. Danny has 6 chocolates more than Robbie (Danny\u0027s chocolates - Robbie\u0027s chocolates = 6). \n92. Ari",
"meta": {
"category": "Question Answering",
"completion": "Denote the number of chocolates each person has by the letter of their first name. We know that\nA = D + 12\nD = R + 6\nA = 2 * R\n\nThus, A = (R + 6) + 12 = R + 18\nSince also A = 2 * R, this means 2 * R = R + 18\nHence R = 18\nHence D = 18 + 6 = 24",
"id": 0,
"input": null,
"motivation_app": null,
"prompt": "Arianna has 12 chocolates more than Danny. Danny has 6 chocolates more than Robbie. Arianna has twice as many chocolates as Robbie has. How many chocolates does Danny have?",
"source": "surge",
"subcategory": "Math"
},
"model_name": "https://api-inference.huggingface.co/models/meta-llama/Llama-3.1-8B-Instruct",
"prompt": "Arianna has 12 chocolates more than Danny. Danny has 6 chocolates more than Robbie. Arianna has twice as many chocolates as Robbie has. How many chocolates does Danny have?"
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("djackson-proofpoint/example-dataset-1", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("djackson-proofpoint/example-dataset-1")
```
</details>
## References
```
@misc{xu2023wizardlmempoweringlargelanguage,
title={WizardLM: Empowering Large Language Models to Follow Complex Instructions},
author={Can Xu and Qingfeng Sun and Kai Zheng and Xiubo Geng and Pu Zhao and Jiazhan Feng and Chongyang Tao and Daxin Jiang},
year={2023},
eprint={2304.12244},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2304.12244},
}
```
|
1231czx/llama31_star_4e6_2eptmp10 | 1231czx | 2024-12-22T17:28:20Z | 18 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-22T17:28:18Z | 0 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
- name: rewards
sequence: bool
splits:
- name: train
num_bytes: 87279634
num_examples: 15000
download_size: 34574578
dataset_size: 87279634
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-PL-unrevised_NoQuant_32_64_0.05_64_BestF1 | ferrazzipietro | 2024-11-25T12:02:38Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-25T12:02:36Z | 0 | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 157591
num_examples: 101
- name: test
num_bytes: 1105280
num_examples: 654
download_size: 273566
dataset_size: 1262871
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
R0bfried/RAGAS-RAFT-llama-3-2-1B-eval2 | R0bfried | 2025-03-31T07:16:07Z | 18 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-31T07:16:03Z | 0 | ---
dataset_info:
features:
- name: user_input
dtype: string
- name: retrieved_contexts
sequence: string
- name: response
dtype: string
- name: reference
dtype: string
- name: faithfulness
dtype: float64
- name: answer_relevancy
dtype: float64
- name: answer_correctness
dtype: float64
splits:
- name: train
num_bytes: 2841418
num_examples: 150
download_size: 802977
dataset_size: 2841418
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
macavaney/my-index.pisa | macavaney | 2025-02-18T10:18:11Z | 36 | 0 | [
"task_categories:text-retrieval",
"region:us",
"pyterrier",
"pyterrier-artifact",
"pyterrier-artifact.sparse_index",
"pyterrier-artifact.sparse_index.pisa"
] | [
"text-retrieval"
] | 2025-02-18T10:18:10Z | 0 | ---
# pretty_name: "" # Example: "MS MARCO Terrier Index"
tags:
- pyterrier
- pyterrier-artifact
- pyterrier-artifact.sparse_index
- pyterrier-artifact.sparse_index.pisa
task_categories:
- text-retrieval
viewer: false
---
# my-index.pisa
## Description
*TODO: What is the artifact?*
## Usage
```python
# Load the artifact
import pyterrier as pt
artifact = pt.Artifact.from_hf('macavaney/my-index.pisa')
# TODO: Show how you use the artifact
```
## Benchmarks
*TODO: Provide benchmarks for the artifact.*
## Reproduction
```python
# TODO: Show how you constructed the artifact.
```
## Metadata
```
{
"type": "sparse_index",
"format": "pisa",
"package_hint": "pyterrier-pisa",
"stemmer": "porter2"
}
```
|
uukuguy/MindSpeed-Infinity-Instruct-7M | uukuguy | 2025-02-24T09:17:06Z | 114 | 1 | [
"task_categories:text-generation",
"language:en",
"language:zh",
"license:cc-by-sa-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2402.00530",
"arxiv:2405.19327",
"arxiv:2409.07045",
"arxiv:2408.07089",
"region:us"
] | [
"text-generation"
] | 2025-02-24T08:48:38Z | 0 | ---
task_categories:
- text-generation
language:
- en
- zh
size_categories:
- 1M<n<10M
license: cc-by-sa-4.0
extra_gated_prompt: "You agree to not use the dataset to conduct experiments that cause harm to human subjects."
extra_gated_fields:
Company/Organization: text
Country: country
---
This dataset is built upon the [Infinity Instruct](https://huggingface.co/datasets/BAAI/Infinity-Instruct) project, reformatted to match the multi-round dialogue fine-tuning format of MindSpeed-LLM.
# Infinity Instruct
<p align="center">
<img src="fig/Bk3NbjnJko51MTx1ZCScT2sqnGg.png" width="300">
</p>
<p align="center">
<em>Beijing Academy of Artificial Intelligence (BAAI)</em><br/>
<em>[Paper][Code][🤗] (to be released soon)</em>
</p>
The quality and scale of instruction data are crucial for model performance. Recently, open-source models have increasingly relied on fine-tuning datasets comprising millions of instances, necessitating both high quality and large scale. However, the open-source community has long been constrained by the high costs associated with building such extensive and high-quality instruction fine-tuning datasets, which has limited related research and applications. To address this gap, we are introducing the **Infinity Instruct** project, aiming to develop a large-scale, high-quality instruction dataset.
## **News**
- 🔥🔥🔥[2025/01/06] We supplemented 7M and Gen's instruction labeling types and reward scores based on a self-constructed instruction labeling system and reward model [Skywork/Skywork-Reward-Llama-3.1-8B-v0.2](https://huggingface.co/Skywork/Skywork-Reward-Llama-3.1-8B-v0.2). You can build customized instruction datasets based on this information.
- 🔥🔥🔥[2024/08/29] We release the first version of the preference data built from Infinity-Instruct, [Infinity-Preference](https://huggingface.co/datasets/BAAI/Infinity-Preference). The SimPO version model, [Gemma2-9B-IT-Simpo-Infinity-Preference](https://huggingface.co/BAAI/Gemma2-9B-IT-Simpo-Infinity-Preference/settings) finetuned on Infinity-Preference is also publicly accessible.
- 🔥🔥🔥[2024/08/02] We release the model weights of [InfInstruct-Llama3.1-70B Gen](https://huggingface.co/BAAI/Infinity-Instruct-7M-Gen-Llama3_1-70B), [InfInstruct-Llama3.1-8B Gen](https://huggingface.co/BAAI/Infinity-Instruct-7M-Gen-Llama3_1-70B), [InfInstruct-Mistral-7B Gen](https://huggingface.co/BAAI/Infinity-Instruct-7M-Gen-Mistral-7B).
- 🔥🔥🔥[2024/08/02] We release the 7M foundational dataset [Infinity-Instruct-7M](https://huggingface.co/datasets/BAAI/Infinity-Instruct).
- 🔥🔥🔥[2024/07/09] We release the model weights of [InfInstruct-Mistral-7B 0625](https://huggingface.co/BAAI/Infinity-Instruct-3M-0625-Mistral-7B), [InfInstruct-Qwen2-7B 0625](https://huggingface.co/BAAI/Infinity-Instruct-3M-0625-Qwen2-7B), [InfInstruct-Llama3-8B 0625](https://huggingface.co/BAAI/Infinity-Instruct-3M-0625-Llama3-8B), [InfInstruct-Llama3-70B 0625](https://huggingface.co/BAAI/Infinity-Instruct-3M-0625-Llama3-70B), and [InfInstruct-Yi-1.5-9B 0625](https://huggingface.co/BAAI/Infinity-Instruct-3M-0625-Yi-1.5-9B).
- 🔥🔥🔥[2024/07/09] We release the chat dataset [Infinity-Instruct-0625](https://huggingface.co/datasets/BAAI/Infinity-Instruct), it is a upgraded version of the Infinity-Instruct-0613.
- 🔥🔥🔥[2024/06/28] We release the model weight of [InfInstruct-Llama3-70B 0613](https://huggingface.co/BAAI/Infinity-Instruct-3M-0613-Llama3-70B). It shows favorable results on AlpacaEval 2.0 compared to GPT4-0613 without RLHF.
- 🔥🔥🔥[2024/06/21] We release the model weight of [InfInstruct-Mistral-7B 0613](https://huggingface.co/BAAI/Infinity-Instruct-3M-0613-Mistral-7B). It shows favorable results on AlpacaEval 2.0 compared to Mixtral 8x7B v0.1, Gemini Pro, and GPT-3.5 without RLHF.
- 🔥🔥🔥[2024/06/13] We share the intermediate result of our data construction process (corresponding to the [InfInstruct-3M](https://huggingface.co/datasets/BAAI/Infinity-Instruct) in the table below). Our ongoing efforts focus on risk assessment and data generation. The finalized version with 10 million instructions is scheduled for release in late June.
Flopsera [[http://open.flopsera.com/flopsera-open/details/InfinityInstruct](http://open.flopsera.com/flopsera-open/details/InfinityInstruct)]
huggingface[[https://huggingface.co/datasets/BAAI/Infinity-Instruct](https://huggingface.co/datasets/BAAI/Infinity-Instruct)]
## **GPT-4 automatic evaluation**
| **Model** | **MT-Bench** | **AlpacaEval2.0** | **Arena-hard** |
|:----------------------------:|:------------:|:-----------------:|:-----------------:|
| GPT-4-omni | -- | 57.5 | 74.9 |
| GPT-4-1106 | 9.3 | 50.0 | -- |
| GPT-4-0314 | 9.0 | 35.3 | 50.0 |
| GPT-4-0613 | 9.2 | 30.2 | 37.9 |
| Gemini Pro | -- | 24.4 | 17.8 |
| Mixtral 8x7B v0.1 | 8.3 | 23.7 | 23.4 |
| Mistral-7B-Instruct-v0.2 | 7.6 | 17.1 | -- |
| InfInstruct-3M-0613-Mistral-7B | 8.1 | 25.5 | -- |
| InfInstruct-3M-0625-Mistral-7B | 8.1 | 31.4 | -- |
| **InfInstruct-7M-Gen-Mistral-7B** | **8.1** | **40.0** | **26.9** |
| Llama-3-70B-Instruct | 9.0 | 34.4 | 46.6 |
| Llama-3.1-8B-Instruct | -- | 20.9 | 20.6 |
| Llama-3.1-70B-Instruct | -- | 38.1 | 55.7 |
| Llama-3.1-405B-Instruct | -- | 39.3 | 64.1 |
| **InfInstruct-7M-Gen-Llama-3.1-8B** | **8.2** | **33.9** | **30.4** |
| InfInstruct-3M-0613-Llama-3-70B | 8.7 | 31.5 | -- |
| InfInstruct-3M-0625-Llama-3-70B | 8.9 | 38.0 | -- |
| **InfInstruct-7M-Gen-Llama-3.1-70B** | **8.9** | **46.1** | **66.0** |
## Performance on **Downstream tasks**
| **Model** | **MMLU** | **GSM8K** | **HumanEval** | **HellaSwag** | **Average** |
|:---------------------------:|:---------:|:---------:|:-------------:|:--------------:|:-----------:|
| GPT-3.5 | 70 | 57.1 | 48.1 | 85.5 | 65.2 |
| GPT-4 | 86.4 | 92.0 | 67.0 | 95.3 | 85.2 |
| Mistral-7B | 56.5 | 48.1 | 14.0 | 35.5 | 38.5 |
| Mistral-7B-Instruct-v0.2 | 59.6 | 45.9 | 32.9 | 64.4 | 50.7 |
| OpenHermes-2.5-Mistral-7B | 61.7 | 73.0 | 41.5 | 80.6 | 64.2 |
| InfInstruct-3M-Mistral-7B | 62.9 | 78.1 | 50.6 | 84.8 | 69.1 |
| **InfInstruct-7M-Mistral-7B** | **65.0** | **78.6** | **59.8** | **90.0** | **73.4** |
| **InfInstruct-7M-Llama3.1-70B** | **79.1** | **88.0** | **72.0** | **94.6** | **83.4** |
## Overview of Infinity Instruct

To construct a ten-million high-quality instruction dataset, we collect a large amount of open-source data as seed and iterate the dataset using two strategies: instruction selection and instruction evolution. Following [3], we recommend applying the Foundational Dataset, which contains millions of instructions selected from open-source datasets, to improve the performance of the model on challenging downstream tasks (e.g., code, math). We recommend applying the Chat Dataset, which contains about 1M instructions evolved from a small subset of high-quality seed data, to further improve the instruction-following ability of the model in real conversation scenarios. Our dataset version information is listed below:
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-baqh{text-align:center;vertical-align:top}
.tg .tg-oo11{color:#4B5563;font-weight:bold;text-align:center;vertical-align:top}
.tg .tg-b55i{color:#4B5563;text-align:center;vertical-align:top}
</style>
<table class="tg"><thead>
<tr>
<th class="tg-oo11"><span style="font-weight:700;font-style:normal;text-decoration:none;color:black">Dataset Category</span></th>
<th class="tg-oo11"><span style="font-weight:700;font-style:normal;text-decoration:none;color:black">Dataset Version</span></th>
<th class="tg-baqh"><span style="font-weight:bold">Number of instructions</span></th>
</tr></thead>
<tbody>
<tr>
<td class="tg-b55i" rowspan="2"><span style="font-weight:400;font-style:normal;text-decoration:none;color:black">Foundational Dataset</span></td>
<td class="tg-b55i"><span style="font-weight:400;font-style:normal;text-decoration:none;color:black">InfInstruct-3M</span></td>
<td class="tg-baqh">3463473</td>
</tr>
<tr>
<td class="tg-b55i"><span style="font-weight:400;font-style:normal;text-decoration:none;color:black">InfInstruct-7M</span></td>
<td class="tg-baqh">7449106</td>
</tr>
<tr>
<td class="tg-b55i" rowspan="3"><span style="font-weight:400;font-style:normal;text-decoration:none;color:black">Chat Dataset</span></td>
<td class="tg-b55i"><span style="font-weight:400;font-style:normal;text-decoration:none;color:black">InfInstruct-0613</span></td>
<td class="tg-baqh">362330</td>
</tr>
<tr>
<td class="tg-b55i"><span style="font-weight:400;font-style:normal;text-decoration:none;color:black">InfInstruct-0625</span></td>
<td class="tg-baqh">659808</td>
</tr>
<tr>
<td class="tg-b55i"><span style="font-weight:400;font-style:normal;text-decoration:none;color:black">InfInstruct-Gen (0729)</span></td>
<td class="tg-baqh">1456927</td>
</tr>
</tbody></table>
## How to use
You can load the dataset and models of Infinity-Instruct with this code:
```python
# Download the datasets
from datasets import load_dataset
dataset_7M = load_dataset('BAAI/Infinity-Instruct','7M',split='train')
dataset_Gen = load_dataset('BAAI/Infinity-Instruct','Gen',split='train')
# Download the models
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_llama3_1_70B = AutoModelForCausalLM.from_pretrained("BAAI/Infinity-Instruct-7M-Gen-Llama3_1-70B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
tokenizer_llama3_1_70B = AutoTokenizer.from_pretrained("BAAI/Infinity-Instruct-7M-Gen-Llama3_1-70B")
model_mistral_7B = AutoModelForCausalLM.from_pretrained("BAAI/Infinity-Instruct-7M-Gen-Mistral-7B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
tokenizer_mistral_7B = AutoTokenizer.from_pretrained("BAAI/Infinity-Instruct-7M-Gen-Mistral-7B")
```
Infinity-Instruct labels each piece of instruction data with information such as **language**, **ability type**, and **source**:
```json
{"id": "3023f510b6ea9af835492b46dc171c3f",
"conversations": [{"from": "human", "value": "What is the biological term for Magic Mushrooms? Psilocybin mushrooms, commonly known as magic mushrooms, are a polyphyletic informal group of fungi that contain psilocybin which turns into psilocin upon ingestion. Biological genera containing psilocybin mushrooms include Psilocybe, Panaeolus (including Copelandia), Inocybe, Pluteus, Gymnopilus, and Pholiotina. Psilocybin mushrooms have been and continue to be used in indigenous New World cultures in religious, divinatory, or spiritual contexts. Psilocybin mushrooms are also used as recreational drugs. They may be depicted in Stone Age rock art in Africa and Europe but are most famously represented in the Pre-Columbian sculptures and glyphs seen throughout North, Central, and South America."}, {"from": "gpt", "value": "Magic Mushrooms are the colloquial term for Psilocybin mushrooms"}],
"label": {
"ability_en": ["fact checking", "knowledge query"],
"ability_zh": ["事实查询", "知识查询"],
"cate_ability_zh": ["信息处理与整合"],
"cate_ability_en": ["information processing and integration"]},
"langdetect": "en",
"source": "Subjective"}
```
You can build the subsets of data for your own needs based on these labels.
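For instance, here is a minimal sketch of such label-based filtering. The records below are placeholders following the JSON schema shown above, and `filter_by_ability` is a hypothetical helper, not part of the dataset tooling:

```python
# Hypothetical helper: keep records whose English ability tags include `ability`.
def filter_by_ability(records, ability):
    return [
        r for r in records
        if ability in r.get("label", {}).get("ability_en", [])
    ]

# Placeholder records mirroring the schema shown above.
records = [
    {"id": "a", "label": {"ability_en": ["fact checking", "knowledge query"]}},
    {"id": "b", "label": {"ability_en": ["math reasoning"]}},
]

subset = filter_by_ability(records, "fact checking")
print([r["id"] for r in subset])  # ['a']
```

The same pattern extends to the `cate_ability_en`, `langdetect`, or `source` fields when building domain- or language-specific subsets.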
To finetune a model based on Infinity-Instruct, we recommend using the training hyperparameters we provide:
- [Llama](https://huggingface.co/BAAI/Infinity-Instruct-7M-Gen-Llama3_1-70B)
- [Mistral](https://huggingface.co/BAAI/Infinity-Instruct-7M-Gen-Mistral-7B)
- [Qwen](https://huggingface.co/BAAI/Infinity-Instruct-3M-0625-Qwen2-7B)
- [Yi](https://huggingface.co/BAAI/Infinity-Instruct-3M-0625-Yi-1.5-9B)
## Data sources
- The details of Infinity-Instruct-7M after deduplication are shown in the following table.
| **Raw Dataset** | **Numbers of Rows** | |
|-----------------------------------------------|:-------------------:|---|
| glaiveai/glaive-code-assistant-v3 | 9281 | |
| Replete-AI/code_bagel_hermes-2.5 | 386649 | |
| m-a-p/CodeFeedback-Filtered-Instruction | 60735 | |
| bigcode/self-oss-instruct-sc2-exec-filter-50k | 50467 | |
| codefuse-ai/CodeExercise-Python-27k | 27159 | |
| nickrosh/Evol-Instruct-Code-80k-v1 | 43354 | |
| jinaai/code_exercises | 590958 | |
| TokenBender/code_instructions_122k_alpaca_style | 23130 | |
| iamtarun/python_code_instructions_18k_alpaca | 2581 | |
| Nan-Do/instructional_code-search-net-python | 82920 | |
| Safurai/Code-Instruct-700k | 10860 | |
| ajibawa-2023/Python-Code-23k-ShareGPT | 2297 | |
| jtatman/python-code-dataset-500k | 88632 | |
| m-a-p/Code-Feedback | 79513 | |
| TIGER-Lab/MathInstruct | 329254 | |
| microsoft/orca-math-word-problems-200k | 398168 | |
| MetaMathQa | 690138 | |
| teknium/Openhermes-2.5 | 855478 | |
| google/flan | 2435840 | |
| Selected subjective instructions | 1342427 | |
| **Summary** | **7449106** | |
- Source and number of subjective instructions:
| **Raw Dataset** | **Numbers of Rows** |
|------------------------------|:-------------------:|
| Alpaca GPT4 data | 13490 |
| Alpaca GPT4 data zh | 32589 |
| Baize | 14906 |
| BELLE Generated Chat | 43775 |
| BELLE Multiturn Chat | 210685 |
| BELLE 3.5M CN | 312598 |
| databricks-dolly-15K | 10307 |
| LIMA-sft | 712 |
| CodeContest | 523 |
| LongForm | 3290 |
| ShareGPT-Chinese-English-90k | 8919 |
| UltraChat | 237199 |
| Wizard evol instruct zh | 44738 |
| Wizard evol instruct 196K | 88681 |
| BELLE School Math | 38329 |
| Code Alpaca 20K | 13296 |
| WildChat | 61873 |
| COIG-CQIA | 45793 |
| BAGEL | 55193 |
| DEITA | 10000 |
| **Summary** | **1342427** |
The domain distribution of the subjective instruction category is shown in the following picture.

## **Instruction Selection for Downstream Tasks**
To create an objective ranking, we utilize datasets such as Flan and OpenHermes, with a focus on enhancing code and math capabilities. The method includes detailed topic distribution tagging of the evaluation set (e.g., data structures, sorting in humaneval). We apply heuristic rules to filter out irrelevant data based on the dataset source (e.g., removing network or file I/O operations). We further retrieve a subset from the training set based on the distribution in the validation sets.
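As a rough sketch of that heuristic-rule step (the actual rules are unpublished; the patterns below are assumptions for illustration only), irrelevant code samples can be dropped with simple pattern matching:

```python
import re

# Hypothetical patterns standing in for the heuristic rules described above:
# drop code samples that perform file or network I/O.
DROP_PATTERNS = [
    re.compile(r"\bopen\s*\("),   # file I/O
    re.compile(r"\brequests\."),  # HTTP library calls
    re.compile(r"\bsocket\."),    # raw network access
]

def keep_sample(code: str) -> bool:
    """Return True if the sample matches none of the drop patterns."""
    return not any(p.search(code) for p in DROP_PATTERNS)

samples = [
    "def add(a, b):\n    return a + b",
    "data = open('train.txt').read()",
]
print([keep_sample(s) for s in samples])  # [True, False]
```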
## **Instruction Generation for High-Quality Response**

### High-Quality Open Source Instruction Collection and Tag System
We start by collecting high-quality open-source instruction sets. We assign each instruction in the collection a set of tags that describe the abilities and knowledge necessary to complete the instruction. With this tagging system, we can recognize the content distribution of the collection and the abilities required for completing different tasks.
- Instruction collection: We systematically reviewed available open-source instruction sets and included sets created by humans and advanced LLMs.
- Tag system: a two-level hierarchy:
- First-level tags: describe the specific knowledge and abilities required for completing each instruction (e.g., Arithmetic Calculation, Knowledge of Biology). These tags are automatically generated by an LLM.
- Second-level tags: macro categories such as "Natural Language Processing" and "Math Reasoning," comprising 25 categories in total.
### Informative Instruction Selection
Aimed at selecting the most informative instructions from the whole collection to enhance LLM performance and improve user experience.
- Informative Instructions:
- Instructions demand multiple kinds of abilities or multiple domains of knowledge. Such instructions are recognized by our tag system.
- Instructions with long-tailed ability or knowledge;
- Instructions with high following difficulty. The following difficulty of instructions is obtained using the method of Li et al. [1].
### Instruction Generation by Data Evolution Strategy
We expand the seed instructions in directions breadth, depth, difficulty, and complexity with a method built based on [2], and use AI assistants to generate multi-turn data.
- Based on the metadata selected in the previous section, we expand the instructions by randomly selecting one dimension from breadth, depth, difficulty and complexity dimensions on the basis of the Evol-Instruct method.
- Validate the evolved data, and use AI assistants to eliminate data that failed to evolve from the perspective of instruction compliance.
- Use the evolved instructions as the initial input, and use an AI assistant to play different roles to generate 2 to 4 rounds of dialogue for each instruction.
### Instruction Generation by Model Ability Deficient Diagnosis
Automatically identifying weaknesses in the model's capabilities to guide the synthesis of data.
- Model performance evaluation System: Constituted by a collection of commonly used evaluation sets;
- Automatic ability deficiency diagnosis: inducing shortcomings from ground-truth answers and model outputs using AI assistants;
- Targeted data synthesis: Automatically generate new instructions using AI assistants based on the induced deficiencies.
## Reference
[1] Li M, Zhang Y, He S, et al. Superfiltering: Weak-to-strong data filtering for fast instruction-tuning[J]. arXiv preprint arXiv:2402.00530, 2024.
[2] Xu C, Sun Q, Zheng K, et al. WizardLM: Empowering large pre-trained language models to follow complex instructions[C]//The Twelfth International Conference on Learning Representations. 2023.
[3] Zhang G, Qu S, Liu J, et al. Map-neo: Highly capable and transparent bilingual large language model series[J]. arXiv preprint arXiv:2405.19327, 2024.
## Citation
Our paper, detailing the development and features of the **Infinity Instruct** dataset, will be released soon on arXiv. Stay tuned!
```
@article{InfinityInstruct2024,
title={Infinity Instruct},
author={Beijing Academy of Artificial Intelligence (BAAI)},
journal={arXiv preprint arXiv:2406.XXXX},
year={2024}
}
@article{zhao2024iidoptimizinginstructionlearning,
title={Beyond IID: Optimizing Instruction Learning from the Perspective of Instruction Interaction and Dependency},
author={Hanyu Zhao and Li Du and Yiming Ju and Chengwei Wu and Tengfei Pan},
year={2024},
eprint={2409.07045},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2409.07045},
}
@misc{zhang2024inifinitymath,
title={InfinityMATH: A Scalable Instruction Tuning Dataset in Programmatic Mathematical Reasoning},
author={Bo-Wen Zhang and Yan Yan and Lin Li and Guang Liu},
year={2024},
eprint={2408.07089},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2408.07089},
}
```
|
supergoose/flan_combined_task775_pawsx_chinese_text_modification | supergoose | 2025-02-28T02:17:26Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-28T02:17:14Z | 0 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: _template_idx
dtype: int64
- name: _task_source
dtype: string
- name: _task_name
dtype: string
- name: _template_type
dtype: string
splits:
- name: train
num_bytes: 530544
num_examples: 745
download_size: 151576
dataset_size: 530544
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
withmartian/cs5_dataset_synonyms | withmartian | 2025-05-10T14:14:13Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-10T14:14:08Z | 0 | ---
dataset_info:
features:
- name: command_set
dtype: int64
- name: table_name
dtype: string
- name: table_name_synonym
dtype: string
- name: table_name_use_synonym
dtype: bool
- name: create_statement
dtype: string
- name: english_prompt
dtype: string
- name: sql_statement
dtype: string
- name: table_fields
dtype: string
- name: select
dtype: string
- name: order_by
dtype: string
splits:
- name: train
num_bytes: 138765033
num_examples: 76500
- name: validation
num_bytes: 24497428
num_examples: 13500
- name: test
num_bytes: 18176991
num_examples: 10000
download_size: 49531256
dataset_size: 181439452
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
1231czx/deepseek_gen_deepseek_test_prm_math | 1231czx | 2024-11-08T14:54:30Z | 19 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-08T14:51:50Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: answers
sequence: string
- name: label
sequence: int64
- name: step_scores
sequence:
sequence: float64
splits:
- name: train
num_bytes: 549487522
num_examples: 500
download_size: 149700085
dataset_size: 549487522
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
deremustapha/FlexAdapt_EMG_Dataset | deremustapha | 2025-06-18T02:16:55Z | 0 | 0 | [
"language:en",
"license:cc-by-4.0",
"region:us"
] | [] | 2025-06-18T02:10:09Z | 0 | ---
license: cc-by-4.0
language:
- en
--- |
yzsun2025/ur5_fold_towel_mar26 | yzsun2025 | 2025-03-26T13:32:03Z | 70 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"realworld",
"dual_arm"
] | [
"robotics"
] | 2025-03-26T12:56:20Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- realworld
- dual_arm
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "ur5_dual_arm",
"total_episodes": 40,
"total_frames": 14001,
"total_tasks": 1,
"total_videos": 0,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 10,
"splits": {
"train": "0:40"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"state": {
"dtype": "float32",
"shape": [
14
],
"names": [
"state"
]
},
"actions": {
"dtype": "float32",
"shape": [
14
],
"names": [
"actions"
]
},
"cam_high_image": {
"dtype": "image",
"shape": [
256,
256,
3
],
"names": [
"height",
"width",
"channel"
]
},
"cam_left_wrist_image": {
"dtype": "image",
"shape": [
256,
256,
3
],
"names": [
"height",
"width",
"channel"
]
},
"cam_right_wrist_image": {
"dtype": "image",
"shape": [
256,
256,
3
],
"names": [
"height",
"width",
"channel"
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
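As a small illustrative sketch (not part of the original card), the `data_path` template in `info.json` above can be resolved to a concrete parquet file path for a given episode:

```python
# Resolve the data_path template from info.json for a given episode.
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"

chunks_size = 1000  # from info.json
episode_index = 5
episode_chunk = episode_index // chunks_size  # episodes are grouped in chunks of 1000

path = data_path.format(episode_chunk=episode_chunk, episode_index=episode_index)
print(path)  # data/chunk-000/episode_000005.parquet
```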
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
infinite-dataset-hub/AutonomousDriveDecisions | infinite-dataset-hub | 2025-02-19T11:02:34Z | 14 | 1 | [
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"infinite-dataset-hub",
"synthetic"
] | [] | 2025-02-19T11:02:33Z | 0 | ---
license: mit
tags:
- infinite-dataset-hub
- synthetic
---
# AutonomousDriveDecisions
tags: Autonomous Driving, Decision Making, Multimodal
_Note: This is an AI-generated dataset so its content may be inaccurate or false_
**Dataset Description:**
The 'AutonomousDriveDecisions' dataset is a collection of scenarios where autonomous vehicles must make critical driving decisions. Each row in the dataset represents a unique driving situation, including the visual and sensor data available to the vehicle's system, the options presented to the autonomous driving algorithm, and the decisions it ultimately makes. The scenarios are multimodal, incorporating various types of data such as camera images, LIDAR point clouds, and vehicle telemetry. The labels indicate the categorization of the decision made by the autonomous driving system, which may include 'Proceed', 'Obstacle Avoidance', 'Emergency Brake', 'Change Lane', or 'Route Recommendation'.
**CSV Content Preview:**
```
situation_id,visual_data_description,sensor_data_description,decision_options,labels
001,a busy urban intersection with pedestrians,LIDAR detects a group of people crossing,Proceed, Obstacle Avoidance
002,foggy highway conditions,camera has low visibility,Change Lane, Route Recommendation
003,a sharp curve with oncoming traffic,LIDAR and camera show clear road ahead,Proceed, No Action
004,rain causing poor road visibility,radar detects slippery surface,Emergency Brake, Hazard Response
005,a school zone with children playing,camera and LIDAR detect multiple pedestrians,Change Lane, Obstacle Avoidance
```
**Source of the data:**
The dataset was generated using the [Infinite Dataset Hub](https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub) and microsoft/Phi-3-mini-4k-instruct using the query 'tags: autonomous driving, decision making, multimodal':
- **Dataset Generation Page**: https://huggingface.co/spaces/infinite-dataset-hub/infinite-dataset-hub?q=tags:+autonomous+driving,+decision+making,+multimodal&dataset=AutonomousDriveDecisions&tags=Autonomous+Driving,+Decision+Making,+Multimodal
- **Model**: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
- **More Datasets**: https://huggingface.co/datasets?other=infinite-dataset-hub
|
kornwtp/idkmrc-ind-qaretrieval | kornwtp | 2025-01-28T08:26:57Z | 17 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-28T08:26:54Z | 0 | ---
dataset_info:
features:
- name: context
dtype: string
- name: qas
list:
- name: id
dtype: string
- name: is_impossible
dtype: bool
- name: question
dtype: string
- name: answers
list:
- name: text
dtype: string
- name: answer_start
dtype: int64
splits:
- name: train
num_bytes: 3219078
num_examples: 3659
- name: validation
num_bytes: 293696
num_examples: 358
- name: test
num_bytes: 319330
num_examples: 378
download_size: 2239089
dataset_size: 3832104
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
tmpmodelsave/type134_step300_bz64tmp10 | tmpmodelsave | 2025-01-11T01:39:01Z | 55 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-11T01:39:00Z | 0 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
- name: rewards
sequence: bool
splits:
- name: train
num_bytes: 14996200
num_examples: 5000
download_size: 5136985
dataset_size: 14996200
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
michsethowusu/kimbundu-swati_sentence-pairs | michsethowusu | 2025-03-30T19:40:21Z | 10 | 0 | [
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-30T18:13:30Z | 0 |
---
dataset_info:
features:
- name: score
dtype: float32
- name: Kimbundu
dtype: string
- name: Swati
dtype: string
splits:
- name: train
num_bytes: 2131467
num_examples: 21334
download_size: 2131467
dataset_size: 2131467
configs:
- config_name: default
data_files:
- split: train
path: Kimbundu-Swati_Sentence-Pairs.csv
---
# Kimbundu-Swati_Sentence-Pairs Dataset
This dataset contains sentence pairs for African languages along with similarity scores. It can be used for machine translation, sentence alignment, or other natural language processing tasks.
This dataset is based on the NLLBv1 dataset, published on OPUS under an open-source initiative led by META. You can find more information here: [OPUS - NLLB-v1](https://opus.nlpl.eu/legacy/NLLB-v1.php)
## Metadata
- **File Name**: Kimbundu-Swati_Sentence-Pairs
- **Number of Rows**: 21334
- **Number of Columns**: 3
- **Columns**: score, Kimbundu, Swati
## Dataset Description
The dataset contains sentence pairs in African languages with an associated similarity score. Each row consists of three columns:
1. `score`: The similarity score between the two sentences (range from 0 to 1).
2. `Kimbundu`: The first sentence in the pair (language 1).
3. `Swati`: The second sentence in the pair (language 2).
This dataset is intended for use in training and evaluating machine learning models for tasks like translation, sentence similarity, and cross-lingual transfer learning.
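A minimal sketch of filtering pairs by the similarity score described above (column names taken from the card; the rows here are placeholders, not real data):

```python
# Each row mirrors the CSV columns: score, Kimbundu, Swati.
rows = [
    {"score": 0.92, "Kimbundu": "...", "Swati": "..."},
    {"score": 0.41, "Kimbundu": "...", "Swati": "..."},
    {"score": 0.77, "Kimbundu": "...", "Swati": "..."},
]

def filter_pairs(rows, min_score=0.7):
    """Keep only sentence pairs whose similarity score meets the threshold."""
    return [r for r in rows if r["score"] >= min_score]

print(len(filter_pairs(rows)))  # 2
```

Thresholding on `score` is a common first step before using mined bitext for translation training, since low-scoring pairs are often misaligned.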
## References
Below are papers related to how the data was collected and used in various multilingual and cross-lingual applications:
[1] Holger Schwenk and Matthijs Douze, Learning Joint Multilingual Sentence Representations with Neural Machine Translation, ACL workshop on Representation Learning for NLP, 2017
[2] Holger Schwenk and Xian Li, A Corpus for Multilingual Document Classification in Eight Languages, LREC, pages 3548-3551, 2018.
[3] Holger Schwenk, Filtering and Mining Parallel Data in a Joint Multilingual Space ACL, July 2018
[4] Alexis Conneau, Guillaume Lample, Ruty Rinott, Adina Williams, Samuel R. Bowman, Holger Schwenk and Veselin Stoyanov, XNLI: Cross-lingual Sentence Understanding through Inference, EMNLP, 2018.
[5] Mikel Artetxe and Holger Schwenk, Margin-based Parallel Corpus Mining with Multilingual Sentence Embeddings arXiv, Nov 3 2018.
[6] Mikel Artetxe and Holger Schwenk, Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond arXiv, Dec 26 2018.
[7] Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Paco Guzman, WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia arXiv, July 11 2019.
[8] Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave and Armand Joulin CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB
[9] Paul-Ambroise Duquenne, Hongyu Gong, Holger Schwenk, Multimodal and Multilingual Embeddings for Large-Scale Speech Mining, NeurIPS 2021, pages 15748-15761.
[10] Kevin Heffernan, Onur Celebi, and Holger Schwenk, Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages
|
cchoi1/pylint_200_hints_location | cchoi1 | 2024-12-06T17:41:26Z | 8 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-06T17:41:23Z | 0 | ---
dataset_info:
features:
- name: instance_id
dtype: string
- name: repo
dtype: string
- name: unittest_output
dtype: string
- name: files
sequence: string
- name: functions
sequence: string
- name: lines
sequence: int64
- name: file_function_line_map
dtype: string
- name: failed_tests
sequence: string
- name: test_outcome_summary
dtype: string
- name: bug_comments
dtype: string
- name: diff
dtype: string
- name: FAIL_TO_PASS
sequence: string
- name: PASS_TO_PASS
sequence: string
- name: problem_statement
dtype: string
- name: base_commit
dtype: string
- name: patch
dtype: string
- name: test_patch
dtype: string
- name: hints_text
dtype: string
- name: created_at
dtype: string
- name: version
dtype: string
- name: environment_setup_commit
dtype: string
- name: file
dtype: string
- name: line_number
sequence: string
splits:
- name: test
num_bytes: 222428773
num_examples: 200
download_size: 29568832
dataset_size: 222428773
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
Mohamed-DLM/eld7e7_AveQVd2ubZI_mp3_updated | Mohamed-DLM | 2025-03-07T21:55:31Z | 17 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-09T09:58:42Z | 0 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 24174351.0
num_examples: 53
download_size: 24155626
dataset_size: 24174351.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
openpecha/OCR-Tibetan_layout_analysis_mask_annotation | openpecha | 2025-01-17T11:30:00Z | 18 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-17T05:02:28Z | 0 | ---
dataset_info:
features:
- name: image_id
dtype: string
- name: format
dtype: string
- name: BDRC_work_id
dtype: string
- name: image_size_pixel
dtype: string
- name: original_image
dtype: string
- name: mask_image
dtype: string
splits:
- name: train
num_bytes: 4099188
num_examples: 15714
- name: test
num_bytes: 3092421
num_examples: 11907
- name: val
num_bytes: 4064857
num_examples: 15630
download_size: 1762864
dataset_size: 11256466
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: val
path: data/val-*
---
|
Teera/RelationExtraction-NLG-Thai | Teera | 2023-12-03T03:56:13Z | 70 | 2 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-12-03T03:51:49Z | 1 | ---
license: apache-2.0
---
This is a Thai translation of an English NLG dataset for relation extraction.
|
lucasmccabe/logiqa | lucasmccabe | 2023-02-08T01:51:31Z | 802 | 25 | [
"task_categories:question-answering",
"language:en",
"size_categories:1K<n<10K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2007.08124",
"region:us"
] | [
"question-answering"
] | 2023-01-12T04:14:53Z | 1 | ---
task_categories:
- question-answering
language:
- en
pretty_name: LogiQA
size_categories:
- 1K<n<10K
paperswithcode_id: logiqa
dataset_info:
features:
- name: context
dtype: string
- name: query
dtype: string
- name: options
sequence:
dtype: string
- name: correct_option
dtype: string
splits:
- name: train
num_examples: 7376
- name: validation
num_examples: 651
- name: test
num_examples: 651
---
# Dataset Card for LogiQA
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
LogiQA is constructed from logical comprehension problems in publicly available questions of the National Civil Servants Examination of China, which are designed to test civil servant candidates' critical thinking and problem solving. This dataset includes the English versions only; the Chinese versions are available via the homepage/original source.
## Dataset Structure
### Data Instances
An example from `train` looks as follows:
```
{'context': 'Continuous exposure to indoor fluorescent lights is beneficial to the health of hamsters with heart disease. One group of hamsters exposed to continuous exposure to fluorescent lights has an average lifespan that is 2.5% longer than another one of the same species but living in a black wall.',
'query': 'Which of the following questions was the initial motivation for conducting the above experiment?',
'options': ['Can hospital light therapy be proved to promote patient recovery?',
'Which one lives longer, the hamster living under the light or the hamster living in the dark?',
'What kind of illness does the hamster have?',
'Do some hamsters need a period of darkness?'],
'correct_option': 0}
```
### Data Fields
- `context`: a `string` feature.
- `query`: a `string` feature.
- `options`: a `list` feature containing `string` features.
- `correct_option`: a `string` feature.
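Since `correct_option` stores an integer index into `options`, recovering the gold answer string is a one-line lookup. A minimal sketch using the `train` example above:

```python
# The train instance shown above, abbreviated.
record = {
    "query": "Which of the following questions was the initial motivation for conducting the above experiment?",
    "options": [
        "Can hospital light therapy be proved to promote patient recovery?",
        "Which one lives longer, the hamster living under the light or the hamster living in the dark?",
        "What kind of illness does the hamster have?",
        "Do some hamsters need a period of darkness?",
    ],
    "correct_option": 0,
}

# The gold answer is the option at the stored index.
gold_answer = record["options"][record["correct_option"]]
print(gold_answer)
```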
### Data Splits
|train|validation|test|
|----:|---------:|---:|
| 7376| 651| 651|
## Additional Information
### Dataset Curators
The original LogiQA was produced by Jian Liu, Leyang Cui , Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang.
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{liu2020logiqa,
title={Logiqa: A challenge dataset for machine reading comprehension with logical reasoning},
author={Liu, Jian and Cui, Leyang and Liu, Hanmeng and Huang, Dandan and Wang, Yile and Zhang, Yue},
journal={arXiv preprint arXiv:2007.08124},
year={2020}
}
```
### Contributions
[@lucasmccabe](https://github.com/lucasmccabe) added this dataset. |
fernandabufon/results_bert_v6_unbalanced | fernandabufon | 2025-02-15T06:29:18Z | 18 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-15T06:29:16Z | 0 | ---
dataset_info:
features:
- name: modelo
dtype: string
- name: accuracy
dtype: float64
- name: mcc
dtype: float64
- name: precision_weighted
dtype: float64
- name: recall_weighted
dtype: float64
- name: f1_score_weighted
dtype: float64
- name: confusion_matrix
sequence:
sequence: int64
splits:
- name: train
num_bytes: 378
num_examples: 1
download_size: 3884
dataset_size: 378
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gptilt/lol-basic-matches-challenger-10k | gptilt | 2025-05-27T19:09:11Z | 230 | 2 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-18T10:37:11Z | 0 | ---
configs:
- config_name: matches
data_files:
- split: region_americas
path: matches/region_americas*.parquet
- split: region_asia
path: matches/region_asia*.parquet
- split: region_europe
path: matches/region_europe*.parquet
- config_name: participants
data_files:
- split: region_americas
path: participants/region_americas*.parquet
- split: region_asia
path: participants/region_asia*.parquet
- split: region_europe
path: participants/region_europe*.parquet
- config_name: events
data_files:
- split: region_americas
path: events/region_americas*.parquet
- split: region_asia
path: events/region_asia*.parquet
- split: region_europe
path: events/region_europe*.parquet
---
# GPTilt: 10K League of Legends Challenger Matches
This dataset is part of the [GPTilt](https://github.com/gptilt) open-source initiative, aimed at democratizing access to high-quality LoL data for research and analysis, fostering public exploration, and advancing the community's understanding of League of Legends through data science and AI. It provides detailed data from high-elo matches.
*By using this dataset, users accept full responsibility for any consequences arising from its use. GPTilt assumes no liability for any damages that may result. Users are strongly encouraged to review the ["Uses"](#uses) section—particularly the ["Out-of-Scope Use"](#out-of-scope-use) subsection—for guidance.*
## Getting Started
First, install Hugging Face's [datasets](https://pypi.org/project/datasets/) package:
```bash
pip install datasets
```
Now, you can load the dataset!
```py
from datasets import load_dataset
# Specify just the config_name / table
dataset = load_dataset("gptilt/lol-basic-matches-challenger-10k", name="matches")
# Or include the split!
dataset = load_dataset("gptilt/lol-basic-matches-challenger-10k", name="matches", split="region_americas")
```
## Dataset Summary
This dataset contains **10K League of Legends Challenger Matches**. It's a clean version of the API's data, improved for clarity and usability. Data was originally collected and processed via the official Riot Games API. The primary language is English.
## Dataset Structure
The data is structured into tables:
- **matches**: Contains match-level metadata (e.g., `matchId`, `gameDuration`, `gameVersion`, `winningTeam`).
```json
{
"matchId": "LA2_1495348800",
"region": "americas",
"server": "LA",
"gameStartTimestamp": 1743465021436,
"team_100_atakhan_first": true,
"team_100_atakhan_kills": 1,
(...)
}
```
- **participants**: Links a match's `participantIds` to the player's `PUUID`, and includes all the player endgame information regarding a match. It contains details for each of the 10 participants per match (e.g., `puuid`, `championId`, `teamId`, final stats like kills, deaths, assists, gold earned, items).
```json
{
"matchId": "LA2_1495348800",
"participantId": 10, # Red team support
"teamId": 200,
"teamPosition": "TOP",
"championId": 43,
"championName": "Karma",
"physicalDamageDealt": 6075,
(...)
}
```
- **events**: Contains a detailed timeline of in-game events (e.g., `CHAMPION_KILL`, `ITEM_PURCHASED`, `WARD_PLACED`, `BUILDING_KILL`, `ELITE_MONSTER_KILL`) with timestamps, positions (where applicable), involved participants/items, etc. Additionally, to facilitate analysis:
- All `position` fields in all events have been split into two unique fields `positionX` and `positionY`.
- Periodic snapshots (taken at `frameInterval` - in the public Riot API, every minute) of all participant states (`participantFrames`) are split into custom per-participant `PARTICIPANT_FRAME` events.
- `ELITE_MONSTER_KILL` and `CHAMPION_KILL` events are split into `_KILL` and `_ASSIST` events, with one event per participant.
- `CHAMPION_KILL` events are split into `CHAMPION_KILL` and `CHAMPION_KILLED` events, respectively. This helps model the game as a series of events that happen/are enacted to/by individual participants in the game.
- A default position is added for item events (the respective team's spawn coordinates - when the player is playing the champion Ornn, his latest coordinates are used instead) and `DRAGON_SOUL_GIVEN` events (the dragon pit's coordinates).
```json
{
"matchId": "LA2_1495348800",
"eventId": 10, # IDs are attributed per match
"timestamp": 194787,
"type": "LEVEL_UP",
(...)
}
```
All match tables have a `matchId` column, making it possible to join tables with data from different regions without conflict (the `gameId` column, on the other hand, is not unique across regions).
Additionally, data is segmented into 3 splits: ['region_americas', 'region_asia', 'region_europe'].
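Because `matchId` is unique across regions, the tables can be joined on it directly. A minimal pandas sketch with toy rows (in a real workflow these DataFrames would come from the datasets loaded above, e.g. via `.to_pandas()`):

```python
import pandas as pd

# Toy rows shaped like the real tables, for illustration only.
matches = pd.DataFrame([
    {"matchId": "LA2_1495348800", "region": "americas", "winningTeam": 200},
])
participants = pd.DataFrame([
    {"matchId": "LA2_1495348800", "participantId": 10,
     "teamId": 200, "championName": "Karma"},
])

# Join participant rows with their match-level metadata on matchId.
joined = participants.merge(matches, on="matchId", how="left")
print(joined[["championName", "winningTeam"]])
```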
## Dataset Creation
### Curation Rationale
This dataset was created to address the lack of large-scale, publicly available, and analysis-ready datasets for League of Legends research. The GPTilt project aims to provide resources for the community to apply data science and AI techniques to better understand the intricate dynamics of the game, moving beyond simple win prediction towards interpreting strategic patterns and complex interactions. This specific dataset focuses on high-elo (Challenger) players to capture refined strategic execution.
### Source Data
#### Data Collection and Processing
The source data originates exclusively from the [**Riot Games API**](https://developer.riotgames.com/apis) and [**CDragon**](https://communitydragon.org/).
1. **Seeding:** High-elo player PUUIDs were initially identified using the `league-v4` endpoint for the Challenger tier across multiple regions.
2. **Match History:** The `match-v5` endpoint was used to retrieve recent match IDs for these players.
3. **Match & Timeline Fetching:** The `match-v5` (match details) and `match-v5` (match timeline) endpoints were used to download the full data for each unique match ID identified.
4. **Raw Storage:** Raw API responses (JSON format) were saved.
5. **Staging & Transformation:** Raw data was parsed, and transformed into three structured tables (`matches`, `participants`, `events`).
6. **Output:** Data was written to Parquet files, partitioned by `region`.
#### Who are the source data producers?
The underlying gameplay data is generated by **League of Legends players** participating in high-elo ranked matches. The **Riot Games API** serves as the source interface providing access to this gameplay data. The dataset curators are the contributors to the GPTilt project who performed the collection and processing steps. No demographic information about the players is collected, besides the region.
#### Personal and Sensitive Information
The dataset contains **PUUIDs** and **Participant IDs**, which are pseudonymous identifiers linked to League of Legends accounts. No other Personally Identifiable Information (PII) like real names, emails, or addresses is included. Use of these identifiers is subject to Riot Games' policies. Users should exercise caution and adhere to these policies, avoiding attempts to [deanonymize players who cannot reasonably be identified from visible information](https://developer.riotgames.com/policies/general#_developer-safety).
### Bias, Risks, and Limitations
- **Skill Tier Bias:** This dataset focuses *exclusively* on the Challenger tier. Findings may not generalize to other skill levels (Bronze, Silver, Gold, Platinum, Diamond, Master, Grandmaster), where metas, champion picks, and strategic execution differ significantly. Because match data is selected by searching for Challenger players, multi-tier games may be (and are expected to be) present in the dataset.
- **Regional Bias:** While collected from multiple regions, the distribution might not be perfectly balanced, potentially reflecting the metas dominant in the included regions during the collection period.
- **Patch Bias:** The data reflects gameplay on specific game versions (see `matches` table `gameVersion` field). Major patches can significantly alter champion balance, items, and objectives, potentially making findings less relevant to different patches.
- **Missing Context:** The data captures *recorded* events and states but lacks external context like player communication (voice/text chat), player fatigue/tilt, real-time strategic intent, or external distractions.
- **API Limitations:** Data is subject to the accuracy and granularity provided by the Riot Games API. Some nuanced actions or states might not be perfectly captured. Rate limits inherent to the API restrict the size and frequency of potential dataset updates.
#### Recommendations
- Users should explicitly acknowledge the **high-elo (Challenger) bias** when reporting results and be cautious about generalizing findings to other player segments.
- Always consider the **game version (`gameVersion`)** when analyzing the data, as metas and balance change significantly between patches.
- Users **must** adhere to the **Riot Games API Terms of Service and Developer Policies** in all uses of this data.
## Uses
### Disclaimer
*This dataset utilizes data from the Riot Games API. Its use is subject to the Riot Games API Terms of Service and relevant developer policies. GPTilt is not endorsed by Riot Games and does not reflect the views or opinions of Riot Games or anyone officially involved in producing or managing League of Legends. League of Legends and Riot Games are trademarks or registered trademarks of Riot Games, Inc. League of Legends © Riot Games, Inc.*
### License
This dataset and all associated code is licensed under the [Creative Commons Attribution-NonCommercial 4.0 International](https://creativecommons.org/licenses/by-nc/4.0/legalcode.en) license.
### Direct Use
This dataset is intended for **non-commercial research, data analysis, and exploration** aimed at understanding League of Legends gameplay dynamics, strategic patterns, champion interactions, and game flow. Suitable uses include:
- **Statistical analysis** of high-elo match characteristics.
- **Exploratory data analysis** to uncover **trends** and correlations.
- Training **machine learning models** (including Transformer-based architectures like LLoLMs) for tasks related to **game state representation**, event sequence modeling, pattern recognition for game understanding, etc.
- **Feature engineering** for derived metrics.
- **Educational purposes** related to data science and game analytics.
**Users must ensure their use case complies with the Riot Games API [Terms of Service](https://developer.riotgames.com/terms) and [Developer Policies](https://developer.riotgames.com/policies/general). Consult these policies before using the data.**
### Out-of-Scope Use
This dataset **must not** be used for purposes that violate the Riot Games API [Terms of Service](https://developer.riotgames.com/terms) or [Developer Policies](https://developer.riotgames.com/policies/general).
This dataset is derived from high-elo games and may not accurately represent gameplay patterns at lower skill levels. **Consult the Riot Games API [Terms of Service](https://developer.riotgames.com/terms) and [Developer Policies](https://developer.riotgames.com/policies/general) for comprehensive usage restrictions.**
## Changelist
### May 27, 2025
- Removed games that ended in a remake.
### May 26, 2025
- Refactored inventory generation, splitting it into two output columns: `inventoryIds` and `inventoryCounts`, containing item IDs and their respective counts, respectively. Fixed the inventory algorithm to handle `ITEM_UNDO` events correctly. Both columns are padded to a maximum length of 8, making them easier to work with (e.g. when performing column explosion in `pandas`/`polars`).
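The padded list columns above can be expanded in lockstep with a multi-column explode. A hedged pandas sketch with a toy row (the padding value is assumed to be 0 here; check the actual data before relying on that):

```python
import pandas as pd

# Toy participant row; both list columns are padded to length 8.
df = pd.DataFrame([
    {"participantId": 1,
     "inventoryIds":    [1055, 2003, 0, 0, 0, 0, 0, 0],
     "inventoryCounts": [1, 2, 0, 0, 0, 0, 0, 0]},
])

# Explode both padded columns together, then drop the padding slots.
exploded = df.explode(["inventoryIds", "inventoryCounts"])
exploded = exploded[exploded["inventoryIds"] != 0]
print(exploded)
```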
### May 22, 2025
- Account for system-assigned items, such as the support item assignment on game start.
- Remove unnecessary fields from `matches` table.
### May 18, 2025
- Challenge and mission information were removed from the `matches` table.
- `ELITE_MONSTER_KILL` and `CHAMPION_KILL` events were split into `_KILL` and `_ASSIST` events, respectively.
- `CHAMPION_KILL` events were split into `CHAMPION_KILL` and `CHAMPION_KILLED` events.
- Event field `killerId` was replaced by `participantId`, with the exception of the new `CHAMPION_KILLED` events.
- Normalize rune information in `participants`.
- Create `OBJECTIVE_BOUNTY_START` event from `OBJECTIVE_BOUNTY_PRESTART` event (announcement).
## Citation
**If you wish to use this dataset in your work, we kindly ask that you cite it.**
For most informal work, a simple mention of the GPTilt project and the 10K League of Legends Challenger Matches dataset will suffice.
**BibTeX:**
```bibtex
@misc{gptilt_10k_league_of_legends_challenger_matches,
author = { GPTilt Contributors },
title = { 10K League of Legends Challenger Matches },
year = { 2025 },
publisher = { Hugging Face },
journal = { Hugging Face Hub },
url = { https://huggingface.co/datasets/gptilt/lol-basic-matches-challenger-10k }
}
``` |
chiyuanhsiao/text_no-replay-15_llama-questions | chiyuanhsiao | 2025-02-14T03:53:58Z | 18 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-14T03:53:53Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: my_prediction_text
dtype: string
splits:
- name: test
num_bytes: 5517621
num_examples: 300
download_size: 1316292
dataset_size: 5517621
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
TAUR-dev/evals__long_multiplication__four_digit__train__4o | TAUR-dev | 2025-04-01T19:21:54Z | 10 | 0 | [
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"curator"
] | [] | 2025-04-01T19:03:00Z | 0 | ---
language: en
license: mit
tags:
- curator
---
<a href="https://github.com/bespokelabsai/curator/">
<img src="https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k/resolve/main/made_with_curator.png" alt="Made with Curator" width=200px>
</a>
## Dataset card for evals__long_multiplication__four_digit__train__4o
This dataset was made with [Curator](https://github.com/bespokelabsai/curator/).
## Dataset details
A sample from the dataset:
```python
{
"question": "What is the solution to the long multiplication equation below?\n\n8274 x 3529\n\nThink step by step.",
"solution": "29198946",
"eval_internal_cot": "",
"eval_solution": "To solve the multiplication problem 8274 multiplied by 3529 using the long multiplication method, we will break down the process step by step:\n\n### Step 1: Set Up the Problem\nWrite the numbers vertically and align them by their least significant digits (rightmost digits):\n\n```\n 8274\nx 3529\n```\n\n### Step 2: Multiply by Each Digit of the Bottom Number\nMultiply 8274 by each digit of 3529, starting from the rightmost digit. Each multiplication will create a partial product.\n\n#### Multiply by 9 (Rightmost digit of 3529)\n```\n 8274\nx 9\n-------\n 74466 (8274 * 9)\n```\n\n#### Multiply by 2\nShift one position to the left before multiplying since we are dealing with the next digit.\n\n```\n 8274\nx 20\n-------\n 16548 0 (8274 * 2, shifted one place to the left)\n```\n\n#### Multiply by 5\nShift two positions to the left for the third digit.\n\n```\n 8274\nx 500\n-------\n 41370 00 (8274 * 5, shifted two places to the left)\n```\n\n#### Multiply by 3 (Leftmost digit of 3529)\nShift three positions to the left for the fourth digit.\n\n```\n 8274\nx 3000\n-------\n24822 000 (8274 * 3, shifted three places to the left)\n```\n\n### Step 3: Add All Partial Products Together\nAdd all of the partial products we calculated:\n\n```\n 74466\n 165480\n 413700\n+24822000\n---------\n 29191846\n```\n\n### Conclusion\nThus, the solution to the multiplication of 8274 by 3529 is:\n\n\\[\n8274 \\times 3529 = 29,191,846\n\\]",
"judge_correct": false,
"judge_reasoning": "The provided solution calculates 8274 multiplied by 3529 as 29191846, which is incorrect since the ground truth solution shows the correct answer as 29198946. The discrepancy indicates that the multiplication was performed inaccurately."
}
```
## Loading the dataset
You can load this dataset using the following code:
```python
from datasets import load_dataset
dataset = load_dataset("TAUR-dev/evals__long_multiplication__four_digit__train__4o", split="default")
```
|
DenisDiCaprio/orpheutsTTS_finetuning_dataset_preprocessed | DenisDiCaprio | 2025-05-11T19:42:16Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-11T19:42:14Z | 0 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: labels
sequence: int64
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 4664602
num_examples: 734
download_size: 1542436
dataset_size: 4664602
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Asap7772/elix_gen_eval_4shot_infsft-pair_winrate_gpt4o_pref_train | Asap7772 | 2024-12-13T23:55:07Z | 17 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-13T23:55:03Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: level_x
dtype: string
- name: level_id_x
dtype: int64
- name: model_name_x
dtype: string
- name: response_x
dtype: string
- name: level_y
dtype: string
- name: level_id_y
dtype: int64
- name: model_name_y
dtype: string
- name: response_y
dtype: string
- name: scorer_level
dtype: string
- name: scorer_level_id
dtype: int64
- name: label
dtype: int64
- name: det_choice
dtype: int64
- name: choice1
dtype: string
- name: reason1
dtype: string
- name: choice2
dtype: string
- name: reason2
dtype: string
splits:
- name: train
num_bytes: 11288883
num_examples: 2114
download_size: 2916816
dataset_size: 11288883
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|