Dataset Viewer

Columns (type and observed range; ⌀ marks a nullable column):

- _id: string, length 24
- id: string, length 5-121
- author: string, length 2-42
- cardData: string, length 2-1.07M ⌀
- disabled: bool, 2 classes
- gated: null
- lastModified: timestamp[ns], 2021-02-05 16:03:35 to 2025-04-14 23:31:54
- likes: int64, 0-7.69k
- trendingScore: float64, -1 to 158
- private: bool, 1 class
- sha: string, length 40
- description: string, length 0-6.67k ⌀
- downloads: int64, 0-5.7M
- downloadsAllTime: int64, 0-142M
- tags: sequence, length 1-7.92k
- createdAt: timestamp[ns], 2022-03-02 23:29:22 to 2025-04-14 23:29:56
- paperswithcode_id: string, 654 classes
- citation: string, length 0-10.7k ⌀
67ec47948647cfa17739af7a | nvidia/OpenCodeReasoning | nvidia | {"license": "cc-by-4.0", "size_categories": ["100K<n<1M"], "pretty_name": "OpenCodeReasoning", "dataset_info": [{"config_name": "split_0", "features": [{"name": "id", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "dataset", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "difficulty", "dtype": "string"}, {"name": "solution", "dtype": "string"}], "splits": [{"name": "split_0", "num_bytes": 28108469190, "num_examples": 567850}]}, {"config_name": "split_1", "features": [{"name": "id", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "dataset", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "difficulty", "dtype": "string"}, {"name": "solution"}, {"name": "index", "dtype": "string"}], "splits": [{"name": "split_1", "num_bytes": 4722811278, "num_examples": 167405}]}], "configs": [{"config_name": "split_0", "data_files": [{"split": "split_0", "path": "split_0/train-*"}]}, {"config_name": "split_1", "data_files": [{"split": "split_1", "path": "split_1/train-*"}]}], "task_categories": ["text-generation"], "tags": ["synthetic"]} | false | null | 2025-04-07T18:22:47 | 200 | 158 | false | 483a88186bc78293f715e0a9f06bc11a37eb6b06 |
OpenCodeReasoning: Advancing Data Distillation for Competitive Coding
Data Overview
OpenCodeReasoning is the largest reasoning-based synthetic dataset for coding to date, comprising 735,255 samples in Python across 28,319 unique competitive programming
questions. OpenCodeReasoning is designed for supervised fine-tuning (SFT).
Technical Report - Discover the methodology and technical details behind OpenCodeReasoning.
Github Repo - Access the complete pipeline used to… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/OpenCodeReasoning. | 4,347 | 4,347 | [
"task_categories:text-generation",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2504.01943",
"region:us",
"synthetic"
] | 2025-04-01T20:07:48 | null | null |
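Since the card says the dataset is designed for SFT, a minimal sketch of mapping one row onto a prompt/completion pair might look like the following. The field names (`input`, `output`, `difficulty`, `solution`) come from the dataset's declared schema; the record values below are invented, and the exact prompt template is an assumption.

```python
# Sketch: turning one OpenCodeReasoning-style record into an SFT example.
# Field names follow the declared schema; the values are made up.

def to_sft_pair(record: dict) -> dict:
    """Map a dataset row onto a prompt/completion pair for SFT."""
    prompt = (
        "Solve the following competitive programming problem in Python.\n\n"
        + record["input"]
    )
    # 'output' holds the training target; its exact contents vary by row,
    # so treat this mapping as an assumption rather than the official recipe.
    completion = record["output"]
    return {"prompt": prompt, "completion": completion}

example_row = {
    "id": "demo-0",
    "input": "Given an integer n, print n squared.",
    "output": "n = int(input())\nprint(n * n)",
    "difficulty": "easy",
    "solution": "n = int(input())\nprint(n * n)",
}

pair = to_sft_pair(example_row)
```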
67d3479522a51de18affff22 | nvidia/Llama-Nemotron-Post-Training-Dataset | nvidia | {"license": "cc-by-4.0", "configs": [{"config_name": "SFT", "data_files": [{"split": "code", "path": "SFT/code/*.jsonl"}, {"split": "math", "path": "SFT/math/*.jsonl"}, {"split": "science", "path": "SFT/science/*.jsonl"}, {"split": "chat", "path": "SFT/chat/*.jsonl"}, {"split": "safety", "path": "SFT/safety/*.jsonl"}], "default": true}, {"config_name": "RL", "data_files": [{"split": "instruction_following", "path": "RL/instruction_following/*.jsonl"}]}]} | false | null | 2025-04-09T05:35:02 | 394 | 61 | false | 8e1e47a67ced79723ad0735efc5a45f8bb5aabd6 |
Llama-Nemotron-Post-Training-Dataset-v1.1 Release
Update [4/8/2025]:
v1.1: We are releasing an additional 2.2M Math and 500K Code Reasoning Data in support of our release of Llama-3.1-Nemotron-Ultra-253B-v1. 🎉
Data Overview
This dataset is a compilation of SFT and RL data that supports improvements of math, code, general reasoning, and instruction following capabilities of the original Llama instruct model, in support of NVIDIA’s release of… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/Llama-Nemotron-Post-Training-Dataset. | 3,247 | 3,248 | [
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | 2025-03-13T21:01:09 | null | null |
67f3de7c9421ed3129d436cf | agentica-org/DeepCoder-Preview-Dataset | agentica-org | {"dataset_info": [{"config_name": "codeforces", "features": [{"name": "problem", "dtype": "string"}, {"name": "tests", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 778742, "num_examples": 408}], "download_size": 301694, "dataset_size": 778742}, {"config_name": "lcbv5", "features": [{"name": "problem", "dtype": "string"}, {"name": "starter_code", "dtype": "string"}, {"name": "tests", "dtype": "string"}, {"name": "metadata", "struct": [{"name": "func_name", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 5349497203, "num_examples": 599}, {"name": "test", "num_bytes": 3744466075, "num_examples": 279}], "download_size": 5790246998, "dataset_size": 9093963278}, {"config_name": "primeintellect", "features": [{"name": "problem", "dtype": "string"}, {"name": "solutions", "sequence": "string"}, {"name": "tests", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2312671464, "num_examples": 16252}], "download_size": 1159149534, "dataset_size": 2312671464}, {"config_name": "taco", "features": [{"name": "problem", "dtype": "string"}, {"name": "tests", "dtype": "string"}, {"name": "solutions", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 1657247795, "num_examples": 7436}], "download_size": 862295065, "dataset_size": 1657247795}], "configs": [{"config_name": "codeforces", "data_files": [{"split": "test", "path": "codeforces/test-*"}]}, {"config_name": "lcbv5", "data_files": [{"split": "train", "path": "lcbv5/train-*"}, {"split": "test", "path": "lcbv5/test-*"}]}, {"config_name": "primeintellect", "data_files": [{"split": "train", "path": "primeintellect/train-*"}]}, {"config_name": "taco", "data_files": [{"split": "train", "path": "taco/train-*"}]}], "license": "mit", "language": ["en"], "tags": ["code"], "size_categories": ["10K<n<100K"]} | false | null | 2025-04-09T20:43:48 | 57 | 57 | false | 
177913a7bd43791646ef6a43645caa3c871ab3db |
Data
Our training dataset consists of 24K problems paired with their test cases:
7.5K TACO Verified problems.
16K verified coding problems from PrimeIntellect’s SYNTHETIC-1.
600 LiveCodeBench (v5) problems submitted between May 1, 2023 and July 31, 2024.
Our test dataset consists of:
LiveCodeBench (v5) problems between August 1, 2024 and February 1, 2025.
Codeforces problems from Qwen/CodeElo.
Format
Each row in the dataset contains:
problem: The coding problem… See the full description on the dataset page: https://huggingface.co/datasets/agentica-org/DeepCoder-Preview-Dataset. | 1,539 | 1,539 | [
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"code"
] | 2025-04-07T14:17:32 | null | null |
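Per the declared schema, the `tests` field is stored as a string. Assuming it decodes to a JSON list of stdin/stdout pairs (the real encoding may differ across configs), a sketch of a checker that runs a candidate solution against a row's test cases:

```python
import json

# Sketch: checking a candidate solution against a row's test cases.
# The {"input": ..., "output": ...} encoding of 'tests' is an assumption.

def passes_all(solve, tests_field: str) -> bool:
    cases = json.loads(tests_field)
    return all(solve(c["input"]).strip() == c["output"].strip() for c in cases)

# Hypothetical row with two I/O test cases for "print the doubled number".
row = {
    "problem": "Read an integer and print its double.",
    "tests": json.dumps([
        {"input": "2\n", "output": "4\n"},
        {"input": "10\n", "output": "20\n"},
    ]),
}

def candidate(stdin: str) -> str:
    return str(int(stdin) * 2) + "\n"

assert passes_all(candidate, row["tests"])
```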
67edf568d1631250f17528af | open-thoughts/OpenThoughts2-1M | open-thoughts | {"dataset_info": {"features": [{"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "question", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 18986223337, "num_examples": 1143205}], "download_size": 8328411205, "dataset_size": 18986223337}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["synthetic", "curator"], "license": "apache-2.0"} | false | null | 2025-04-07T21:40:23 | 109 | 53 | false | 40766050d883e0aa951fd3ddee33faf3ad83f26b |
OpenThoughts2-1M
Open synthetic reasoning dataset with 1M high-quality examples covering math, science, code, and puzzles!
OpenThoughts2-1M builds upon our previous OpenThoughts-114k dataset, augmenting it with existing datasets like OpenR1, as well as additional math and code reasoning data.
This dataset was used to train OpenThinker2-7B and OpenThinker2-32B.
Inspect the content with rich formatting and search & filter capabilities in Curator Viewer.
See our blog post… See the full description on the dataset page: https://huggingface.co/datasets/open-thoughts/OpenThoughts2-1M. | 9,933 | 9,933 | [
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"synthetic",
"curator"
] | 2025-04-03T02:41:44 | null | null |
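Per the declared schema, each row's `conversations` field is a list of `{"from": ..., "value": ...}` turns. A sketch that flattens one conversation into a prompt/response pair; the role labels `human`/`gpt` follow the common ShareGPT convention and are an assumption here.

```python
# Sketch: flattening a 'conversations' list into a prompt/response pair.
# Role labels are assumed to follow the ShareGPT "human"/"gpt" convention.

def to_pair(conversations: list[dict]) -> tuple[str, str]:
    prompt = "\n".join(t["value"] for t in conversations if t["from"] == "human")
    response = "\n".join(t["value"] for t in conversations if t["from"] == "gpt")
    return prompt, response

convo = [
    {"from": "human", "value": "What is 7 * 8?"},
    {"from": "gpt", "value": "7 * 8 = 56."},
]
prompt, response = to_pair(convo)
```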
67f62a9296e24db82ed27e76 | divaroffical/real_estate_ads | divaroffical | {"license": "odbl"} | false | null | 2025-04-09T13:10:22 | 42 | 42 | false | b2427bdbeb3578177165fb52cfc527384fdf6b94 | null | 271 | 271 | [
"license:odbl",
"size_categories:1M<n<10M",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-04-09T08:06:42 | null | null |
67e9a644ea97f3c65c463bfb | LLM360/MegaMath | LLM360 | {"license": "odc-by", "task_categories": ["text-generation"], "language": ["en"], "tags": ["math", "code", "pre-training", "synthesis"], "size_categories": ["1B<n<10B"]} | false | null | 2025-04-09T13:17:50 | 63 | 39 | false | 3cbc64616594d6bc8759abaa0b2a71858f880f0d |
MegaMath: Pushing the Limits of Open Math Corpora
MegaMath is part of TxT360, curated by the LLM360 Team.
We introduce MegaMath, an open math pretraining dataset curated from diverse, math-focused sources, with over 300B tokens.
MegaMath is curated via the following three efforts:
Revisiting web data:
We re-extracted mathematical documents from Common Crawl with math-oriented HTML optimizations, fasttext-based filtering and deduplication, all for acquiring higher-quality data on the… See the full description on the dataset page: https://huggingface.co/datasets/LLM360/MegaMath. | 40,081 | 40,081 | [
"task_categories:text-generation",
"language:en",
"license:odc-by",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2504.02807",
"region:us",
"math",
"code",
"pre-training",
"synthesis"
] | 2025-03-30T20:15:00 | null | null |
67f51e10192d5ab08ffab69e | OmniSVG/MMSVG-Illustration | OmniSVG | {"license": "cc-by-nc-sa-4.0"} | false | null | 2025-04-09T03:04:41 | 38 | 38 | false | a35b1ff1253e6aa3cbc2ebda9e29a54736cb4479 | OmniSVG: A Unified Scalable Vector Graphics Generation Model
Dataset Card for MMSVG-Illustration
Dataset Description
This dataset contains SVG illustration examples for training and evaluating SVG models on text-to-SVG and image-to-SVG tasks.
Dataset Structure
Features
The dataset contains the following fields:
| Field Name | Description |
| --- | --- |
| id | Unique ID for each SVG |
| svg | SVG code |
| description | Description of the SVG |

… See the full description on the dataset page: https://huggingface.co/datasets/OmniSVG/MMSVG-Illustration. | 449 | 449 | [
"license:cc-by-nc-sa-4.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2504.06263",
"region:us"
] | 2025-04-08T13:01:04 | null | null |
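For text-to-SVG training pairs, a lightweight sanity check is to confirm that the `svg` field parses as XML with an `svg` root element. A minimal sketch; the record below is invented and only the field names follow the card's table.

```python
import xml.etree.ElementTree as ET

# Sketch: sanity-checking a (description, svg) training pair.

def is_well_formed_svg(svg_code: str) -> bool:
    try:
        root = ET.fromstring(svg_code)
    except ET.ParseError:
        return False
    # The root tag may carry the SVG namespace, so match on the local name.
    return root.tag.rsplit("}", 1)[-1] == "svg"

record = {
    "id": "demo-1",
    "svg": '<svg xmlns="http://www.w3.org/2000/svg"><circle r="4"/></svg>',
    "description": "A small circle.",
}
assert is_well_formed_svg(record["svg"])
```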
67f9abed63243ae752060832 | openai/mrcr | openai | {"license": "mit"} | false | null | 2025-04-14T18:58:12 | 38 | 38 | false | 204b0d4e8d9ca5c0a90bf942fdb2a5969094adc0 |
OpenAI MRCR: Long context multiple needle in a haystack benchmark
OpenAI MRCR (Multi-round co-reference resolution) is a long context dataset for benchmarking an LLM's ability to distinguish between multiple needles hidden in context.
This eval is inspired by the MRCR eval first introduced by Gemini (https://arxiv.org/pdf/2409.12640v2). OpenAI MRCR expands the task's difficulty and provides open-source data for reproducing results.
The task is as follows: The model is given a long… See the full description on the dataset page: https://huggingface.co/datasets/openai/mrcr. | 8 | 8 | [
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2409.12640",
"region:us"
] | 2025-04-11T23:55:25 | null | null |
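For benchmarks like this, one simple way to score a model's answer against the reference needle is a normalized string-similarity ratio. This is an illustrative scoring choice, not necessarily the official grader's exact metric.

```python
from difflib import SequenceMatcher

# Sketch: string-similarity scoring of an answer against a reference.
# Treat this as an assumed metric, not the benchmark's official grader.

def similarity(answer: str, reference: str) -> float:
    return SequenceMatcher(None, answer, reference).ratio()

score = similarity("the second poem about penguins", "the second poem about frogs")
```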
67f505664a7ad6225a4ae9ed | OmniSVG/MMSVG-Icon | OmniSVG | {"license": "cc-by-nc-sa-4.0"} | false | null | 2025-04-09T03:03:42 | 36 | 36 | false | 500f7f304c6d758d2f8764bf285440eb929246e3 | OmniSVG: A Unified Scalable Vector Graphics Generation Model
Dataset Card for MMSVG-Icon
Dataset Description
This dataset contains SVG icon examples for training and evaluating SVG models on text-to-SVG and image-to-SVG tasks.
Dataset Structure
Features
The dataset contains the following fields:
| Field Name | Description |
| --- | --- |
| id | Unique ID for each SVG |
| svg | SVG code |
| description | Description of the SVG |

Citation… See the full description on the dataset page: https://huggingface.co/datasets/OmniSVG/MMSVG-Icon. | 214 | 214 | [
"license:cc-by-nc-sa-4.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2504.06263",
"region:us"
] | 2025-04-08T11:15:50 | null | null |
66212f29fb07c3e05ad0432e | HuggingFaceFW/fineweb | HuggingFaceFW | {"license": "odc-by", "task_categories": ["text-generation"], "language": ["en"], "pretty_name": "FineWeb", "size_categories": ["n>1T"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/*/*"}]}, {"config_name": "sample-10BT", "data_files": [{"split": "train", "path": "sample/10BT/*"}]}, {"config_name": "sample-100BT", "data_files": [{"split": "train", "path": "sample/100BT/*"}]}, {"config_name": "sample-350BT", "data_files": [{"split": "train", "path": "sample/350BT/*"}]}, {"config_name": "CC-MAIN-2024-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-51/*"}]}, {"config_name": "CC-MAIN-2024-46", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-46/*"}]}, {"config_name": "CC-MAIN-2024-42", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-42/*"}]}, {"config_name": "CC-MAIN-2024-38", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-38/*"}]}, {"config_name": "CC-MAIN-2024-33", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-33/*"}]}, {"config_name": "CC-MAIN-2024-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-30/*"}]}, {"config_name": "CC-MAIN-2024-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-26/*"}]}, {"config_name": "CC-MAIN-2024-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-22/*"}]}, {"config_name": "CC-MAIN-2024-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-18/*"}]}, {"config_name": "CC-MAIN-2024-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-10/*"}]}, {"config_name": "CC-MAIN-2023-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-50/*"}]}, {"config_name": "CC-MAIN-2023-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-40/*"}]}, {"config_name": "CC-MAIN-2023-23", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-23/*"}]}, {"config_name": "CC-MAIN-2023-14", 
"data_files": [{"split": "train", "path": "data/CC-MAIN-2023-14/*"}]}, {"config_name": "CC-MAIN-2023-06", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-06/*"}]}, {"config_name": "CC-MAIN-2022-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-49/*"}]}, {"config_name": "CC-MAIN-2022-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-40/*"}]}, {"config_name": "CC-MAIN-2022-33", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-33/*"}]}, {"config_name": "CC-MAIN-2022-27", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-27/*"}]}, {"config_name": "CC-MAIN-2022-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-21/*"}]}, {"config_name": "CC-MAIN-2022-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-05/*"}]}, {"config_name": "CC-MAIN-2021-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-49/*"}]}, {"config_name": "CC-MAIN-2021-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-43/*"}]}, {"config_name": "CC-MAIN-2021-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-39/*"}]}, {"config_name": "CC-MAIN-2021-31", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-31/*"}]}, {"config_name": "CC-MAIN-2021-25", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-25/*"}]}, {"config_name": "CC-MAIN-2021-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-21/*"}]}, {"config_name": "CC-MAIN-2021-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-17/*"}]}, {"config_name": "CC-MAIN-2021-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-10/*"}]}, {"config_name": "CC-MAIN-2021-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-04/*"}]}, {"config_name": "CC-MAIN-2020-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-50/*"}]}, {"config_name": "CC-MAIN-2020-45", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-45/*"}]}, {"config_name": 
"CC-MAIN-2020-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-40/*"}]}, {"config_name": "CC-MAIN-2020-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-34/*"}]}, {"config_name": "CC-MAIN-2020-29", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-29/*"}]}, {"config_name": "CC-MAIN-2020-24", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-24/*"}]}, {"config_name": "CC-MAIN-2020-16", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-16/*"}]}, {"config_name": "CC-MAIN-2020-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-10/*"}]}, {"config_name": "CC-MAIN-2020-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-05/*"}]}, {"config_name": "CC-MAIN-2019-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-51/*"}]}, {"config_name": "CC-MAIN-2019-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-47/*"}]}, {"config_name": "CC-MAIN-2019-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-43/*"}]}, {"config_name": "CC-MAIN-2019-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-39/*"}]}, {"config_name": "CC-MAIN-2019-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-35/*"}]}, {"config_name": "CC-MAIN-2019-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-30/*"}]}, {"config_name": "CC-MAIN-2019-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-26/*"}]}, {"config_name": "CC-MAIN-2019-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-22/*"}]}, {"config_name": "CC-MAIN-2019-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-18/*"}]}, {"config_name": "CC-MAIN-2019-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-13/*"}]}, {"config_name": "CC-MAIN-2019-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-09/*"}]}, {"config_name": "CC-MAIN-2019-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-04/*"}]}, 
{"config_name": "CC-MAIN-2018-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-51/*"}]}, {"config_name": "CC-MAIN-2018-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-47/*"}]}, {"config_name": "CC-MAIN-2018-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-43/*"}]}, {"config_name": "CC-MAIN-2018-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-39/*"}]}, {"config_name": "CC-MAIN-2018-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-34/*"}]}, {"config_name": "CC-MAIN-2018-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-30/*"}]}, {"config_name": "CC-MAIN-2018-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-26/*"}]}, {"config_name": "CC-MAIN-2018-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-22/*"}]}, {"config_name": "CC-MAIN-2018-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-17/*"}]}, {"config_name": "CC-MAIN-2018-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-13/*"}]}, {"config_name": "CC-MAIN-2018-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-09/*"}]}, {"config_name": "CC-MAIN-2018-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-05/*"}]}, {"config_name": "CC-MAIN-2017-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-51/*"}]}, {"config_name": "CC-MAIN-2017-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-47/*"}]}, {"config_name": "CC-MAIN-2017-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-43/*"}]}, {"config_name": "CC-MAIN-2017-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-39/*"}]}, {"config_name": "CC-MAIN-2017-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-34/*"}]}, {"config_name": "CC-MAIN-2017-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-30/*"}]}, {"config_name": "CC-MAIN-2017-26", "data_files": [{"split": "train", "path": 
"data/CC-MAIN-2017-26/*"}]}, {"config_name": "CC-MAIN-2017-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-22/*"}]}, {"config_name": "CC-MAIN-2017-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-17/*"}]}, {"config_name": "CC-MAIN-2017-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-13/*"}]}, {"config_name": "CC-MAIN-2017-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-09/*"}]}, {"config_name": "CC-MAIN-2017-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-04/*"}]}, {"config_name": "CC-MAIN-2016-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-50/*"}]}, {"config_name": "CC-MAIN-2016-44", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-44/*"}]}, {"config_name": "CC-MAIN-2016-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-40/*"}]}, {"config_name": "CC-MAIN-2016-36", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-36/*"}]}, {"config_name": "CC-MAIN-2016-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-30/*"}]}, {"config_name": "CC-MAIN-2016-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-26/*"}]}, {"config_name": "CC-MAIN-2016-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-22/*"}]}, {"config_name": "CC-MAIN-2016-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-18/*"}]}, {"config_name": "CC-MAIN-2016-07", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-07/*"}]}, {"config_name": "CC-MAIN-2015-48", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-48/*"}]}, {"config_name": "CC-MAIN-2015-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-40/*"}]}, {"config_name": "CC-MAIN-2015-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-35/*"}]}, {"config_name": "CC-MAIN-2015-32", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-32/*"}]}, {"config_name": "CC-MAIN-2015-27", "data_files": [{"split": 
"train", "path": "data/CC-MAIN-2015-27/*"}]}, {"config_name": "CC-MAIN-2015-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-22/*"}]}, {"config_name": "CC-MAIN-2015-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-18/*"}]}, {"config_name": "CC-MAIN-2015-14", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-14/*"}]}, {"config_name": "CC-MAIN-2015-11", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-11/*"}]}, {"config_name": "CC-MAIN-2015-06", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-06/*"}]}, {"config_name": "CC-MAIN-2014-52", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-52/*"}]}, {"config_name": "CC-MAIN-2014-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-49/*"}]}, {"config_name": "CC-MAIN-2014-42", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-42/*"}]}, {"config_name": "CC-MAIN-2014-41", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-41/*"}]}, {"config_name": "CC-MAIN-2014-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-35/*"}]}, {"config_name": "CC-MAIN-2014-23", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-23/*"}]}, {"config_name": "CC-MAIN-2014-15", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-15/*"}]}, {"config_name": "CC-MAIN-2014-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-10/*"}]}, {"config_name": "CC-MAIN-2013-48", "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-48/*"}]}, {"config_name": "CC-MAIN-2013-20", "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-20/*"}]}]} | false | null | 2025-01-31T14:10:44 | 2,106 | 23 | false | 0f039043b23fe1d4eed300b504aa4b4a68f1c7ba |
🍷 FineWeb
15 trillion tokens of the finest data the 🌐 web has to offer
What is it?
The 🍷 FineWeb dataset consists of more than 15T tokens of cleaned and deduplicated English web data from CommonCrawl. The data processing pipeline is optimized for LLM performance and ran on the 🏭 datatrove library, our large-scale data processing library.
🍷 FineWeb was originally meant to be a fully open replication of 🦅 RefinedWeb, with a release of the full dataset under… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceFW/fineweb. | 193,020 | 2,418,018 | [
"task_categories:text-generation",
"language:en",
"license:odc-by",
"size_categories:10B<n<100B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2306.01116",
"arxiv:2109.07445",
"arxiv:2406.17557",
"doi:10.57967/hf/2493",
"region:us"
] | 2024-04-18T14:33:13 | null | null |
67f9a5dde1bb509430e6af04 | openai/graphwalks | openai | {"license": "mit"} | false | null | 2025-04-14T17:22:42 | 21 | 21 | false | 6fe75ac25ccf55853294fe7995332d4f59d91bfb |
GraphWalks: a multi-hop reasoning long-context benchmark
In GraphWalks, the model is given a graph represented by its edge list and asked to perform an operation.
Example prompt:
You will be given a graph as a list of directed edges. All nodes are at least degree 1.
You will also get a description of an operation to perform on the graph.
Your job is to execute the operation on the graph and return the set of nodes that the operation results in.
If asked for a breadth-first search… See the full description on the dataset page: https://huggingface.co/datasets/openai/graphwalks. | 0 | 0 | [
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-04-11T23:29:33 | null | null |
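As a concrete reference for the operation the prompt describes, a depth-limited breadth-first search over a directed edge list can be sketched as below. Returning the nodes first reached at exactly the given depth is one plausible reading of the task; the benchmark's exact operation wording varies per example.

```python
from collections import defaultdict, deque

# Sketch: depth-limited BFS over a graph given as a directed edge list,
# returning the set of nodes first reached at exactly the target depth.

def bfs_nodes(edges: list[tuple[str, str]], start: str, depth: int) -> set[str]:
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    seen = {start}
    frontier = deque([(start, 0)])
    result = set()
    while frontier:
        node, d = frontier.popleft()
        if d == depth:
            result.add(node)
            continue
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return result

edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
reachable = bfs_nodes(edges, "a", 2)  # {"d"} for this graph
```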
63990f21cc50af73d29ecfa3 | fka/awesome-chatgpt-prompts | fka | {"license": "cc0-1.0", "tags": ["ChatGPT"], "task_categories": ["question-answering"], "size_categories": ["100K<n<1M"]} | false | null | 2025-01-06T00:02:53 | 7,687 | 19 | false | 68ba7694e23014788dcc8ab5afe613824f45a05c | 🧠 Awesome ChatGPT Prompts [CSV dataset]
This is a Dataset Repository of Awesome ChatGPT Prompts
View All Prompts on GitHub
License
CC-0
| 10,486 | 143,797 | [
"task_categories:question-answering",
"license:cc0-1.0",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"ChatGPT"
] | 2022-12-13T23:47:45 | null | null |
676f70846bf205795346d2be | FreedomIntelligence/medical-o1-reasoning-SFT | FreedomIntelligence | {"license": "apache-2.0", "task_categories": ["question-answering", "text-generation"], "language": ["en", "zh"], "tags": ["medical", "biology"], "configs": [{"config_name": "en", "data_files": "medical_o1_sft.json"}, {"config_name": "zh", "data_files": "medical_o1_sft_Chinese.json"}]} | false | null | 2025-02-22T05:15:38 | 637 | 18 | false | 61536c1d80b2c799df6800cc583897b77d2c86d2 |
News
[2025/02/22] We released the distilled dataset from Deepseek-R1 based on medical verifiable problems. You can use it to initialize your models with the reasoning chain from Deepseek-R1.
[2024/12/25] We open-sourced the medical reasoning dataset for SFT, built on medical verifiable problems and an LLM verifier.
Introduction
This dataset is used to fine-tune HuatuoGPT-o1, a medical LLM designed for advanced medical reasoning. This dataset is constructed using GPT-4o… See the full description on the dataset page: https://huggingface.co/datasets/FreedomIntelligence/medical-o1-reasoning-SFT. | 20,410 | 55,493 | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"language:zh",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2412.18925",
"region:us",
"medical",
"biology"
] | 2024-12-28T03:29:08 | null | null |
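Since the card describes SFT data built from questions, reasoning chains, and answers, one record might be assembled into a training string as below. The field names (`Question`, `Complex_CoT`, `Response`) are assumptions about the JSON layout; adjust them to match the actual files.

```python
# Sketch: assembling one SFT training string from a record.
# Field names are assumed, not taken from the dataset card.

def format_example(rec: dict) -> str:
    return (
        f"## Question\n{rec['Question']}\n\n"
        f"## Reasoning\n{rec['Complex_CoT']}\n\n"
        f"## Answer\n{rec['Response']}"
    )

rec = {
    "Question": "Which vitamin deficiency causes scurvy?",
    "Complex_CoT": "Scurvy follows prolonged lack of ascorbic acid.",
    "Response": "Vitamin C.",
}
text = format_example(rec)
```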
679dee7e52390b33e5970da6 | future-technologies/Universal-Transformers-Dataset | future-technologies | {"task_categories": ["text-classification", "token-classification", "table-question-answering", "question-answering", "zero-shot-classification", "translation", "summarization", "feature-extraction", "text-generation", "text2text-generation", "fill-mask", "sentence-similarity", "text-to-speech", "text-to-audio", "automatic-speech-recognition", "audio-to-audio", "audio-classification", "voice-activity-detection", "depth-estimation", "image-classification", "object-detection", "image-segmentation", "text-to-image", "image-to-text", "image-to-image", "image-to-video", "unconditional-image-generation", "video-classification", "reinforcement-learning", "robotics", "tabular-classification", "tabular-regression", "tabular-to-text", "table-to-text", "multiple-choice", "text-retrieval", "time-series-forecasting", "text-to-video", "visual-question-answering", "zero-shot-image-classification", "graph-ml", "mask-generation", "zero-shot-object-detection", "text-to-3d", "image-to-3d", "image-feature-extraction", "video-text-to-text"], "language": ["ab", "ace", "ady", "af", "alt", "am", "ami", "an", "ang", "anp", "ar", "arc", "ary", "arz", "as", "ast", "atj", "av", "avk", "awa", "ay", "az", "azb", "ba", "ban", "bar", "bbc", "bcl", "be", "bg", "bh", "bi", "bjn", "blk", "bm", "bn", "bo", "bpy", "br", "bs", "bug", "bxr", "ca", "cbk", "cdo", "ce", "ceb", "ch", "chr", "chy", "ckb", "co", "cr", "crh", "cs", "csb", "cu", "cv", "cy", "da", "dag", "de", "dga", "din", "diq", "dsb", "dty", "dv", "dz", "ee", "el", "eml", "en", "eo", "es", "et", "eu", "ext", "fa", "fat", "ff", "fi", "fj", "fo", "fon", "fr", "frp", "frr", "fur", "fy", "ga", "gag", "gan", "gcr", "gd", "gl", "glk", "gn", "gom", "gor", "got", "gpe", "gsw", "gu", "guc", "gur", "guw", "gv", "ha", "hak", "haw", "hbs", "he", "hi", "hif", "hr", "hsb", "ht", "hu", "hy", "hyw", "ia", "id", "ie", "ig", "ik", "ilo", "inh", "io", "is", 
"it", "iu", "ja", "jam", "jbo", "jv", "ka", "kaa", "kab", "kbd", "kbp", "kcg", "kg", "ki", "kk", "kl", "km", "kn", "ko", "koi", "krc", "ks", "ksh", "ku", "kv", "kw", "ky", "la", "lad", "lb", "lbe", "lez", "lfn", "lg", "li", "lij", "lld", "lmo", "ln", "lo", "lt", "ltg", "lv", "lzh", "mad", "mai", "map", "mdf", "mg", "mhr", "mi", "min", "mk", "ml", "mn", "mni", "mnw", "mr", "mrj", "ms", "mt", "mwl", "my", "myv", "mzn", "nah", "nan", "nap", "nds", "ne", "new", "nia", "nl", "nn", "no", "nov", "nqo", "nrf", "nso", "nv", "ny", "oc", "olo", "om", "or", "os", "pa", "pag", "pam", "pap", "pcd", "pcm", "pdc", "pfl", "pi", "pih", "pl", "pms", "pnb", "pnt", "ps", "pt", "pwn", "qu", "rm", "rmy", "rn", "ro", "ru", "rue", "rup", "rw", "sa", "sah", "sat", "sc", "scn", "sco", "sd", "se", "sg", "sgs", "shi", "shn", "si", "sk", "skr", "sl", "sm", "smn", "sn", "so", "sq", "sr", "srn", "ss", "st", "stq", "su", "sv", "sw", "szl", "szy", "ta", "tay", "tcy", "te", "tet", "tg", "th", "ti", "tk", "tl", "tly", "tn", "to", "tpi", "tr", "trv", "ts", "tt", "tum", "tw", "ty", "tyv", "udm", "ug", "uk", "ur", "uz", "ve", "vec", "vep", "vi", "vls", "vo", "vro", "wa", "war", "wo", "wuu", "xal", "xh", "xmf", "yi", "yo", "yue", "za", "zea", "zgh", "zh", "zu"], "tags": ["tabular", "video", "image", "audio", "text-prompts", "text", "universal", "transformer", "database", "massive-data", "ai", "training", "huggingface", "ai", "artificial-intelligence", "machine-learning", "deep-learning", "transformers", "neural-networks", "text", "image", "audio", "video", "multimodal", "structured-data", "tabular-data", "nlp", "computer-vision", "speech-recognition", "reinforcement-learning", "time-series", "large-language-models", "generative-ai", "huggingface-dataset", "huggingface", "pytorch", "tensorflow", "jax", "pretraining", "finetuning", "self-supervised-learning", "few-shot-learning", "zero-shot-learning", "unsupervised-learning", "meta-learning", "diffusion-models"], "size_categories": ["n>1T"], "pretty_name": 
"Universal Transformers: Multilingual & Scalable AI Dataset"} | false | null | 2025-04-10T05:31:22 | 36 | 18 | false | 1413c5f98f3e6abef3a5c92b45f43ae9cd5c9e0a |
Universal Transformer Dataset
💠 A Message from Ujjawal Tyagi (Founder & CEO)
"This is more than a dataset... it’s the start of a new world..."
I’m Ujjawal Tyagi, Founder of Lambda Go & GoX AI Platform — proudly born in the land of wisdom, resilience, and rising technology... India 🇮🇳
What we’ve built here isn’t just numbers, files, or data points... it’s purpose. It’s a movement. It’s for every developer, researcher, and dreamer who wants to… See the full description on the dataset page: https://huggingface.co/datasets/future-technologies/Universal-Transformers-Dataset. | 1,834 | 1,889 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:translation",
"task_categories:summarization",
"task_categories:feature-extraction",
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:fill-mask",
"task_categories:sentence-similarity",
"task_categories:text-to-speech",
"task_categories:text-to-audio",
"task_categories:automatic-speech-recognition",
"task_categories:audio-to-audio",
"task_categories:audio-classification",
"task_categories:voice-activity-detection",
"task_categories:depth-estimation",
"task_categories:image-classification",
"task_categories:object-detection",
"task_categories:image-segmentation",
"task_categories:text-to-image",
"task_categories:image-to-text",
"task_categories:image-to-image",
"task_categories:image-to-video",
"task_categories:unconditional-image-generation",
"task_categories:video-classification",
"task_categories:reinforcement-learning",
"task_categories:robotics",
"task_categories:tabular-classification",
"task_categories:tabular-regression",
"task_categories:tabular-to-text",
"task_categories:table-to-text",
"task_categories:multiple-choice",
"task_categories:text-retrieval",
"task_categories:time-series-forecasting",
"task_categories:text-to-video",
"task_categories:visual-question-answering",
"task_categories:zero-shot-image-classification",
"task_categories:graph-ml",
"task_categories:mask-generation",
"task_categories:zero-shot-object-detection",
"task_categories:text-to-3d",
"task_categories:image-to-3d",
"task_categories:image-feature-extraction",
"task_categories:video-text-to-text",
"language:ab",
"language:ace",
"language:ady",
"language:af",
"language:alt",
"language:am",
"language:ami",
"language:an",
"language:ang",
"language:anp",
"language:ar",
"language:arc",
"language:ary",
"language:arz",
"language:as",
"language:ast",
"language:atj",
"language:av",
"language:avk",
"language:awa",
"language:ay",
"language:az",
"language:azb",
"language:ba",
"language:ban",
"language:bar",
"language:bbc",
"language:bcl",
"language:be",
"language:bg",
"language:bh",
"language:bi",
"language:bjn",
"language:blk",
"language:bm",
"language:bn",
"language:bo",
"language:bpy",
"language:br",
"language:bs",
"language:bug",
"language:bxr",
"language:ca",
"language:cbk",
"language:cdo",
"language:ce",
"language:ceb",
"language:ch",
"language:chr",
"language:chy",
"language:ckb",
"language:co",
"language:cr",
"language:crh",
"language:cs",
"language:csb",
"language:cu",
"language:cv",
"language:cy",
"language:da",
"language:dag",
"language:de",
"language:dga",
"language:din",
"language:diq",
"language:dsb",
"language:dty",
"language:dv",
"language:dz",
"language:ee",
"language:el",
"language:eml",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:ext",
"language:fa",
"language:fat",
"language:ff",
"language:fi",
"language:fj",
"language:fo",
"language:fon",
"language:fr",
"language:frp",
"language:frr",
"language:fur",
"language:fy",
"language:ga",
"language:gag",
"language:gan",
"language:gcr",
"language:gd",
"language:gl",
"language:glk",
"language:gn",
"language:gom",
"language:gor",
"language:got",
"language:gpe",
"language:gsw",
"language:gu",
"language:guc",
"language:gur",
"language:guw",
"language:gv",
"language:ha",
"language:hak",
"language:haw",
"language:hbs",
"language:he",
"language:hi",
"language:hif",
"language:hr",
"language:hsb",
"language:ht",
"language:hu",
"language:hy",
"language:hyw",
"language:ia",
"language:id",
"language:ie",
"language:ig",
"language:ik",
"language:ilo",
"language:inh",
"language:io",
"language:is",
"language:it",
"language:iu",
"language:ja",
"language:jam",
"language:jbo",
"language:jv",
"language:ka",
"language:kaa",
"language:kab",
"language:kbd",
"language:kbp",
"language:kcg",
"language:kg",
"language:ki",
"language:kk",
"language:kl",
"language:km",
"language:kn",
"language:ko",
"language:koi",
"language:krc",
"language:ks",
"language:ksh",
"language:ku",
"language:kv",
"language:kw",
"language:ky",
"language:la",
"language:lad",
"language:lb",
"language:lbe",
"language:lez",
"language:lfn",
"language:lg",
"language:li",
"language:lij",
"language:lld",
"language:lmo",
"language:ln",
"language:lo",
"language:lt",
"language:ltg",
"language:lv",
"language:lzh",
"language:mad",
"language:mai",
"language:map",
"language:mdf",
"language:mg",
"language:mhr",
"language:mi",
"language:min",
"language:mk",
"language:ml",
"language:mn",
"language:mni",
"language:mnw",
"language:mr",
"language:mrj",
"language:ms",
"language:mt",
"language:mwl",
"language:my",
"language:myv",
"language:mzn",
"language:nah",
"language:nan",
"language:nap",
"language:nds",
"language:ne",
"language:new",
"language:nia",
"language:nl",
"language:nn",
"language:no",
"language:nov",
"language:nqo",
"language:nrf",
"language:nso",
"language:nv",
"language:ny",
"language:oc",
"language:olo",
"language:om",
"language:or",
"language:os",
"language:pa",
"language:pag",
"language:pam",
"language:pap",
"language:pcd",
"language:pcm",
"language:pdc",
"language:pfl",
"language:pi",
"language:pih",
"language:pl",
"language:pms",
"language:pnb",
"language:pnt",
"language:ps",
"language:pt",
"language:pwn",
"language:qu",
"language:rm",
"language:rmy",
"language:rn",
"language:ro",
"language:ru",
"language:rue",
"language:rup",
"language:rw",
"language:sa",
"language:sah",
"language:sat",
"language:sc",
"language:scn",
"language:sco",
"language:sd",
"language:se",
"language:sg",
"language:sgs",
"language:shi",
"language:shn",
"language:si",
"language:sk",
"language:skr",
"language:sl",
"language:sm",
"language:smn",
"language:sn",
"language:so",
"language:sq",
"language:sr",
"language:srn",
"language:ss",
"language:st",
"language:stq",
"language:su",
"language:sv",
"language:sw",
"language:szl",
"language:szy",
"language:ta",
"language:tay",
"language:tcy",
"language:te",
"language:tet",
"language:tg",
"language:th",
"language:ti",
"language:tk",
"language:tl",
"language:tly",
"language:tn",
"language:to",
"language:tpi",
"language:tr",
"language:trv",
"language:ts",
"language:tt",
"language:tum",
"language:tw",
"language:ty",
"language:tyv",
"language:udm",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:ve",
"language:vec",
"language:vep",
"language:vi",
"language:vls",
"language:vo",
"language:vro",
"language:wa",
"language:war",
"language:wo",
"language:wuu",
"language:xal",
"language:xh",
"language:xmf",
"language:yi",
"language:yo",
"language:yue",
"language:za",
"language:zea",
"language:zgh",
"language:zh",
"language:zu",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"modality:tabular",
"modality:video",
"modality:image",
"modality:audio",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"tabular",
"video",
"image",
"audio",
"text-prompts",
"text",
"universal",
"transformer",
"database",
"massive-data",
"ai",
"training",
"huggingface",
"artificial-intelligence",
"machine-learning",
"deep-learning",
"transformers",
"neural-networks",
"multimodal",
"structured-data",
"tabular-data",
"nlp",
"computer-vision",
"speech-recognition",
"reinforcement-learning",
"time-series",
"large-language-models",
"generative-ai",
"huggingface-dataset",
"pytorch",
"tensorflow",
"jax",
"pretraining",
"finetuning",
"self-supervised-learning",
"few-shot-learning",
"zero-shot-learning",
"unsupervised-learning",
"meta-learning",
"diffusion-models"
] | 2025-02-01T09:50:54 | null | null |
625552d2b339bb03abe3432d | openai/gsm8k | openai | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "paperswithcode_id": "gsm8k", "pretty_name": "Grade School Math 8K", "tags": ["math-word-problems"], "dataset_info": [{"config_name": "main", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3963202, "num_examples": 7473}, {"name": "test", "num_bytes": 713732, "num_examples": 1319}], "download_size": 2725633, "dataset_size": 4676934}, {"config_name": "socratic", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5198108, "num_examples": 7473}, {"name": "test", "num_bytes": 936859, "num_examples": 1319}], "download_size": 3164254, "dataset_size": 6134967}], "configs": [{"config_name": "main", "data_files": [{"split": "train", "path": "main/train-*"}, {"split": "test", "path": "main/test-*"}]}, {"config_name": "socratic", "data_files": [{"split": "train", "path": "socratic/train-*"}, {"split": "test", "path": "socratic/test-*"}]}]} | false | null | 2024-01-04T12:05:15 | 693 | 17 | false | e53f048856ff4f594e959d75785d2c2d37b678ee |
Dataset Card for GSM8K
Dataset Summary
GSM8K (Grade School Math 8K) is a dataset of 8.5K high-quality, linguistically diverse grade school math word problems. The dataset was created to support the task of question answering on basic mathematical problems that require multi-step reasoning.
These problems take between 2 and 8 steps to solve.
Solutions primarily involve performing a sequence of elementary calculations using basic arithmetic operations (+ − ×÷) to reach the… See the full description on the dataset page: https://huggingface.co/datasets/openai/gsm8k. | 376,445 | 4,450,266 | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2110.14168",
"region:us",
"math-word-problems"
] | 2022-04-12T10:22:10 | gsm8k | null |
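In the released GSM8K data, each `answer` string ends with a line of the form `#### <final answer>`, and intermediate steps carry `<<...>>` calculator annotations; a common preprocessing step extracts that final value for exact-match scoring. A minimal sketch (the example record is illustrative, not a verbatim row):

```python
import re

def extract_final_answer(answer: str) -> str:
    """GSM8K answer fields end with a line like '#### 72';
    return the value after the #### marker, with commas stripped."""
    match = re.search(r"####\s*(.+)", answer)
    if match is None:
        raise ValueError("no #### marker found")
    return match.group(1).strip().replace(",", "")

# Record in the style of the 'main' config (illustrative, not verbatim):
example = (
    "Natalia sold 48/2 = <<48/2=24>>24 clips in May.\n"
    "Natalia sold 48+24 = <<48+24=72>>72 clips altogether.\n"
    "#### 72"
)
print(extract_final_answer(example))  # → 72
```

The comma stripping matters because larger answers are sometimes written with thousands separators.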
67f36644e11bd4b05579ee18 | nisten/battlefield-medic-sharegpt | nisten | {"license": "mit"} | false | null | 2025-04-08T19:45:29 | 18 | 16 | false | b6c3a005a6fa14567cbf3f3556e8080b5f9622d0 |
🏥⚔️ Synthetic Battlefield Medical Conversations
For the multilingual version (non-sharegpt format) that includes the title columns, go here: https://huggingface.co/datasets/nisten/battlefield-medic-multilingual
Over 3000 conversations incorporating 2000+ human diseases and over 1000 battlefield injuries from various scenarios
Author: Nisten Tahiraj
License: MIT
This dataset consists of highly detailed synthetic conversations… See the full description on the dataset page: https://huggingface.co/datasets/nisten/battlefield-medic-sharegpt. | 289 | 289 | [
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-04-07T05:44:36 | null | null |
67c0cda5c0b7a236a5f070e3 | glaiveai/reasoning-v1-20m | glaiveai | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 177249016911, "num_examples": 22199375}], "download_size": 87247205094, "dataset_size": 177249016911}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "license": "apache-2.0", "task_categories": ["text-generation"], "language": ["en"], "size_categories": ["10M<n<100M"]} | false | null | 2025-03-19T13:21:37 | 188 | 15 | false | da6bb3d0ff8fd8ea5abacee8519762ca6aaf367e |
We are excited to release a synthetic reasoning dataset containing 22mil+ general reasoning questions and responses generated using deepseek-ai/DeepSeek-R1-Distill-Llama-70B. While there have been multiple efforts to build open reasoning datasets for math and code tasks, we noticed a lack of large datasets containing reasoning traces for diverse non code/math topics like social and natural sciences, education, creative writing and general conversations, which is why we decided to release this… See the full description on the dataset page: https://huggingface.co/datasets/glaiveai/reasoning-v1-20m. | 12,780 | 12,904 | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-02-27T20:40:05 | null | null |
67d1f960012f0ef1ab080a8b | vevotx/Tahoe-100M | vevotx | {"license": "cc0-1.0", "tags": ["biology", "single-cell", "RNA", "chemistry"], "size_categories": ["100M<n<1B"], "configs": [{"config_name": "expression_data", "data_files": "data/train-*", "default": true}, {"config_name": "sample_metadata", "data_files": "metadata/sample_metadata.parquet"}, {"config_name": "gene_metadata", "data_files": "metadata/gene_metadata.parquet"}, {"config_name": "drug_metadata", "data_files": "metadata/drug_metadata.parquet"}, {"config_name": "cell_line_metadata", "data_files": "metadata/cell_line_metadata.parquet"}, {"config_name": "obs_metadata", "data_files": "metadata/obs_metadata.parquet"}], "dataset_info": {"features": [{"name": "genes", "sequence": "int64"}, {"name": "expressions", "sequence": "float32"}, {"name": "drug", "dtype": "string"}, {"name": "sample", "dtype": "string"}, {"name": "BARCODE_SUB_LIB_ID", "dtype": "string"}, {"name": "cell_line_id", "dtype": "string"}, {"name": "moa-fine", "dtype": "string"}, {"name": "canonical_smiles", "dtype": "string"}, {"name": "pubchem_cid", "dtype": "string"}, {"name": "plate", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1693653078843, "num_examples": 95624334}], "download_size": 337644770670, "dataset_size": 1693653078843}} | false | null | 2025-04-08T17:51:25 | 18 | 15 | false | 91953459e339ed9f27eb2ed4b6aa7719b2de3c66 |
Tahoe-100M
Tahoe-100M is a giga-scale single-cell perturbation atlas consisting of over 100 million transcriptomic profiles from
50 cancer cell lines exposed to 1,100 small-molecule perturbations. Generated using Vevo Therapeutics'
Mosaic high-throughput platform, Tahoe-100M enables deep, context-aware exploration of gene function, cellular states, and drug responses at unprecedented scale and resolution.
This dataset is designed to power the development of next-generation AI… See the full description on the dataset page: https://huggingface.co/datasets/vevotx/Tahoe-100M. | 4,847 | 4,847 | [
"license:cc0-1.0",
"size_categories:100M<n<1B",
"format:parquet",
"modality:tabular",
"modality:text",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"biology",
"single-cell",
"RNA",
"chemistry"
] | 2025-03-12T21:15:12 | null | null |
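The feature schema above stores each cell as two parallel sequences, `genes` (int64 IDs) and `expressions` (float32 values), which reads as a sparse encoding of the transcriptome. A minimal sketch that turns one such record into a gene-to-expression mapping (the parallel-array interpretation is inferred from the schema, and the toy record's values are hypothetical):

```python
def record_to_profile(record: dict) -> dict:
    """Map gene IDs to expression values for one cell, assuming
    'genes' and 'expressions' are parallel arrays of equal length."""
    genes = record["genes"]
    expressions = record["expressions"]
    if len(genes) != len(expressions):
        raise ValueError("parallel arrays must have equal length")
    return dict(zip(genes, expressions))

# Hypothetical record in the shape of the schema above:
toy = {
    "genes": [101, 205, 993],
    "expressions": [1.0, 0.5, 3.0],
    "drug": "aspirin",
    "cell_line_id": "CL-0001",
}
profile = record_to_profile(toy)
print(profile[205])  # → 0.5
```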
67f65eecc6d6baefc4b193a8 | Rapidata/2k-ranked-images-open-image-preferences-v1 | Rapidata | {"license": "apache-2.0", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "elo", "dtype": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}, {"name": "category", "dtype": "string"}, {"name": "subcategory", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 298637443.176, "num_examples": 1999}], "download_size": 290047395, "dataset_size": 298637443.176}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["t2i", "preference", "ranking", "rl", "image"], "pretty_name": "2k Ranked Images"} | false | null | 2025-04-10T14:35:23 | 15 | 15 | false | a48acd2f9d8470d8e7388c2efa0cf87ebf09c3bf |
2k Ranked Images
This dataset contains roughly two thousand images ranked from most preferred to least preferred based on human feedback on pairwise comparisons (>25k responses).
The generated images, which are a sample from the open-image-preferences-v1 dataset
from the team @data-is-better-together, are rated purely based on aesthetic preference, disregarding the prompt used for generation.
We provide the categories of the original dataset for easy filtering.
This is a new… See the full description on the dataset page: https://huggingface.co/datasets/Rapidata/2k-ranked-images-open-image-preferences-v1. | 50 | 50 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"t2i",
"preference",
"ranking",
"rl",
"image"
] | 2025-04-09T11:50:04 | null | null |
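The `elo` column above comes from aggregating pairwise preference votes, but the card does not specify the exact rating procedure. The following is therefore a generic Elo update sketch, not the dataset's actual pipeline (the K-factor of 32 and the 1000-point starting ratings are assumptions), showing how one "A preferred over B" vote moves two image ratings:

```python
def elo_update(r_winner: float, r_loser: float, k: float = 32.0) -> tuple:
    """One standard Elo update after a pairwise preference vote."""
    expected_win = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    delta = k * (1.0 - expected_win)
    return r_winner + delta, r_loser - delta

a, b = 1000.0, 1000.0
a, b = elo_update(a, b)    # image A preferred over image B
print(round(a), round(b))  # → 1016 984
```

Note that the update is zero-sum: whatever the winner gains, the loser gives up.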
67ddbf33273db7cb5c4f3f32 | UCSC-VLAA/MedReason | UCSC-VLAA | {"license": "apache-2.0", "tags": ["reasoning-datasets-competition", "reasoning-LLMs"]} | false | null | 2025-04-10T20:17:26 | 15 | 14 | false | a4bbf707e122021e74b098f542f2db97a89a9ead |
MedReason: Eliciting Factual Medical Reasoning Steps in LLMs via Knowledge Graphs
📃 Paper |🤗 MedReason-8B | 📚 MedReason Data
⚡Introduction
MedReason is a large-scale high-quality medical reasoning dataset designed to enable faithful and explainable medical problem-solving in large language models (LLMs).
We utilize a structured medical knowledge graph (KG) to convert clinical QA pairs into logical chains of reasoning, or “thinking paths”.
Our pipeline generates… See the full description on the dataset page: https://huggingface.co/datasets/UCSC-VLAA/MedReason. | 436 | 436 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2504.00993",
"region:us",
"reasoning-datasets-competition",
"reasoning-LLMs"
] | 2025-03-21T19:34:11 | null | null |
67e871a03c7e07671550c8ad | m-a-p/COIG-P | m-a-p | null | false | null | 2025-04-09T09:02:31 | 14 | 14 | false | be2b1e8308c3e92cbf84685dbd98ce1cd06e34ce | This repository contains the COIG-P dataset used for the paper COIG-P: A High-Quality and Large-Scale Chinese Preference Dataset for Alignment with Human Values. | 221 | 233 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2504.05535",
"region:us"
] | 2025-03-29T22:18:08 | null | null |
67aa021ced8d8663d42505cc | open-r1/OpenR1-Math-220k | open-r1 | {"license": "apache-2.0", "language": ["en"], "configs": [{"config_name": "all", "data_files": [{"split": "train", "path": "all/train-*"}]}, {"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}, {"config_name": "extended", "data_files": [{"split": "train", "path": "extended/train-*"}]}], "dataset_info": [{"config_name": "all", "features": [{"name": "problem", "dtype": "string"}, {"name": "solution", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "problem_type", "dtype": "string"}, {"name": "question_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "uuid", "dtype": "string"}, {"name": "is_reasoning_complete", "sequence": "bool"}, {"name": "generations", "sequence": "string"}, {"name": "correctness_math_verify", "sequence": "bool"}, {"name": "correctness_llama", "sequence": "bool"}, {"name": "finish_reasons", "sequence": "string"}, {"name": "correctness_count", "dtype": "int64"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 9734110026, "num_examples": 225129}], "download_size": 4221672067, "dataset_size": 9734110026}, {"config_name": "default", "features": [{"name": "problem", "dtype": "string"}, {"name": "solution", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "problem_type", "dtype": "string"}, {"name": "question_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "uuid", "dtype": "string"}, {"name": "is_reasoning_complete", "sequence": "bool"}, {"name": "generations", "sequence": "string"}, {"name": "correctness_math_verify", "sequence": "bool"}, {"name": "correctness_llama", "sequence": "bool"}, {"name": "finish_reasons", "sequence": "string"}, {"name": "correctness_count", "dtype": "int64"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": 
"role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 4964543659, "num_examples": 93733}], "download_size": 2149897914, "dataset_size": 4964543659}, {"config_name": "extended", "features": [{"name": "problem", "dtype": "string"}, {"name": "solution", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "problem_type", "dtype": "string"}, {"name": "question_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "uuid", "dtype": "string"}, {"name": "is_reasoning_complete", "sequence": "bool"}, {"name": "generations", "sequence": "string"}, {"name": "correctness_math_verify", "sequence": "bool"}, {"name": "correctness_llama", "sequence": "bool"}, {"name": "finish_reasons", "sequence": "string"}, {"name": "correctness_count", "dtype": "int64"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 4769566550, "num_examples": 131396}], "download_size": 2063936457, "dataset_size": 4769566550}]} | false | null | 2025-02-18T11:45:27 | 550 | 13 | false | e4e141ec9dea9f8326f4d347be56105859b2bd68 |
OpenR1-Math-220k
Dataset description
OpenR1-Math-220k is a large-scale dataset for mathematical reasoning. It consists of 220k math problems with two to four reasoning traces generated by DeepSeek R1 for problems from NuminaMath 1.5.
The traces were verified using Math Verify for most samples and Llama-3.3-70B-Instruct as a judge for 12% of the samples, and each problem contains at least one reasoning trace with a correct answer.
The dataset consists of two splits:… See the full description on the dataset page: https://huggingface.co/datasets/open-r1/OpenR1-Math-220k. | 40,726 | 95,086 | [
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-02-10T13:41:48 | null | null |
67b32145bac2756ce9a4a0fe | Congliu/Chinese-DeepSeek-R1-Distill-data-110k | Congliu | {"license": "apache-2.0", "language": ["zh"], "size_categories": ["100K<n<1M"], "task_categories": ["text-generation", "text2text-generation", "question-answering"]} | false | null | 2025-02-21T02:18:08 | 625 | 13 | false | 8520b649430617c2be4490f424d251d09d835ed3 |
Chinese Dataset Distilled from the Full-Strength DeepSeek-R1 (Chinese-Data-Distill-From-R1)
🤗 Hugging Face | 🤖 ModelScope | 🚀 Github | 📑 Blog
Note: a version ready for direct SFT use is provided (click to download). It merges the thinking and the answer in each record into an output field, so most SFT training frameworks can load it for training directly.
This is an open-source Chinese dataset distilled from the full-strength R1. It contains not only math data but also a large amount of general-purpose data, 110K samples in total.
Why open-source this data?
R1 is extremely strong, and small models SFT-trained on R1-distilled data also show strong performance; however, a search reveals that most open-source R1 distillation datasets are English-only. Meanwhile, the R1 report shows that the distilled models also used some general-scenario datasets.
To help everyone better reproduce the performance of R1-distilled models, this Chinese dataset is hereby open-sourced. Its data distribution is as follows:
Math: 36,568 samples in total,
Exam: 2,432 samples in total,
STEM: 12,648 samples in total,… See the full description on the dataset page: https://huggingface.co/datasets/Congliu/Chinese-DeepSeek-R1-Distill-data-110k. | 3,993 | 11,935 | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:question-answering",
"language:zh",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-02-17T11:45:09 | null | null |
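The card mentions an SFT-ready variant in which each record's thinking and answer are merged into a single `output` field. A sketch of such a merge (the input field names `reasoning_content`/`content` and the `<think>` delimiters are assumptions, not confirmed by the card):

```python
def to_sft_output(record: dict) -> dict:
    """Merge a reasoning trace and a final answer into one 'output'
    field, wrapping the trace in <think> tags (delimiter convention
    is an assumption)."""
    merged = (
        "<think>\n" + record["reasoning_content"].strip() + "\n</think>\n"
        + record["content"].strip()
    )
    return {"instruction": record["instruction"], "output": merged}

# Hypothetical record:
toy = {
    "instruction": "1+1等于几?",
    "reasoning_content": "两个一相加。",
    "content": "答案是2。",
}
row = to_sft_output(toy)
print(row["output"].startswith("<think>"))  # → True
```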
621ffdd236468d709f181f06 | openai/openai_humaneval | openai | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "paperswithcode_id": "humaneval", "pretty_name": "OpenAI HumanEval", "tags": ["code-generation"], "dataset_info": {"config_name": "openai_humaneval", "features": [{"name": "task_id", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "canonical_solution", "dtype": "string"}, {"name": "test", "dtype": "string"}, {"name": "entry_point", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 194394, "num_examples": 164}], "download_size": 83920, "dataset_size": 194394}, "configs": [{"config_name": "openai_humaneval", "data_files": [{"split": "test", "path": "openai_humaneval/test-*"}], "default": true}]} | false | null | 2024-01-04T16:08:05 | 304 | 12 | false | 7dce6050a7d6d172f3cc5c32aa97f52fa1a2e544 |
Dataset Card for OpenAI HumanEval
Dataset Summary
The HumanEval dataset released by OpenAI includes 164 programming problems with a function signature, docstring, body, and several unit tests. They were handwritten to ensure they are not included in the training set of code generation models.
Supported Tasks and Leaderboards
Languages
The programming problems are written in Python and contain English natural text in comments and… See the full description on the dataset page: https://huggingface.co/datasets/openai/openai_humaneval. | 92,914 | 3,107,060 | [
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2107.03374",
"region:us",
"code-generation"
] | 2022-03-02T23:29:22 | humaneval | null |
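Each HumanEval record carries a `prompt`, a `test` string that defines a `check(candidate)` function, and an `entry_point` naming the function under test; a harness appends a model completion to the prompt, executes the result, and calls the check. A minimal unsandboxed sketch (a real harness must isolate the `exec` call, and the toy problem below is hypothetical, not one of the 164 real ones):

```python
def passes(problem: dict, completion: str) -> bool:
    """Return True if the completion, appended to the prompt,
    passes the problem's check(candidate) tests."""
    program = problem["prompt"] + completion + "\n" + problem["test"]
    env: dict = {}
    try:
        exec(program, env)                          # define solution + check()
        env["check"](env[problem["entry_point"]])   # run the unit tests
        return True
    except Exception:
        return False

# Hypothetical problem in the record format described above:
toy = {
    "task_id": "Toy/0",
    "prompt": "def add(a, b):\n",
    "entry_point": "add",
    "test": "def check(candidate):\n    assert candidate(2, 3) == 5\n",
}
print(passes(toy, "    return a + b"))  # → True
print(passes(toy, "    return a - b"))  # → False
```

The pass@k metric then averages such boolean outcomes over k sampled completions per problem.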
660e7b9b4636ce2b0e77b699 | mozilla-foundation/common_voice_17_0 | mozilla-foundation | {"pretty_name": "Common Voice Corpus 17.0", "annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["ab", "af", "am", "ar", "as", "ast", "az", "ba", "bas", "be", "bg", "bn", "br", "ca", "ckb", "cnh", "cs", "cv", "cy", "da", "de", "dv", "dyu", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gl", "gn", "ha", "he", "hi", "hsb", "ht", "hu", "hy", "ia", "id", "ig", "is", "it", "ja", "ka", "kab", "kk", "kmr", "ko", "ky", "lg", "lij", "lo", "lt", "ltg", "lv", "mdf", "mhr", "mk", "ml", "mn", "mr", "mrj", "mt", "myv", "nan", "ne", "nhi", "nl", "nn", "nso", "oc", "or", "os", "pa", "pl", "ps", "pt", "quy", "rm", "ro", "ru", "rw", "sah", "sat", "sc", "sk", "skr", "sl", "sq", "sr", "sv", "sw", "ta", "te", "th", "ti", "tig", "tk", "tok", "tr", "tt", "tw", "ug", "uk", "ur", "uz", "vi", "vot", "yi", "yo", "yue", "zgh", "zh", "zu", "zza"], "language_bcp47": ["zh-CN", "zh-HK", "zh-TW", "sv-SE", "rm-sursilv", "rm-vallader", "pa-IN", "nn-NO", "ne-NP", "nan-tw", "hy-AM", "ga-IE", "fy-NL"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "source_datasets": ["extended|common_voice"], "paperswithcode_id": "common-voice", "extra_gated_prompt": "By clicking on \u201cAccess repository\u201d below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset."} | false | null | 2024-06-16T13:50:23 | 261 | 12 | false | b10d53980ef166bc24ce3358471c1970d7e6b5ec |
Dataset Card for Common Voice Corpus 17.0
Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 31175 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 20408 validated hours in 124 languages, but more voices and languages are always added.
Take a look at the Languages page to… See the full description on the dataset page: https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0. | 40,908 | 469,635 | [
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"language:ab",
"language:af",
"language:am",
"language:ar",
"language:as",
"language:ast",
"language:az",
"language:ba",
"language:bas",
"language:be",
"language:bg",
"language:bn",
"language:br",
"language:ca",
"language:ckb",
"language:cnh",
"language:cs",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:dv",
"language:dyu",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:fy",
"language:ga",
"language:gl",
"language:gn",
"language:ha",
"language:he",
"language:hi",
"language:hsb",
"language:ht",
"language:hu",
"language:hy",
"language:ia",
"language:id",
"language:ig",
"language:is",
"language:it",
"language:ja",
"language:ka",
"language:kab",
"language:kk",
"language:kmr",
"language:ko",
"language:ky",
"language:lg",
"language:lij",
"language:lo",
"language:lt",
"language:ltg",
"language:lv",
"language:mdf",
"language:mhr",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:mrj",
"language:mt",
"language:myv",
"language:nan",
"language:ne",
"language:nhi",
"language:nl",
"language:nn",
"language:nso",
"language:oc",
"language:or",
"language:os",
"language:pa",
"language:pl",
"language:ps",
"language:pt",
"language:quy",
"language:rm",
"language:ro",
"language:ru",
"language:rw",
"language:sah",
"language:sat",
"language:sc",
"language:sk",
"language:skr",
"language:sl",
"language:sq",
"language:sr",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:th",
"language:ti",
"language:tig",
"language:tk",
"language:tok",
"language:tr",
"language:tt",
"language:tw",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:vot",
"language:yi",
"language:yo",
"language:yue",
"language:zgh",
"language:zh",
"language:zu",
"language:zza",
"license:cc0-1.0",
"size_categories:10M<n<100M",
"modality:audio",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:1912.06670",
"region:us"
] | 2024-04-04T10:06:19 | common-voice | @inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
} |
67efae8ed3b5fdf4e5d9c56a | davanstrien/reasoning-required | davanstrien | {"language": "en", "license": "mit", "tags": ["curator", "reasoning-datasets-competition", "reasoning"], "task_categories": ["text-classification", "text-generation"], "pretty_name": "Reasoning Required", "size_categories": ["1K<n<10K"]} | false | null | 2025-04-10T10:13:25 | 12 | 12 | false | ca33daa54eb69f8f92d4de44a02bc3b9a4d31034 |
Dataset Card for the Reasoning Required Dataset
2025 has seen a massive growing interest in reasoning datasets. Currently, the majority of these datasets are focused on coding and math problems. This dataset – and the associated models – aim to make it easier to create reasoning datasets for a wider variety of domains. This is achieved by making it more feasible to leverage text "in the wild" and use a small encoder-only model to classify the level of reasoning complexity… See the full description on the dataset page: https://huggingface.co/datasets/davanstrien/reasoning-required. | 254 | 273 | [
"task_categories:text-classification",
"task_categories:text-generation",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2502.13124",
"region:us",
"curator",
"reasoning-datasets-competition",
"reasoning"
] | 2025-04-04T10:03:58 | null | null |
67a89e79556fa47a174b6c7b | agentica-org/DeepScaleR-Preview-Dataset | agentica-org | {"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"]} | false | null | 2025-02-10T09:51:18 | 103 | 11 | false | b6ae8c60f5c1f2b594e2140b91c49c9ad0949e29 |
Data
Our training dataset consists of approximately 40,000 unique mathematics problem-answer pairs compiled from:
AIME (American Invitational Mathematics Examination) problems (1984-2023)
AMC (American Mathematics Competition) problems (prior to 2023)
Omni-MATH dataset
Still dataset
Format
Each row in the JSON dataset contains:
problem: The mathematical question text, formatted with LaTeX notation.
solution: Official solution to the problem, including LaTeX formatting… See the full description on the dataset page: https://huggingface.co/datasets/agentica-org/DeepScaleR-Preview-Dataset. | 3,600 | 7,435 | [
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-02-09T12:24:25 | null | null |
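As a minimal sketch of the DeepScaleR row layout described above (the `problem` and `solution` fields; the values here are placeholders rather than real dataset entries), each row can be handled as ordinary JSON:

```python
import json

# Placeholder row mimicking the documented JSON layout: a LaTeX-formatted
# "problem" and an official "solution". These strings are illustrative
# assumptions, not actual entries from the dataset.
row_text = json.dumps({
    "problem": "Compute $1 + 1$.",
    "solution": "By direct addition, $1 + 1 = 2$.",
})

row = json.loads(row_text)
print(row["problem"])  # the mathematical question text
```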
6791fcbb49c4df6d798ca7c9 | cais/hle | cais | {"license": "mit", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "image", "dtype": "string"}, {"name": "image_preview", "dtype": "image"}, {"name": "answer", "dtype": "string"}, {"name": "answer_type", "dtype": "string"}, {"name": "author_name", "dtype": "string"}, {"name": "rationale", "dtype": "string"}, {"name": "rationale_image", "dtype": "image"}, {"name": "raw_subject", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "canary", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 284635618, "num_examples": 2500}], "download_size": 274582371, "dataset_size": 284635618}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]} | false | null | 2025-04-04T04:00:14 | 303 | 10 | false | 1e33bd2d1346480b397ad94845067c4a088a33d3 |
Humanity's Last Exam
🌐 Website | 📄 Paper | GitHub
Center for AI Safety & Scale AI
Humanity's Last Exam (HLE) is a multi-modal benchmark at the frontier of human knowledge, designed to be the final closed-ended academic benchmark of its kind with broad subject coverage. Humanity's Last Exam consists of 2,500 questions across dozens of subjects, including mathematics, humanities, and the natural sciences. HLE is developed globally by subject-matter experts and consists of… See the full description on the dataset page: https://huggingface.co/datasets/cais/hle. | 7,899 | 19,532 | [
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-01-23T08:24:27 | null | null |
67c03fd6b9fe27a2ac49784d | open-r1/codeforces-cots | open-r1 | {"dataset_info": [{"config_name": "checker_interactor", "features": [{"name": "id", "dtype": "string"}, {"name": "aliases", "sequence": "string"}, {"name": "contest_id", "dtype": "string"}, {"name": "contest_name", "dtype": "string"}, {"name": "contest_type", "dtype": "string"}, {"name": "contest_start", "dtype": "int64"}, {"name": "contest_start_year", "dtype": "int64"}, {"name": "index", "dtype": "string"}, {"name": "time_limit", "dtype": "float64"}, {"name": "memory_limit", "dtype": "float64"}, {"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "input_format", "dtype": "string"}, {"name": "output_format", "dtype": "string"}, {"name": "interaction_format", "dtype": "string"}, {"name": "note", "dtype": "string"}, {"name": "examples", "list": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}]}, {"name": "editorial", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "generation", "dtype": "string"}, {"name": "finish_reason", "dtype": "string"}, {"name": "api_metadata", "struct": [{"name": "completion_tokens", "dtype": "int64"}, {"name": "prompt_tokens", "dtype": "int64"}, {"name": "prompt_tokens_details", "dtype": "null"}, {"name": "total_tokens", "dtype": "int64"}]}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 994149425, "num_examples": 35718}], "download_size": 274975300, "dataset_size": 994149425}, {"config_name": "solutions", "features": [{"name": "id", "dtype": "string"}, {"name": "aliases", "sequence": "string"}, {"name": "contest_id", "dtype": "string"}, {"name": "contest_name", "dtype": "string"}, {"name": "contest_type", "dtype": "string"}, {"name": "contest_start", "dtype": "int64"}, {"name": "contest_start_year", "dtype": "int64"}, {"name": "index", "dtype": "string"}, {"name": "time_limit", "dtype": 
"float64"}, {"name": "memory_limit", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "input_format", "dtype": "string"}, {"name": "output_format", "dtype": "string"}, {"name": "examples", "list": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}]}, {"name": "note", "dtype": "string"}, {"name": "editorial", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "generation", "dtype": "string"}, {"name": "finish_reason", "dtype": "string"}, {"name": "api_metadata", "struct": [{"name": "completion_tokens", "dtype": "int64"}, {"name": "prompt_tokens", "dtype": "int64"}, {"name": "prompt_tokens_details", "dtype": "null"}, {"name": "total_tokens", "dtype": "int64"}]}, {"name": "interaction_format", "dtype": "string"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 4968074271, "num_examples": 47780}], "download_size": 1887049179, "dataset_size": 4968074271}, {"config_name": "solutions_decontaminated", "features": [{"name": "id", "dtype": "string"}, {"name": "aliases", "sequence": "string"}, {"name": "contest_id", "dtype": "string"}, {"name": "contest_name", "dtype": "string"}, {"name": "contest_type", "dtype": "string"}, {"name": "contest_start", "dtype": "int64"}, {"name": "contest_start_year", "dtype": "int64"}, {"name": "index", "dtype": "string"}, {"name": "time_limit", "dtype": "float64"}, {"name": "memory_limit", "dtype": "float64"}, {"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "input_format", "dtype": "string"}, {"name": "output_format", "dtype": "string"}, {"name": "examples", "list": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}]}, {"name": "note", "dtype": "string"}, {"name": "editorial", "dtype": "string"}, {"name": "problem", "dtype": "string"}, {"name": "generation", "dtype": 
"string"}, {"name": "finish_reason", "dtype": "string"}, {"name": "api_metadata", "struct": [{"name": "completion_tokens", "dtype": "int64"}, {"name": "prompt_tokens", "dtype": "int64"}, {"name": "prompt_tokens_details", "dtype": "null"}, {"name": "total_tokens", "dtype": "int64"}]}, {"name": "interaction_format", "dtype": "string"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "problem_type", "dtype": "string"}, {"name": "public_tests", "struct": [{"name": "input", "sequence": "string"}, {"name": "output", "sequence": "string"}]}, {"name": "private_tests", "struct": [{"name": "input", "sequence": "string"}, {"name": "output", "sequence": "string"}]}, {"name": "generated_tests", "struct": [{"name": "input", "sequence": "string"}, {"name": "output", "sequence": "string"}]}, {"name": "public_tests_ms", "list": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}]}, {"name": "failed_solutions", "list": [{"name": "code", "dtype": "string"}, {"name": "passedTestCount", "dtype": "int64"}, {"name": "programmingLanguage", "dtype": "string"}, {"name": "verdict", "dtype": "string"}]}, {"name": "accepted_solutions", "list": [{"name": "code", "dtype": "string"}, {"name": "passedTestCount", "dtype": "int64"}, {"name": "passed_test_count", "dtype": "null"}, {"name": "programmingLanguage", "dtype": "string"}, {"name": "programming_language", "dtype": "string"}, {"name": "submission_id", "dtype": "string"}, {"name": "verdict", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 6719356671, "num_examples": 40665}], "download_size": 2023394671, "dataset_size": 6719356671}, {"config_name": "solutions_py", "features": [{"name": "id", "dtype": "string"}, {"name": "aliases", "sequence": "string"}, {"name": "contest_id", "dtype": "string"}, {"name": "contest_name", "dtype": "string"}, {"name": "contest_type", "dtype": "string"}, {"name": "contest_start", "dtype": "int64"}, 
{"name": "contest_start_year", "dtype": "int64"}, {"name": "index", "dtype": "string"}, {"name": "time_limit", "dtype": "float64"}, {"name": "memory_limit", "dtype": "float64"}, {"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "input_format", "dtype": "string"}, {"name": "output_format", "dtype": "string"}, {"name": "interaction_format", "dtype": "string"}, {"name": "note", "dtype": "string"}, {"name": "examples", "list": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}]}, {"name": "editorial", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "generation", "dtype": "string"}, {"name": "finish_reason", "dtype": "string"}, {"name": "api_metadata", "struct": [{"name": "completion_tokens", "dtype": "int64"}, {"name": "prompt_tokens", "dtype": "int64"}, {"name": "prompt_tokens_details", "dtype": "null"}, {"name": "total_tokens", "dtype": "int64"}]}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1000253222, "num_examples": 9556}], "download_size": 411697337, "dataset_size": 1000253222}, {"config_name": "solutions_py_decontaminated", "features": [{"name": "id", "dtype": "string"}, {"name": "aliases", "sequence": "string"}, {"name": "contest_id", "dtype": "string"}, {"name": "contest_name", "dtype": "string"}, {"name": "contest_type", "dtype": "string"}, {"name": "contest_start", "dtype": "int64"}, {"name": "contest_start_year", "dtype": "int64"}, {"name": "index", "dtype": "string"}, {"name": "time_limit", "dtype": "float64"}, {"name": "memory_limit", "dtype": "float64"}, {"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "input_format", "dtype": "string"}, {"name": "output_format", "dtype": "string"}, {"name": "interaction_format", "dtype": "string"}, {"name": "note", "dtype": "string"}, {"name": "examples", "list": [{"name": "input", 
"dtype": "string"}, {"name": "output", "dtype": "string"}]}, {"name": "editorial", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "generation", "dtype": "string"}, {"name": "finish_reason", "dtype": "string"}, {"name": "api_metadata", "struct": [{"name": "completion_tokens", "dtype": "int64"}, {"name": "prompt_tokens", "dtype": "int64"}, {"name": "prompt_tokens_details", "dtype": "null"}, {"name": "total_tokens", "dtype": "int64"}]}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "accepted_solutions", "list": [{"name": "code", "dtype": "string"}, {"name": "passedTestCount", "dtype": "int64"}, {"name": "passed_test_count", "dtype": "null"}, {"name": "programmingLanguage", "dtype": "string"}, {"name": "programming_language", "dtype": "string"}, {"name": "submission_id", "dtype": "string"}, {"name": "verdict", "dtype": "string"}]}, {"name": "failed_solutions", "list": [{"name": "code", "dtype": "string"}, {"name": "passedTestCount", "dtype": "int64"}, {"name": "programmingLanguage", "dtype": "string"}, {"name": "verdict", "dtype": "string"}]}, {"name": "generated_tests", "struct": [{"name": "input", "sequence": "string"}, {"name": "output", "sequence": "string"}]}, {"name": "private_tests", "struct": [{"name": "input", "sequence": "string"}, {"name": "output", "sequence": "string"}]}, {"name": "problem_type", "dtype": "string"}, {"name": "public_tests", "struct": [{"name": "input", "sequence": "string"}, {"name": "output", "sequence": "string"}]}, {"name": "public_tests_ms", "list": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1349328880, "num_examples": 8133}], "download_size": 500182086, "dataset_size": 1349328880}, {"config_name": "solutions_short_and_long_decontaminated", "features": [{"name": "id", "dtype": "string"}, {"name": "aliases", "sequence": "string"}, {"name": "contest_id", "dtype": 
"string"}, {"name": "contest_name", "dtype": "string"}, {"name": "contest_type", "dtype": "string"}, {"name": "contest_start", "dtype": "int64"}, {"name": "contest_start_year", "dtype": "int64"}, {"name": "index", "dtype": "string"}, {"name": "time_limit", "dtype": "float64"}, {"name": "memory_limit", "dtype": "float64"}, {"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "input_format", "dtype": "string"}, {"name": "output_format", "dtype": "string"}, {"name": "examples", "list": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}]}, {"name": "note", "dtype": "string"}, {"name": "editorial", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "generation", "dtype": "string"}, {"name": "finish_reason", "dtype": "string"}, {"name": "api_metadata", "struct": [{"name": "completion_tokens", "dtype": "int64"}, {"name": "prompt_tokens", "dtype": "int64"}, {"name": "prompt_tokens_details", "dtype": "null"}, {"name": "total_tokens", "dtype": "int64"}]}, {"name": "interaction_format", "dtype": "string"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "accepted_solutions", "list": [{"name": "code", "dtype": "string"}, {"name": "passedTestCount", "dtype": "int64"}, {"name": "passed_test_count", "dtype": "null"}, {"name": "programmingLanguage", "dtype": "string"}, {"name": "programming_language", "dtype": "string"}, {"name": "submission_id", "dtype": "string"}, {"name": "verdict", "dtype": "string"}]}, {"name": "failed_solutions", "list": [{"name": "code", "dtype": "string"}, {"name": "passedTestCount", "dtype": "int64"}, {"name": "programmingLanguage", "dtype": "string"}, {"name": "verdict", "dtype": "string"}]}, {"name": "generated_tests", "struct": [{"name": "input", "sequence": "string"}, {"name": "output", "sequence": "string"}]}, {"name": "private_tests", "struct": [{"name": "input", "sequence": "string"}, {"name": 
"output", "sequence": "string"}]}, {"name": "problem_type", "dtype": "string"}, {"name": "public_tests", "struct": [{"name": "input", "sequence": "string"}, {"name": "output", "sequence": "string"}]}, {"name": "public_tests_ms", "list": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2699204607, "num_examples": 16266}], "download_size": 1002365269, "dataset_size": 2699204607}, {"config_name": "solutions_w_editorials", "features": [{"name": "id", "dtype": "string"}, {"name": "aliases", "sequence": "string"}, {"name": "contest_id", "dtype": "string"}, {"name": "contest_name", "dtype": "string"}, {"name": "contest_type", "dtype": "string"}, {"name": "contest_start", "dtype": "int64"}, {"name": "contest_start_year", "dtype": "int64"}, {"name": "index", "dtype": "string"}, {"name": "time_limit", "dtype": "float64"}, {"name": "memory_limit", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "input_format", "dtype": "string"}, {"name": "output_format", "dtype": "string"}, {"name": "interaction_format", "dtype": "string"}, {"name": "note", "dtype": "string"}, {"name": "examples", "list": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}]}, {"name": "editorial", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "generation", "dtype": "string"}, {"name": "finish_reason", "dtype": "string"}, {"name": "api_metadata", "struct": [{"name": "completion_tokens", "dtype": "int64"}, {"name": "prompt_tokens", "dtype": "int64"}, {"name": "prompt_tokens_details", "dtype": "null"}, {"name": "total_tokens", "dtype": "int64"}]}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2649620432, "num_examples": 29180}], "download_size": 972089090, "dataset_size": 2649620432}, {"config_name": 
"solutions_w_editorials_decontaminated", "features": [{"name": "id", "dtype": "string"}, {"name": "aliases", "sequence": "string"}, {"name": "contest_id", "dtype": "string"}, {"name": "contest_name", "dtype": "string"}, {"name": "contest_type", "dtype": "string"}, {"name": "contest_start", "dtype": "int64"}, {"name": "contest_start_year", "dtype": "int64"}, {"name": "index", "dtype": "string"}, {"name": "time_limit", "dtype": "float64"}, {"name": "memory_limit", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "input_format", "dtype": "string"}, {"name": "output_format", "dtype": "string"}, {"name": "interaction_format", "dtype": "string"}, {"name": "note", "dtype": "string"}, {"name": "examples", "list": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}]}, {"name": "editorial", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "generation", "dtype": "string"}, {"name": "finish_reason", "dtype": "string"}, {"name": "api_metadata", "struct": [{"name": "completion_tokens", "dtype": "int64"}, {"name": "prompt_tokens", "dtype": "int64"}, {"name": "prompt_tokens_details", "dtype": "null"}, {"name": "total_tokens", "dtype": "int64"}]}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "accepted_solutions", "list": [{"name": "code", "dtype": "string"}, {"name": "passedTestCount", "dtype": "int64"}, {"name": "passed_test_count", "dtype": "null"}, {"name": "programmingLanguage", "dtype": "string"}, {"name": "programming_language", "dtype": "string"}, {"name": "submission_id", "dtype": "string"}, {"name": "verdict", "dtype": "string"}]}, {"name": "failed_solutions", "list": [{"name": "code", "dtype": "string"}, {"name": "passedTestCount", "dtype": "int64"}, {"name": "programmingLanguage", "dtype": "string"}, {"name": "verdict", "dtype": "string"}]}, {"name": "generated_tests", "struct": [{"name": 
"input", "sequence": "string"}, {"name": "output", "sequence": "string"}]}, {"name": "private_tests", "struct": [{"name": "input", "sequence": "string"}, {"name": "output", "sequence": "string"}]}, {"name": "problem_type", "dtype": "string"}, {"name": "public_tests", "struct": [{"name": "input", "sequence": "string"}, {"name": "output", "sequence": "string"}]}, {"name": "public_tests_ms", "list": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 3738669884, "num_examples": 24490}], "download_size": 1012247387, "dataset_size": 3738669884}, {"config_name": "solutions_w_editorials_py", "features": [{"name": "id", "dtype": "string"}, {"name": "aliases", "sequence": "string"}, {"name": "contest_id", "dtype": "string"}, {"name": "contest_name", "dtype": "string"}, {"name": "contest_type", "dtype": "string"}, {"name": "contest_start", "dtype": "int64"}, {"name": "contest_start_year", "dtype": "int64"}, {"name": "index", "dtype": "string"}, {"name": "time_limit", "dtype": "float64"}, {"name": "memory_limit", "dtype": "float64"}, {"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "input_format", "dtype": "string"}, {"name": "output_format", "dtype": "string"}, {"name": "interaction_format", "dtype": "string"}, {"name": "note", "dtype": "string"}, {"name": "examples", "list": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}]}, {"name": "editorial", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "generation", "dtype": "string"}, {"name": "finish_reason", "dtype": "string"}, {"name": "api_metadata", "struct": [{"name": "completion_tokens", "dtype": "int64"}, {"name": "prompt_tokens", "dtype": "int64"}, {"name": "prompt_tokens_details", "dtype": "null"}, {"name": "total_tokens", "dtype": "int64"}]}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": 
[{"name": "train", "num_bytes": 1067124847, "num_examples": 11672}], "download_size": 415023817, "dataset_size": 1067124847}, {"config_name": "solutions_w_editorials_py_decontaminated", "features": [{"name": "id", "dtype": "string"}, {"name": "aliases", "sequence": "string"}, {"name": "contest_id", "dtype": "string"}, {"name": "contest_name", "dtype": "string"}, {"name": "contest_type", "dtype": "string"}, {"name": "contest_start", "dtype": "int64"}, {"name": "contest_start_year", "dtype": "int64"}, {"name": "index", "dtype": "string"}, {"name": "time_limit", "dtype": "float64"}, {"name": "memory_limit", "dtype": "float64"}, {"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "input_format", "dtype": "string"}, {"name": "output_format", "dtype": "string"}, {"name": "interaction_format", "dtype": "string"}, {"name": "note", "dtype": "string"}, {"name": "examples", "list": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}]}, {"name": "editorial", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "generation", "dtype": "string"}, {"name": "finish_reason", "dtype": "string"}, {"name": "api_metadata", "struct": [{"name": "completion_tokens", "dtype": "int64"}, {"name": "prompt_tokens", "dtype": "int64"}, {"name": "prompt_tokens_details", "dtype": "null"}, {"name": "total_tokens", "dtype": "int64"}]}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "accepted_solutions", "list": [{"name": "code", "dtype": "string"}, {"name": "passedTestCount", "dtype": "int64"}, {"name": "passed_test_count", "dtype": "null"}, {"name": "programmingLanguage", "dtype": "string"}, {"name": "programming_language", "dtype": "string"}, {"name": "submission_id", "dtype": "string"}, {"name": "verdict", "dtype": "string"}]}, {"name": "failed_solutions", "list": [{"name": "code", "dtype": "string"}, {"name": "passedTestCount", "dtype": "int64"}, 
{"name": "programmingLanguage", "dtype": "string"}, {"name": "verdict", "dtype": "string"}]}, {"name": "generated_tests", "struct": [{"name": "input", "sequence": "string"}, {"name": "output", "sequence": "string"}]}, {"name": "private_tests", "struct": [{"name": "input", "sequence": "string"}, {"name": "output", "sequence": "string"}]}, {"name": "problem_type", "dtype": "string"}, {"name": "public_tests", "struct": [{"name": "input", "sequence": "string"}, {"name": "output", "sequence": "string"}]}, {"name": "public_tests_ms", "list": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1499075280, "num_examples": 9796}], "download_size": 466078291, "dataset_size": 1499075280}, {"config_name": "test_input_generator", "features": [{"name": "id", "dtype": "string"}, {"name": "aliases", "sequence": "string"}, {"name": "contest_id", "dtype": "string"}, {"name": "contest_name", "dtype": "string"}, {"name": "contest_type", "dtype": "string"}, {"name": "contest_start", "dtype": "int64"}, {"name": "contest_start_year", "dtype": "int64"}, {"name": "index", "dtype": "string"}, {"name": "time_limit", "dtype": "float64"}, {"name": "memory_limit", "dtype": "float64"}, {"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "input_format", "dtype": "string"}, {"name": "output_format", "dtype": "string"}, {"name": "examples", "list": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}]}, {"name": "note", "dtype": "string"}, {"name": "editorial", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "generation", "dtype": "string"}, {"name": "finish_reason", "dtype": "string"}, {"name": "api_metadata", "struct": [{"name": "completion_tokens", "dtype": "int64"}, {"name": "completion_tokens_details", "dtype": "null"}, {"name": "prompt_tokens", "dtype": "int64"}, {"name": "prompt_tokens_details", "dtype": "null"}, {"name": "total_tokens", 
"dtype": "int64"}]}, {"name": "interaction_format", "dtype": "string"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1851104290, "num_examples": 20620}], "download_size": 724157877, "dataset_size": 1851104290}], "configs": [{"config_name": "checker_interactor", "data_files": [{"split": "train", "path": "checker_interactor/train-*"}]}, {"config_name": "solutions", "default": true, "data_files": [{"split": "train", "path": "solutions/train-*"}]}, {"config_name": "solutions_decontaminated", "data_files": [{"split": "train", "path": "solutions_decontaminated/train-*"}]}, {"config_name": "solutions_py", "data_files": [{"split": "train", "path": "solutions_py/train-*"}]}, {"config_name": "solutions_py_decontaminated", "data_files": [{"split": "train", "path": "solutions_py_decontaminated/train-*"}]}, {"config_name": "solutions_short_and_long_decontaminated", "data_files": [{"split": "train", "path": "solutions_short_and_long_decontaminated/train-*"}]}, {"config_name": "solutions_w_editorials", "data_files": [{"split": "train", "path": "solutions_w_editorials/train-*"}]}, {"config_name": "solutions_w_editorials_decontaminated", "data_files": [{"split": "train", "path": "solutions_w_editorials_decontaminated/train-*"}]}, {"config_name": "solutions_w_editorials_py", "data_files": [{"split": "train", "path": "solutions_w_editorials_py/train-*"}]}, {"config_name": "solutions_w_editorials_py_decontaminated", "data_files": [{"split": "train", "path": "solutions_w_editorials_py_decontaminated/train-*"}]}, {"config_name": "test_input_generator", "data_files": [{"split": "train", "path": "test_input_generator/train-*"}]}], "license": "cc-by-4.0"} | false | null | 2025-03-28T12:21:06 | 139 | 10 | false | 39ac85c150806230473c70ad72c31f6232fe3f41 |
Dataset Card for CodeForces-CoTs
Dataset description
CodeForces-CoTs is a large-scale dataset for training reasoning models on competitive programming tasks. It consists of 10k CodeForces problems with up to five reasoning traces generated by DeepSeek R1. We did not filter the traces for correctness, but found that around 84% of the Python ones pass the public tests.
The dataset consists of several subsets:
solutions: we prompt R1 to solve the problem and produce code.… See the full description on the dataset page: https://huggingface.co/datasets/open-r1/codeforces-cots. | 12,548 | 13,879 | [
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-02-27T10:35:02 | null | null |
67e90b135e63bac35a2dbaf0 | MohamedRashad/Quran-Recitations | MohamedRashad | {"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "audio", "dtype": "audio"}], "splits": [{"name": "train", "num_bytes": 49579449331.918, "num_examples": 124689}], "download_size": 33136131149, "dataset_size": 49579449331.918}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "task_categories": ["automatic-speech-recognition", "text-to-speech"], "language": ["ar"], "size_categories": ["100K<n<1M"]} | false | null | 2025-03-30T11:19:54 | 38 | 10 | false | 65ee6114d526c02f7f96d696bb254a2dd666270c |
Quran-Recitations Dataset
Overview
The Quran-Recitations dataset is a rich and reverent collection of Quranic verses, meticulously paired with their respective recitations by esteemed Qaris. This dataset serves as a valuable resource for researchers, developers, and students interested in Quranic studies, speech recognition, audio analysis, and Islamic applications.
Dataset Structure
source: The name of the Qari (reciter) who performed… See the full description on the dataset page: https://huggingface.co/datasets/MohamedRashad/Quran-Recitations. | 1,282 | 1,282 | [
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"language:ar",
"size_categories:100K<n<1M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-03-30T09:12:51 | null | null |
67f332c1cef233be93ec1e05 | SparkAudio/voxbox | SparkAudio | {"license": "cc-by-nc-sa-4.0", "language": ["zh", "en"], "tags": ["speech", "audio"], "pretty_name": "voxbox", "size_categories": ["10M<n<100M"], "task_categories": ["text-to-speech"]} | false | null | 2025-04-11T05:04:07 | 10 | 10 | false | e746936c2be2ba1af85f59a1ecdb5d563a77ca3e |
VoxBox
This dataset is a curated collection of bilingual speech corpora annotated with clean transcriptions and rich metadata including age, gender, and emotion.
Dataset Structure
.
├── audios/
│ └── aishell-3/ # Audio files (organised by sub-corpus)
│ └── ...
└── metadata/
├── aishell-3.jsonl
├── casia.jsonl
├── commonvoice_cn.jsonl
├── ...
└── wenetspeech4tts.jsonl # JSONL metadata files
Each JSONL file corresponds to a… See the full description on the dataset page: https://huggingface.co/datasets/SparkAudio/voxbox. | 996 | 996 | [
"task_categories:text-to-speech",
"language:zh",
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:10M<n<100M",
"format:webdataset",
"modality:audio",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"arxiv:2503.01710",
"region:us",
"speech",
"audio"
] | 2025-04-07T02:04:49 | null | null |
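Given the VoxBox layout above, each line of a `metadata/*.jsonl` file can be parsed as standard JSONL. The sketch below uses hypothetical field names inferred from the card's description (audio path, transcription, age, gender, emotion); the exact schema is an assumption, not taken from the dataset files.

```python
import json

# Hypothetical metadata line in the style of metadata/aishell-3.jsonl;
# field names are assumed from the card's description, not verified.
line = (
    '{"audio": "audios/aishell-3/SSB0005_0001.wav", '
    '"text": "...", "age": "adult", '
    '"gender": "female", "emotion": "neutral"}'
)

record = json.loads(line)
print(record["audio"])  # clip path relative to the repo root
```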
67b20fc10861cec33b3afb8a | Conard/fortune-telling | Conard | {"license": "mit"} | false | null | 2025-02-17T05:13:43 | 119 | 9 | false | 6261fe0d35a75997972bbfcd9828020e340303fb | null | 4,949 | 8,463 | [
"license:mit",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-02-16T16:18:09 | null | null |
67b58abdbc707d7ed36e6750 | KRX-Data/Won-Instruct | KRX-Data | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "original_response", "dtype": "string"}, {"name": "Qwen/Qwen2.5-1.5B-Instruct_response", "dtype": "string"}, {"name": "Qwen/Qwen2.5-7B-Instruct_response", "dtype": "string"}, {"name": "google/gemma-2-2b-it_response", "dtype": "string"}, {"name": "google/gemma-2-9b-it_response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 846093226, "num_examples": 86007}], "download_size": 375880264, "dataset_size": 846093226}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | false | null | 2025-04-11T05:03:20 | 10 | 9 | false | 9ff85bc243b7e1aa30970ef63da0bbfaaeb371e8 | 🇺🇸 English | 🇰🇷 한국어
Introduction
The ₩ON-Instruct is a comprehensive instruction-following dataset tailored for training Korean language models specialized in financial reasoning and domain-specific financial tasks.
This dataset was meticulously assembled through rigorous filtering and quality assurance processes, aiming to enhance the reasoning abilities of large language models (LLMs) within the financial domain, specifically tuned for Korean financial tasks.
The dataset… See the full description on the dataset page: https://huggingface.co/datasets/KRX-Data/Won-Instruct. | 9 | 69 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2503.17963",
"region:us"
] | 2025-02-19T07:39:41 | null | null |
67cd6c25b770987b3f80af97 | a-m-team/AM-DeepSeek-R1-Distilled-1.4M | a-m-team | {"license": "cc-by-nc-4.0", "task_categories": ["text-generation"], "language": ["zh", "en"], "tags": ["code", "math", "reasoning", "thinking", "deepseek-r1", "distill"], "size_categories": ["1M<n<10M"], "configs": [{"config_name": "am_0.5M", "data_files": "am_0.5M.jsonl.zst", "features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "info", "struct": [{"name": "answer_content", "dtype": "string"}, {"name": "reference_answer", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "test_case", "struct": [{"name": "test_code", "dtype": "string"}, {"name": "test_entry_point", "dtype": "string"}]}, {"name": "think_content", "dtype": "string"}]}, {"name": "role", "dtype": "string"}]}]}, {"config_name": "am_0.9M", "data_files": "am_0.9M.jsonl.zst", "features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "info", "struct": [{"name": "answer_content", "dtype": "string"}, {"name": "reference_answer", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "test_case", "struct": [{"name": "test_code", "dtype": "string"}, {"name": "test_entry_point", "dtype": "string"}]}, {"name": "think_content", "dtype": "string"}]}, {"name": "role", "dtype": "string"}]}]}, {"config_name": "am_0.9M_sample_1k", "data_files": "am_0.9M_sample_1k.jsonl", "features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "info", "struct": [{"name": "answer_content", "dtype": "string"}, {"name": "reference_answer", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "test_case", "struct": [{"name": "test_code", "dtype": "string"}, {"name": "test_entry_point", "dtype": "string"}]}, {"name": "think_content", "dtype": "string"}]}, {"name": "role", "dtype": "string"}]}]}]} | false | null | 2025-03-30T01:30:08 | 117 | 9 | false | 53531c06634904118a2dcd83961918c4d69d1cdf | For more 
open-source datasets, models, and methodologies, please visit our GitHub repository.
AM-DeepSeek-R1-Distilled-1.4M is a large-scale general reasoning task dataset composed of
high-quality and challenging reasoning problems. These problems are collected from numerous
open-source datasets, semantically deduplicated, and cleaned to eliminate test set contamination.
All responses in the dataset are distilled from the reasoning model (mostly DeepSeek-R1) and have undergone
rigorous… See the full description on the dataset page: https://huggingface.co/datasets/a-m-team/AM-DeepSeek-R1-Distilled-1.4M. | 11,894 | 12,320 | [
"task_categories:text-generation",
"language:zh",
"language:en",
"license:cc-by-nc-4.0",
"size_categories:1M<n<10M",
"arxiv:2503.19633",
"region:us",
"code",
"math",
"reasoning",
"thinking",
"deepseek-r1",
"distill"
] | 2025-03-09T10:23:33 | null | null |
6532270e829e1dc2f293d6b8 | gaia-benchmark/GAIA | gaia-benchmark | {"language": ["en"], "pretty_name": "General AI Assistants Benchmark", "extra_gated_prompt": "To avoid contamination and data leakage, you agree to not reshare this dataset outside of a gated or private repository on the HF hub.", "extra_gated_fields": {"I agree to not reshare the GAIA submissions set according to the above conditions": "checkbox"}} | false | null | 2025-02-13T08:36:12 | 292 | 8 | false | 897f2dfbb5c952b5c3c1509e648381f9c7b70316 |
GAIA dataset
GAIA is a benchmark which aims at evaluating next-generation LLMs (LLMs with augmented capabilities due to added tooling, efficient prompting, access to search, etc).
We added gating to prevent bots from scraping the dataset. Please do not reshare the validation or test set in a crawlable format.
Data and leaderboard
GAIA is made of more than 450 non-trivial questions with an unambiguous answer, requiring different levels of tooling and autonomy to solve. It… See the full description on the dataset page: https://huggingface.co/datasets/gaia-benchmark/GAIA. | 10,622 | 41,576 | [
"language:en",
"arxiv:2311.12983",
"region:us"
] | 2023-10-20T07:06:54 | null | |
6797e648de960c48ff034e54 | open-thoughts/OpenThoughts-114k | open-thoughts | {"dataset_info": [{"config_name": "default", "features": [{"name": "system", "dtype": "string"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2635015668, "num_examples": 113957}], "download_size": 1078777193, "dataset_size": 2635015668}, {"config_name": "metadata", "features": [{"name": "problem", "dtype": "string"}, {"name": "deepseek_reasoning", "dtype": "string"}, {"name": "deepseek_solution", "dtype": "string"}, {"name": "ground_truth_solution", "dtype": "string"}, {"name": "domain", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "test_cases", "dtype": "string"}, {"name": "starter_code", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5525214077.699433, "num_examples": 113957}], "download_size": 2469729724, "dataset_size": 5525214077.699433}], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}, {"config_name": "metadata", "data_files": [{"split": "train", "path": "metadata/train-*"}]}], "tags": ["curator", "synthetic"], "license": "apache-2.0"} | false | null | 2025-04-06T23:31:24 | 688 | 8 | false | a5996b0064b4ddd42c6e9a7302eeec0618cb7b63 |
Open-Thoughts-114k
Open synthetic reasoning dataset with 114k high-quality examples covering math, science, code, and puzzles!
Inspect the content with rich formatting with Curator Viewer.
Available Subsets
default subset containing ready-to-train data used to finetune the OpenThinker-7B and OpenThinker-32B models:
ds = load_dataset("open-thoughts/OpenThoughts-114k", split="train")
metadata subset containing extra columns used in dataset construction:… See the full description on the dataset page: https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k. | 29,582 | 163,025 | [
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"curator",
"synthetic"
] | 2025-01-27T20:02:16 | null | null |
67a2bed1fab04a7b413c8ef1 | PrimeIntellect/verifiable-coding-problems | PrimeIntellect | {"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "task_type", "dtype": "string"}, {"name": "in_source_id", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "gold_standard_solution", "dtype": "string"}, {"name": "verification_info", "dtype": "string"}, {"name": "metadata", "dtype": "string"}, {"name": "problem_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21575365821, "num_examples": 144169}], "download_size": 10811965671, "dataset_size": 21575365821}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | false | null | 2025-02-06T21:49:12 | 28 | 8 | false | 45220c92768b1e401aadffbf26849b8d6cf39a36 |
SYNTHETIC-1
This is a subset of the task data used to construct SYNTHETIC-1. You can find the full collection here
| 1,383 | 4,011 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-02-05T01:28:49 | null | null |
67a53267784a1ad88b781d7f | CohereLabs/kaleidoscope | CohereLabs | {"dataset_info": {"features": [{"name": "language", "dtype": "string"}, {"name": "country", "dtype": "string"}, {"name": "file_name", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "level", "dtype": "string"}, {"name": "category_en", "dtype": "string"}, {"name": "category_original_lang", "dtype": "string"}, {"name": "original_question_num", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "options", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "image_png", "dtype": "string"}, {"name": "image_information", "dtype": "string"}, {"name": "image_type", "dtype": "string"}, {"name": "parallel_question_id", "dtype": "string"}, {"name": "image", "dtype": "string"}, {"name": "general_category_en", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15519985, "num_examples": 20911}], "download_size": 4835304, "dataset_size": 15519985}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "license": "apache-2.0", "language": ["ar", "bn", "hr", "nl", "en", "fr", "de", "hi", "hu", "lt", "ne", "fa", "pt", "ru", "sr", "es", "te", "uk"], "modality": ["text", "image"]} | false | null | 2025-04-10T12:17:21 | 8 | 8 | false | 6b9de3ab925e3e8540a1929337e62c44c4febe1b |
Kaleidoscope (18 Languages)
Dataset Description
The Kaleidoscope Benchmark is a
global collection of multiple-choice questions sourced from real-world exams,
with the goal of evaluating multimodal and multilingual understanding in VLMs.
The collected exams are in a Multiple-choice question answering (MCQA)
format which provides a structured framework for evaluation by prompting
models with predefined answer choices, closely mimicking conventional human testing… See the full description on the dataset page: https://huggingface.co/datasets/CohereLabs/kaleidoscope. | 50 | 156 | [
"language:ar",
"language:bn",
"language:hr",
"language:nl",
"language:en",
"language:fr",
"language:de",
"language:hi",
"language:hu",
"language:lt",
"language:ne",
"language:fa",
"language:pt",
"language:ru",
"language:sr",
"language:es",
"language:te",
"language:uk",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2504.07072",
"region:us"
] | 2025-02-06T22:06:31 | null | null |
67aa648e91e6f5eb545e854e | allenai/olmOCR-mix-0225 | allenai | {"license": "odc-by", "configs": [{"config_name": "00_documents", "data_files": [{"split": "train_s2pdf", "path": ["train-s2pdf.parquet"]}, {"split": "eval_s2pdf", "path": ["eval-s2pdf.parquet"]}]}, {"config_name": "01_books", "data_files": [{"split": "train_iabooks", "path": ["train-iabooks.parquet"]}, {"split": "eval_iabooks", "path": ["eval-iabooks.parquet"]}]}]} | false | null | 2025-02-25T09:36:14 | 117 | 8 | false | a602926844ed47c43439627fd16d3de45b39e494 |
olmOCR-mix-0225
olmOCR-mix-0225 is a dataset of ~250,000 PDF pages which have been OCRed into plain-text in a natural reading order using gpt-4o-2024-08-06 and a special
prompting strategy that preserves any born-digital content from each page.
This dataset can be used to train, fine-tune, or evaluate your own OCR document pipeline.
Quick links:
📃 Paper
🤗 Model
🛠️ Code
🎮 Demo
Data Mix
Table 1: Training set composition by source
Source
Unique… See the full description on the dataset page: https://huggingface.co/datasets/allenai/olmOCR-mix-0225. | 2,814 | 7,263 | [
"license:odc-by",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-02-10T20:41:50 | null | null |
67ea8831615fb44c0f3b62a4 | ByteDance-Seed/Multi-SWE-bench | ByteDance-Seed | {"license": "other", "task_categories": ["text-generation"], "tags": ["code"]} | false | null | 2025-04-13T02:55:31 | 16 | 8 | false | 68e134be1721821bd4f380d0ed3c14c34fc770cb |
👋 Overview
This repository contains the Multi-SWE-bench dataset, introduced in Multi-SWE-bench: A Multilingual Benchmark for Issue Resolving, to address the lack of multilingual benchmarks for evaluating LLMs in real-world code issue resolution.
Unlike existing Python-centric benchmarks (e.g., SWE-bench), this framework spans 7 languages (Java, TypeScript, JavaScript, Go, Rust, C, and C++) with 1,632 high-quality instances,
curated from 2,456 candidates by 68 expert annotators… See the full description on the dataset page: https://huggingface.co/datasets/ByteDance-Seed/Multi-SWE-bench. | 701 | 701 | [
"task_categories:text-generation",
"license:other",
"arxiv:2504.02605",
"region:us",
"code"
] | 2025-03-31T12:18:57 | null | null |
67ed3a6474b2ca50ce15839c | Rapidata/text-2-video-human-preferences-pika2.2 | Rapidata | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "video1", "dtype": "string"}, {"name": "video2", "dtype": "string"}, {"name": "weighted_results1_Alignment", "dtype": "float64"}, {"name": "weighted_results2_Alignment", "dtype": "float64"}, {"name": "detailedResults_Alignment", "dtype": "string"}, {"name": "weighted_results1_Coherence", "dtype": "float64"}, {"name": "weighted_results2_Coherence", "dtype": "float64"}, {"name": "detailedResults_Coherence", "dtype": "string"}, {"name": "weighted_results1_Preference", "dtype": "float64"}, {"name": "weighted_results2_Preference", "dtype": "float64"}, {"name": "detailedResults_Preference", "dtype": "string"}, {"name": "file_name1", "dtype": "string"}, {"name": "file_name2", "dtype": "string"}, {"name": "model1", "dtype": "string"}, {"name": "model2", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14265505, "num_examples": 1732}], "download_size": 1930994, "dataset_size": 14265505}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "license": "apache-2.0", "task_categories": ["video-classification", "text-to-video", "text-classification"], "language": ["en"], "tags": ["videos", "t2v", "text-2-video", "text2video", "text-to-video", "human", "annotations", "preferences", "likert", "coherence", "alignment", "wan", "wan 2.1", "veo2", "veo", "pikka", "alpha", "sora", "hunyuan"], "pretty_name": "Pika 2.2 Human Preferences", "size_categories": ["1K<n<10K"]} | false | null | 2025-04-08T12:00:02 | 8 | 8 | false | c4a85460413a0d99ce9b481cf4e68bbabbcb7a30 |
Rapidata Video Generation Pika 2.2 Human Preference
In this dataset, ~756k human responses from ~29k human annotators were collected to evaluate the Pika 2.2 video generation model on our benchmark. This dataset was collected in ~1 day total using the Rapidata Python API, accessible to anyone and ideal for large-scale data annotation.
Explore our latest model rankings on our website.
If you get value from this dataset and would like to see more in the future, please consider… See the full description on the dataset page: https://huggingface.co/datasets/Rapidata/text-2-video-human-preferences-pika2.2. | 101 | 101 | [
"task_categories:video-classification",
"task_categories:text-to-video",
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:tabular",
"modality:text",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"videos",
"t2v",
"text-2-video",
"text2video",
"text-to-video",
"human",
"annotations",
"preferences",
"likert",
"coherence",
"alignment",
"wan",
"wan 2.1",
"veo2",
"veo",
"pikka",
"alpha",
"sora",
"hunyuan"
] | 2025-04-02T13:23:48 | null | null |
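The pairwise-preference schema in the row above (`model1`/`model2` with `weighted_results1_Preference`/`weighted_results2_Preference` columns) lends itself to simple win-rate tallies. A sketch under that assumption, on toy matchups rather than the real annotations:

```python
from collections import Counter

def win_counts(rows):
    """Tally pairwise wins per model from weighted preference scores.

    Each row mirrors the dataset schema: model1/model2 plus
    weighted_results1_Preference / weighted_results2_Preference.
    Ties contribute to neither model.
    """
    wins = Counter()
    for r in rows:
        if r["weighted_results1_Preference"] > r["weighted_results2_Preference"]:
            wins[r["model1"]] += 1
        elif r["weighted_results2_Preference"] > r["weighted_results1_Preference"]:
            wins[r["model2"]] += 1
    return wins

# Toy matchups standing in for real annotation rows
rows = [
    {"model1": "pika2.2", "model2": "sora",
     "weighted_results1_Preference": 0.7, "weighted_results2_Preference": 0.3},
    {"model1": "veo2", "model2": "pika2.2",
     "weighted_results1_Preference": 0.2, "weighted_results2_Preference": 0.8},
]
print(win_counts(rows))
```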
661e02bd3f198d4337848286 | livecodebench/code_generation_lite | livecodebench | {"license": "cc", "tags": ["code", "code generation"], "pretty_name": "LiveCodeBench", "size_categories": ["n<1K"]} | false | null | 2025-01-14T18:03:07 | 34 | 7 | false | 0687ab61843a90a0cc864a2b67db729861cd0ae5 | LiveCodeBench is a temporally updating benchmark for code generation. Please check the homepage: https://livecodebench.github.io/. | 51,218 | 156,734 | [
"license:cc",
"size_categories:n<1K",
"arxiv:2403.07974",
"region:us",
"code",
"code generation"
] | 2024-04-16T04:46:53 | null | @article{jain2024livecodebench,
title={LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code},
author={Jain, Naman and Han, King and Gu, Alex and Li, Wen-Ding and Yan, Fanjia and Zhang, Tianjun and Wang, Sida and Solar-Lezama, Armando and Sen, Koushik and Stoica, Ion},
journal={arXiv preprint arXiv:2403.07974},
year={2024}
} |
667ee649a7d8b1deba8d4f4c | proj-persona/PersonaHub | proj-persona | {"license": "cc-by-nc-sa-4.0", "task_categories": ["text-generation", "text-classification", "token-classification", "fill-mask", "table-question-answering", "text2text-generation"], "language": ["en", "zh"], "tags": ["synthetic", "text", "math", "reasoning", "instruction", "tool"], "size_categories": ["100M<n<1B"], "configs": [{"config_name": "math", "data_files": "math.jsonl"}, {"config_name": "instruction", "data_files": "instruction.jsonl"}, {"config_name": "reasoning", "data_files": "reasoning.jsonl"}, {"config_name": "knowledge", "data_files": "knowledge.jsonl"}, {"config_name": "npc", "data_files": "npc.jsonl"}, {"config_name": "tool", "data_files": "tool.jsonl"}, {"config_name": "persona", "data_files": "persona.jsonl"}, {"config_name": "elite_persona", "data_files": [{"split": "train", "path": "ElitePersonas/*"}]}]} | false | null | 2025-03-04T22:01:42 | 557 | 7 | false | 600b0189027c804fc9373b4de4875c171656a4df |
Scaling Synthetic Data Creation with 1,000,000,000 Personas
This repo releases data introduced in our paper Scaling Synthetic Data Creation with 1,000,000,000 Personas:
We propose a novel persona-driven data synthesis methodology that leverages various perspectives within a large language model (LLM) to create diverse synthetic data. To fully exploit this methodology at scale, we introduce PERSONA HUB – a collection of 1 billion diverse personas automatically curated from web data.… See the full description on the dataset page: https://huggingface.co/datasets/proj-persona/PersonaHub. | 5,310 | 46,320 | [
"task_categories:text-generation",
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:fill-mask",
"task_categories:table-question-answering",
"task_categories:text2text-generation",
"language:en",
"language:zh",
"license:cc-by-nc-sa-4.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2406.20094",
"region:us",
"synthetic",
"text",
"math",
"reasoning",
"instruction",
"tool"
] | 2024-06-28T16:35:21 | null | null |
66a520e6387f62525b93f1bb | weaverbirdllm/famma | weaverbirdllm | {"language": ["en", "zh", "fr"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["question-answering", "multiple-choice"], "pretty_name": "FAMMA: A Benchmark for Financial Domain Multilingual Multimodal Question Answering", "tags": ["finance"], "dataset_info": {"features": [{"name": "idx", "dtype": "int32"}, {"name": "question_id", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "options", "sequence": "string"}, {"name": "image_1", "dtype": "image"}, {"name": "image_2", "dtype": "image"}, {"name": "image_3", "dtype": "image"}, {"name": "image_4", "dtype": "image"}, {"name": "image_5", "dtype": "image"}, {"name": "image_6", "dtype": "image"}, {"name": "image_7", "dtype": "image"}, {"name": "image_type", "dtype": "string"}, {"name": "answers", "dtype": "string"}, {"name": "explanation", "dtype": "string"}, {"name": "topic_difficulty", "dtype": "string"}, {"name": "question_type", "dtype": "string"}, {"name": "subfield", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "main_question_id", "dtype": "string"}, {"name": "sub_question_id", "dtype": "string"}, {"name": "is_arithmetic", "dtype": "int32"}, {"name": "ans_image_1", "dtype": "image"}, {"name": "ans_image_2", "dtype": "image"}, {"name": "ans_image_3", "dtype": "image"}, {"name": "ans_image_4", "dtype": "image"}, {"name": "ans_image_5", "dtype": "image"}, {"name": "ans_image_6", "dtype": "image"}, {"name": "release", "dtype": "string"}], "splits": [{"name": "release_basic", "num_bytes": 113235537.37, "num_examples": 1945}, {"name": "release_livepro", "num_bytes": 3265950, "num_examples": 103}, {"name": "release_basic_txt", "num_bytes": 1966706.375, "num_examples": 1945}, {"name": "release_livepro_txt", "num_bytes": 58596, "num_examples": 103}], "download_size": 94724026, "dataset_size": 118526789.745}, "configs": [{"config_name": 
"default", "data_files": [{"split": "release_basic", "path": "data/release_basic-*"}, {"split": "release_livepro", "path": "data/release_livepro-*"}, {"split": "release_basic_txt", "path": "data/release_basic_txt-*"}, {"split": "release_livepro_txt", "path": "data/release_livepro_txt-*"}]}]} | false | null | 2025-04-08T09:04:46 | 13 | 7 | false | a40b9ae8dd9545a82b2e901a0d20d3bd758455c2 |
Introduction
FAMMA is a multi-modal financial Q&A benchmark dataset. The questions encompass three heterogeneous image types - tables, charts and text & math screenshots - and span eight subfields in finance, comprehensively covering topics across major asset classes. Additionally, all the questions are categorized by three difficulty levels — easy, medium, and hard - and are available in three languages — English, Chinese, and French. Furthermore, the questions are divided into two… See the full description on the dataset page: https://huggingface.co/datasets/weaverbirdllm/famma. | 219 | 1,557 | [
"task_categories:question-answering",
"task_categories:multiple-choice",
"language:en",
"language:zh",
"language:fr",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2410.04526",
"region:us",
"finance"
] | 2024-07-27T16:31:34 | null | null |
67ae5cb70100bb7fb11fdb31 | getomni-ai/ocr-benchmark | getomni-ai | {"license": "mit", "size_categories": ["1K<n<10K"]} | false | null | 2025-02-21T06:34:31 | 49 | 7 | false | 4ed0d95271ca00107726230f7a0944ed9e90d897 |
OmniAI OCR Benchmark
A comprehensive benchmark that compares OCR and data extraction capabilities of different multimodal LLMs such as gpt-4o and gemini-2.0, evaluating both text and JSON extraction accuracy.
Benchmark Results (Feb 2025) | Source Code
| 3,590 | 5,204 | [
"license:mit",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | 2025-02-13T20:57:27 | null | null |
67d6f73ef789a7b68967193d | starriver030515/FUSION-Finetune-12M | starriver030515 | {"license": "apache-2.0", "task_categories": ["question-answering", "visual-question-answering", "table-question-answering"], "language": ["en", "zh"], "configs": [{"config_name": "ALLaVA", "data_files": [{"split": "train", "path": "examples/ALLaVA*"}]}, {"config_name": "ArxivQA", "data_files": [{"split": "train", "path": "examples/ArxivQA*"}]}, {"config_name": "CLEVR", "data_files": [{"split": "train", "path": "examples/CLEVR*"}]}, {"config_name": "ChartQA", "data_files": [{"split": "train", "path": "examples/ChartQA*"}]}, {"config_name": "DVQA", "data_files": [{"split": "train", "path": "examples/DVQA*"}]}, {"config_name": "DataEngine", "data_files": [{"split": "train", "path": "examples/DataEngine*"}]}, {"config_name": "DocMatix", "data_files": [{"split": "train", "path": "examples/DocMatix*"}]}, {"config_name": "GeoQA", "data_files": [{"split": "train", "path": "examples/GeoQA*"}]}, {"config_name": "LNQA", "data_files": [{"split": "train", "path": "examples/LNQA*"}]}, {"config_name": "LVISInstruct", "data_files": [{"split": "train", "path": "examples/LVISInstruct*"}]}, {"config_name": "MMathCoT", "data_files": [{"split": "train", "path": "examples/MMathCoT*"}]}, {"config_name": "MathVision", "data_files": [{"split": "train", "path": "examples/MathVision*"}]}, {"config_name": "MulBerry", "data_files": [{"split": "train", "path": "examples/MulBerry*"}]}, {"config_name": "PixmoAskModelAnything", "data_files": [{"split": "train", "path": "examples/PixmoAskModelAnything*"}]}, {"config_name": "PixmoCap", "data_files": [{"split": "train", "path": "examples/PixmoCap*"}]}, {"config_name": "PixmoCapQA", "data_files": [{"split": "train", "path": "examples/PixmoCapQA*"}]}, {"config_name": "PixmoDocChart", "data_files": [{"split": "train", "path": "examples/PixmoDocChart*"}]}, {"config_name": "PixmoDocDiagram", "data_files": [{"split": "train", "path": "examples/PixmoDocDiagram*"}]}, 
{"config_name": "PixmoDocTable", "data_files": [{"split": "train", "path": "examples/PixmoDocTable*"}]}, {"config_name": "SynthChoice", "data_files": [{"split": "train", "path": "examples/SynthChoice*"}]}, {"config_name": "SynthConvLong", "data_files": [{"split": "train", "path": "examples/SynthConvLong*"}]}, {"config_name": "SynthConvShort", "data_files": [{"split": "train", "path": "examples/SynthConvShort*"}]}, {"config_name": "SynthContrastLong", "data_files": [{"split": "train", "path": "examples/SynthContrastLong*"}]}, {"config_name": "SynthContrastShort", "data_files": [{"split": "train", "path": "examples/SynthContrastShort*"}]}, {"config_name": "SynthReasoning", "data_files": [{"split": "train", "path": "examples/SynthReasoning*"}]}, {"config_name": "SynthTextQA", "data_files": [{"split": "train", "path": "examples/SynthTextQA*"}]}, {"config_name": "SynthDog", "data_files": [{"split": "train", "path": "examples/SynthDog*"}]}], "dataset_info": [{"config_name": "ALLaVA", "features": [{"name": "id", "dtype": "string"}, {"name": "QA", "dtype": "string"}, {"name": "image", "dtype": "image"}]}, {"config_name": "ArxivQA", "features": [{"name": "id", "dtype": "string"}, {"name": "QA", "dtype": "string"}, {"name": "image", "dtype": "image"}]}, {"config_name": "CLEVR", "features": [{"name": "id", "dtype": "string"}, {"name": "QA", "dtype": "string"}, {"name": "image", "dtype": "image"}]}, {"config_name": "ChartQA", "features": [{"name": "id", "dtype": "string"}, {"name": "QA", "dtype": "string"}, {"name": "image", "dtype": "image"}]}, {"config_name": "DVQA", "features": [{"name": "id", "dtype": "string"}, {"name": "QA", "dtype": "string"}, {"name": "image", "dtype": "image"}]}, {"config_name": "DataEngine", "features": [{"name": "id", "dtype": "string"}, {"name": "QA", "dtype": "string"}, {"name": "image", "dtype": "image"}]}, {"config_name": "GeoQA", "features": [{"name": "id", "dtype": "string"}, {"name": "QA", "dtype": "string"}, {"name": "image", "dtype": 
"image"}]}, {"config_name": "LNQA", "features": [{"name": "id", "dtype": "string"}, {"name": "QA", "dtype": "string"}, {"name": "image", "dtype": "image"}]}, {"config_name": "LVISInstruct", "features": [{"name": "id", "dtype": "string"}, {"name": "QA", "dtype": "string"}, {"name": "image", "dtype": "image"}]}, {"config_name": "DocMatix", "features": [{"name": "id", "dtype": "string"}, {"name": "QA", "dtype": "string"}, {"name": "image", "dtype": "image"}]}, {"config_name": "MMathCoT", "features": [{"name": "id", "dtype": "string"}, {"name": "QA", "dtype": "string"}, {"name": "image", "dtype": "image"}]}, {"config_name": "MathVision", "features": [{"name": "id", "dtype": "string"}, {"name": "QA", "dtype": "string"}, {"name": "image", "dtype": "image"}]}, {"config_name": "MulBerry", "features": [{"name": "id", "dtype": "string"}, {"name": "QA", "dtype": "string"}, {"name": "image", "dtype": "image"}]}, {"config_name": "PixmoAskModelAnything", "features": [{"name": "id", "dtype": "string"}, {"name": "QA", "dtype": "string"}, {"name": "image", "dtype": "image"}]}, {"config_name": "PixmoCap", "features": [{"name": "id", "dtype": "string"}, {"name": "QA", "dtype": "string"}, {"name": "image", "dtype": "image"}]}, {"config_name": "PixmoCapQA", "features": [{"name": "id", "dtype": "string"}, {"name": "QA", "dtype": "string"}, {"name": "image", "dtype": "image"}]}, {"config_name": "PixmoDocChart", "features": [{"name": "id", "dtype": "string"}, {"name": "QA", "dtype": "string"}, {"name": "image", "dtype": "image"}]}, {"config_name": "PixmoDocDiagram", "features": [{"name": "id", "dtype": "string"}, {"name": "QA", "dtype": "string"}, {"name": "image", "dtype": "image"}]}, {"config_name": "PixmoDocTable", "features": [{"name": "id", "dtype": "string"}, {"name": "QA", "dtype": "string"}, {"name": "image", "dtype": "image"}]}, {"config_name": "SynthChoice", "features": [{"name": "id", "dtype": "string"}, {"name": "QA", "dtype": "string"}, {"name": "image", "dtype": "image"}]}, 
{"config_name": "SynthConvLong", "features": [{"name": "id", "dtype": "string"}, {"name": "QA", "dtype": "string"}, {"name": "image", "dtype": "image"}]}, {"config_name": "SynthConvShort", "features": [{"name": "id", "dtype": "string"}, {"name": "QA", "dtype": "string"}, {"name": "image", "dtype": "image"}]}, {"config_name": "SynthContrastLong", "features": [{"name": "id", "dtype": "string"}, {"name": "QA", "dtype": "string"}, {"name": "image", "dtype": "image"}]}, {"config_name": "SynthContrastShort", "features": [{"name": "id", "dtype": "string"}, {"name": "QA", "dtype": "string"}, {"name": "image", "dtype": "image"}]}, {"config_name": "SynthReasoning", "features": [{"name": "id", "dtype": "string"}, {"name": "QA", "dtype": "string"}, {"name": "image", "dtype": "image"}]}, {"config_name": "SynthTextQA", "features": [{"name": "id", "dtype": "string"}, {"name": "QA", "dtype": "string"}, {"name": "image", "dtype": "image"}]}, {"config_name": "SynthDog", "features": [{"name": "id", "dtype": "string"}, {"name": "QA", "dtype": "string"}, {"name": "image", "dtype": "image"}]}], "size_categories": ["10M<n<100M"]} | false | null | 2025-04-12T06:43:43 | 9 | 7 | false | 5e9ace80ee08f925bc979391b8493004eca45edb |
FUSION-12M Dataset
Please see paper & website for more information:
coming soon~
coming soon~
Overview
FUSION-12M is a large-scale, diverse multimodal instruction-tuning dataset used to train FUSION-3B and FUSION-8B models. It builds upon Cambrian-1 by significantly expanding both the quantity and variety of data, particularly in areas such as OCR, mathematical reasoning, and synthetic high-quality Q&A data. The goal is to provide a high-quality and high-volume… See the full description on the dataset page: https://huggingface.co/datasets/starriver030515/FUSION-Finetune-12M. | 738 | 738 | [
"task_categories:question-answering",
"task_categories:visual-question-answering",
"task_categories:table-question-answering",
"language:en",
"language:zh",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-03-16T16:07:26 | null | null |
67ea45bbcb39affecc10763e | virtuoussy/Multi-subject-RLVR | virtuoussy | {"license": "apache-2.0", "task_categories": ["question-answering"], "language": ["en"]} | false | null | 2025-04-02T10:29:40 | 51 | 7 | false | 5be8ffa52bf3ccbfe0d4f601ddee1183cb1be0ab | Multi-subject data for paper "Expanding RL with Verifiable Rewards Across Diverse Domains".
We use a multi-subject multiple-choice QA dataset ExamQA (Yu et al., 2021).
Originally written in Chinese, ExamQA covers at least 48 first-level subjects.
We remove the distractors and convert each instance into a free-form QA pair.
This dataset consists of 638k college-level instances, with both questions and objective answers written by domain experts for examination purposes.
We also use GPT-4o-mini… See the full description on the dataset page: https://huggingface.co/datasets/virtuoussy/Multi-subject-RLVR. | 959 | 959 | [
"task_categories:question-answering",
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2503.23829",
"region:us"
] | 2025-03-31T07:35:23 | null | null |
67f3e39c1ed031d0a1658cd5 | Rapidata/Reve-AI-Halfmoon_t2i_human_preference | Rapidata | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "image1", "dtype": "image"}, {"name": "image2", "dtype": "image"}, {"name": "model1", "dtype": "string"}, {"name": "model2", "dtype": "string"}, {"name": "weighted_results_image1_preference", "dtype": "float32"}, {"name": "weighted_results_image2_preference", "dtype": "float32"}, {"name": "detailed_results_preference", "dtype": "string"}, {"name": "weighted_results_image1_coherence", "dtype": "float32"}, {"name": "weighted_results_image2_coherence", "dtype": "float32"}, {"name": "detailed_results_coherence", "dtype": "string"}, {"name": "weighted_results_image1_alignment", "dtype": "float32"}, {"name": "weighted_results_image2_alignment", "dtype": "float32"}, {"name": "detailed_results_alignment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32462670063, "num_examples": 13000}], "download_size": 6565441182, "dataset_size": 32462670063}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "license": "cdla-permissive-2.0", "task_categories": ["text-to-image", "image-to-text", "image-classification", "reinforcement-learning"], "language": ["en"], "tags": ["Human", "Preference", "Coherence", "Alignment", "country", "language", "flux", "midjourney", "dalle3", "stabeldiffusion", "alignment", "flux1.1", "flux1", "imagen3", "aurora", "lumina", "recraft", "recraft v2", "ideogram", "frames", "reve ai", "halfmoon"], "size_categories": ["100K<n<1M"], "pretty_name": "Halfmoon vs. OpenAI 4o / Ideogram V2 / Recraft V2 / Lumina-15-2-25 / Frames-23-1-25 / Aurora / imagen-3 / Flux-1.1-pro / Flux-1-pro / Dalle-3 / Midjourney-5.2 / Stabel-Diffusion-3 - Human Preference Dataset"} | false | null | 2025-04-08T11:55:08 | 7 | 7 | false | 5903def06796885ec2c1278abeebfa774f901c30 |
Rapidata Reve AI Halfmoon Preference
This T2I dataset contains over 195k human responses from over 51k individual annotators, collected in just ~1 Day using the Rapidata Python API, accessible to anyone and ideal for large scale evaluation.
Evaluating Reve AI Halfmoon across three categories: preference, coherence, and alignment.
Explore our latest model rankings on our website.
If you get value from this dataset and would like to see more in the future, please consider liking… See the full description on the dataset page: https://huggingface.co/datasets/Rapidata/Reve-AI-Halfmoon_t2i_human_preference. | 44 | 44 | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"task_categories:image-classification",
"task_categories:reinforcement-learning",
"language:en",
"license:cdla-permissive-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"Human",
"Preference",
"Coherence",
"Alignment",
"country",
"language",
"flux",
"midjourney",
"dalle3",
"stabeldiffusion",
"alignment",
"flux1.1",
"flux1",
"imagen3",
"aurora",
"lumina",
"recraft",
"recraft v2",
"ideogram",
"frames",
"reve ai",
"halfmoon"
] | 2025-04-07T14:39:24 | null | null |

NEW Changes Feb 27th

- Added new fields on the `models` split: `downloadsAllTime`, `safetensors`, `gguf`
- Added new field on the `datasets` split: `downloadsAllTime`
- Added new split: `papers`, which is all of the Daily Papers

Updated Daily
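Note that in the rows above the `cardData` column is stored as a raw JSON string rather than a nested object. A minimal sketch of recovering structured metadata from it with Python's standard library (the sample string below is a shortened copy of the value from the `virtuoussy/Multi-subject-RLVR` row):

```python
import json

# Shortened copy of one row's cardData value, as it appears in the preview above.
card_data = (
    '{"license": "apache-2.0", '
    '"task_categories": ["question-answering"], '
    '"language": ["en"]}'
)

# The column holds a JSON string, so a single json.loads call
# turns it into a regular dict with the card's metadata fields.
card = json.loads(card_data)

print(card["license"])          # apache-2.0
print(card["task_categories"])  # ['question-answering']
```

The same parsing step applies to any row whose `cardData` is non-null; rows without a card keep the literal string `"{}"` or null, so production code should guard against missing values before indexing.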