Dataset Viewer (auto-converted to Parquet)
Column             Type             Range / Values
_id                large_string     lengths 24 to 24
id                 large_string     lengths 4 to 123
author             large_string     lengths 2 to 42
cardData           large_string     lengths 2 to 1.09M
disabled           bool             1 class
gated              large_string     3 values
lastModified       timestamp[us]    2021-02-05 16:03:35 to 2026-03-15 13:14:34
likes              int64            0 to 9.62k
trendingScore      float64          0 to 83
private            bool             1 class
sha                large_string     lengths 40 to 40
description        large_string     lengths 0 to 6.67k
downloads          int64            0 to 2.31M
downloadsAllTime   int64            0 to 143M
tags               list             lengths 1 to 7.92k
createdAt          timestamp[us]    2022-03-02 23:29:22 to 2026-03-15 13:14:34
paperswithcode_id  large_string     692 values
citation           large_string     lengths 0 to 10.7k
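Once the Parquet conversion is loaded, the schema above maps onto a plain tabular structure. A minimal sketch over a few illustrative rows (values copied from the records below, not fetched from the Hub):

```python
# Each row of the viewer's table is one dataset record with the columns
# listed in the schema above; here we mimic three rows and rank by likes.
rows = [
    {"id": "nohurry/Opus-4.6-Reasoning-3000x-filtered", "likes": 356, "downloadsAllTime": 4917},
    {"id": "stepfun-ai/Step-3.5-Flash-SFT", "likes": 73, "downloadsAllTime": 420},
    {"id": "HuggingFaceFW/finephrase", "likes": 75, "downloadsAllTime": 78527},
]
most_liked = max(rows, key=lambda r: r["likes"])
print(most_liked["id"])  # nohurry/Opus-4.6-Reasoning-3000x-filtered
```

Sorting on `downloadsAllTime` instead would rank the same rows by lifetime traffic.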

_id: 698b2c8b4c9e577aa3b1fa16
id: nohurry/Opus-4.6-Reasoning-3000x-filtered
author: nohurry
cardData:
{"license": "apache-2.0"}
disabled: false
gated: False
lastModified: 2026-02-10T13:06:40
likes: 356
trendingScore: 83
private: false
sha: 80e9226ea6168634ee2d6c010c3da619af8ad542
description: Filtered from https://huggingface.co/datasets/crownelius/Opus-4.6-Reasoning-3000x. The original dataset has 979 refusals; they are removed in this version.
downloads: 4,856
downloadsAllTime: 4,917
tags: [ "license:apache-2.0", "size_categories:1K<n<10K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "region:us" ]
createdAt: 2026-02-10T13:03:07
paperswithcode_id: null
citation: null

_id: 69b50e502b0587383a0e526b
id: stepfun-ai/Step-3.5-Flash-SFT
author: stepfun-ai
cardData:
{"license": ["apache-2.0", "cc-by-nc-2.0"], "pretty_name": "Step-3.5-Flash-SFT", "language": ["multilingual"], "task_categories": ["text-generation"], "tags": ["chat", "sft", "instruction-tuning", "reasoning", "code", "agent"]}
disabled: false
gated: False
lastModified: 2026-03-14T14:22:37
likes: 73
trendingScore: 73
private: false
sha: c994154a801557540c56af623f31b58c4770c652
description: Step-3.5-Flash-SFT Step-3.5-Flash-SFT is a general-domain supervised fine-tuning release for chat models. This repository keeps the full training interface in one place: json/: canonical raw training data tokenizers/: tokenizer snapshots for Step-3.5-Flash and Qwen3, released to preserve chat-template alignment compiled/: tokenizer-specific compiled shards for StepTronOSS training Data Format Each raw shard is a JSON file whose top level is a list of examples. Each… See the full description on the dataset page: https://huggingface.co/datasets/stepfun-ai/Step-3.5-Flash-SFT.
downloads: 420
downloadsAllTime: 420
tags: [ "task_categories:text-generation", "language:multilingual", "license:apache-2.0", "license:cc-by-nc-2.0", "region:us", "chat", "sft", "instruction-tuning", "reasoning", "code", "agent" ]
createdAt: 2026-03-14T07:29:20
paperswithcode_id: null
citation: null
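The Step-3.5-Flash-SFT card's Data Format note says each raw shard under json/ is a JSON file whose top level is a list of examples. A minimal sketch of reading such a shard; the inner "messages" structure is a hypothetical illustration, not confirmed by the excerpt:

```python
import io
import json

# Each raw shard: a JSON file whose top level is a list of examples.
# The per-example fields below are assumed for illustration only.
shard_text = json.dumps([
    {"messages": [{"role": "user", "content": "hi"},
                  {"role": "assistant", "content": "hello"}]},
])
examples = json.load(io.StringIO(shard_text))
print(type(examples).__name__, len(examples))  # list 1
```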

_id: 699250f08be5bf8321aeb29e
id: HuggingFaceFW/finephrase
author: HuggingFaceFW
cardData:
{"language": ["en"], "license": "odc-by", "tags": ["SmolLM2-1.7B-Instruct", "fineweb-edu", "synthetic"], "annotations_creators": ["machine-generated"], "language_creators": ["found"], "pretty_name": "HuggingFaceFW/finephrase", "size_categories": ["n>1M"], "source_datasets": ["HuggingFaceFW/fineweb-edu/sample-350BT"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "configs": [{"config_name": "all", "data_files": [{"split": "train", "path": ["faq/**/*.parquet", "math/**/*.parquet", "table/**/*.parquet", "tutorial/**/*.parquet"]}]}, {"config_name": "faq", "data_files": [{"split": "train", "path": "faq/**/*.parquet"}]}, {"config_name": "math", "data_files": [{"split": "train", "path": "math/**/*.parquet"}]}, {"config_name": "table", "data_files": [{"split": "train", "path": "table/**/*.parquet"}]}, {"config_name": "tutorial", "data_files": [{"split": "train", "path": "tutorial/**/*.parquet"}]}], "train-eval-index": [{"config": "all", "task": "text-generation", "task_id": "language-modeling", "splits": {"train_split": "train", "eval_split": null}, "col_mapping": {"text": "text"}}]}
disabled: false
gated: False
lastModified: 2026-03-07T19:16:51
likes: 75
trendingScore: 68
private: false
sha: a9046961aa1360172836a82f63563db9b44993d3
description: Dataset Card for HuggingFaceFW/finephrase Dataset Summary Synthetic data generated by DataTrove: Model: HuggingFaceTB/SmolLM2-1.7B-Instruct (main) Source dataset: HuggingFaceFW/fineweb-edu, config sample-350BT, split train Generation config: temperature=1.0, top_p=1.0, top_k=50, max_tokens=2048, model_max_context=8192 Speculative decoding: {"method":"suffix","num_speculative_tokens":32} System prompt: None Input column: text Prompt families: faq prompt Rewrite the… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceFW/finephrase.
downloads: 78,527
downloadsAllTime: 78,527
tags: [ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:machine-generated", "language_creators:found", "source_datasets:HuggingFaceFW/fineweb-edu/sample-350BT", "language:en", "license:odc-by", "size_categories:1B<n<10B", "modality:tabular", "modality:text", "regio...
createdAt: 2026-02-15T23:04:16
paperswithcode_id: null
citation: null

_id: 69b27063693ba5b211bd0a99
id: markov-ai/computer-use-large
author: markov-ai
cardData:
{"license": "cc-by-4.0", "task_categories": ["video-classification", "robotics"], "language": ["en"], "tags": ["screen-recording", "computer-use", "software-tutorials", "gui", "desktop"], "size_categories": ["10K<n<100K"], "configs": [{"config_name": "autocad", "data_files": [{"split": "train", "path": ["data/autocad/*", "data/autocad_2/*"]}]}, {"config_name": "blender", "data_files": [{"split": "train", "path": ["data/blender/*", "data/blender_2/*"]}]}, {"config_name": "excel", "data_files": [{"split": "train", "path": "data/excel/*"}]}, {"config_name": "photoshop", "data_files": [{"split": "train", "path": ["data/photoshop/*", "data/photoshop_2/*"]}]}, {"config_name": "salesforce", "data_files": [{"split": "train", "path": "data/salesforce/*"}]}, {"config_name": "vscode", "data_files": [{"split": "train", "path": "data/vscode/*"}]}]}
disabled: false
gated: False
lastModified: 2026-03-15T10:57:15
likes: 63
trendingScore: 63
private: false
sha: 0e8070fd91da79a4e734bfb4c912602c68ce8e45
description: Computer Use Large A large-scale dataset of 48,478 screen recording videos (~12,300 hours) of professional software being used, sourced from the internet. All videos have been trimmed to remove non-screen-recording content (intros, outros, talking heads, transitions) and audio has been stripped. Dataset Summary Category Videos Hours AutoCAD 10,059 2,149 Blender 11,493 3,624 Excel 8,111 2,002 Photoshop 10,704 2,060 Salesforce 7,807 2,336 VS Code 304… See the full description on the dataset page: https://huggingface.co/datasets/markov-ai/computer-use-large.
downloads: 45,733
downloadsAllTime: 45,733
tags: [ "task_categories:video-classification", "task_categories:robotics", "language:en", "license:cc-by-4.0", "size_categories:10K<n<100K", "region:us", "screen-recording", "computer-use", "software-tutorials", "gui", "desktop" ]
createdAt: 2026-03-12T07:50:59
paperswithcode_id: null
citation: null

_id: 698e4ad0913c4d1f4a64479a
id: Crownelius/Opus-4.6-Reasoning-3300x
author: Crownelius
cardData:
{"license": "apache-2.0"}
disabled: false
gated: False
lastModified: 2026-03-15T07:02:24
likes: 168
trendingScore: 51
private: false
sha: 007a7feac2f4960bf59151945b39484d8748c150
description: Opus-4.6-Reasoning-3000x (Cleaned) This dataset has been automatically cleaned to remove: Empty or missing responses Responses shorter than 10 characters Refusal responses ("problem is incomplete", "cannot solve", etc.) Responses with no substantive content Responses that just echo the problem Cleaning Report Original rows: 3,305 Clean rows: 2,160 Removed: 1,145 (34.6%) Columns: ['id', 'problem', 'thinking', 'solution', 'difficulty', 'category', 'timestamp', 'hash']… See the full description on the dataset page: https://huggingface.co/datasets/Crownelius/Opus-4.6-Reasoning-3300x.
downloads: 2,163
downloadsAllTime: 2,180
tags: [ "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "region:us" ]
createdAt: 2026-02-12T21:49:04
paperswithcode_id: null
citation: null
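The Crownelius card lists concrete cleaning rules (drop empty/short responses, refusals, and responses that merely echo the problem). A minimal sketch approximating those rules; this is not the author's actual script, and the refusal marker list is only the two phrases quoted on the card:

```python
REFUSAL_MARKERS = ("problem is incomplete", "cannot solve")

def keep(row):
    """Approximate the cleaning rules described on the card:
    drop short/empty responses, refusals, and problem echoes."""
    resp = (row.get("solution") or "").strip()
    if len(resp) < 10:                 # empty or under 10 characters
        return False
    low = resp.lower()
    if any(m in low for m in REFUSAL_MARKERS):  # refusal phrases
        return False
    if resp == row.get("problem", "").strip():  # echoes the problem
        return False
    return True

rows = [
    {"problem": "2+2?", "solution": "The answer is 4 because 2+2=4."},
    {"problem": "x?", "solution": "I cannot solve this."},
    {"problem": "echo me", "solution": "echo me"},
]
print([keep(r) for r in rows])  # [True, False, False]
```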

_id: 6988f3d2dd11cee339d8c40b
id: karpathy/tinystories-gpt4-clean
author: karpathy
cardData:
{"license": "cdla-sharing-1.0"}
disabled: false
gated: False
lastModified: 2026-02-08T21:07:28
likes: 51
trendingScore: 44
private: false
sha: 0397e27157956705a0260709da3095bb9c43d6a7
description: TinyStories GPT-4 Clean A cleaned subset of the TinyStories dataset (Eldan & Li, 2023), keeping only GPT-4-generated stories. Adapted from this thread that pointed out many issues with the original data and proposed a cleaning process. Overview This cleaned dataset contains: Stat Value Stories 2,732,634 Total characters ~2.19B Min doc length 115 chars Max doc length 4,433 chars Median doc length 721 chars Unique characters 74 (ASCII only) Duplicates… See the full description on the dataset page: https://huggingface.co/datasets/karpathy/tinystories-gpt4-clean.
downloads: 1,869
downloadsAllTime: 1,896
tags: [ "license:cdla-sharing-1.0", "size_categories:1M<n<10M", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "arxiv:2305.07759", "region:us" ]
createdAt: 2026-02-08T20:36:34
paperswithcode_id: null
citation: null

_id: 69a5b45a59ca5dda6cff15a9
id: TuringEnterprises/Open-RL
author: TuringEnterprises
cardData:
{"license": "mit", "language": ["en"], "tags": ["chemistry", "physics", "math", "biology", "science"], "pretty_name": "open-rl", "size_categories": ["n<1K"], "task_categories": ["question-answering"]}
disabled: false
gated: False
lastModified: 2026-03-04T11:24:40
likes: 175
trendingScore: 37
private: false
sha: cef3b89150d73474ec6b9203897ce2d8d2dcd2bf
description: Open-RL Dataset Summary This dataset contains self-contained, verifiable, and unambiguous STEM reasoning problems across Physics, Mathematics, Biology, and Chemistry. Each problem: Requires multi-step reasoning Involves symbolic manipulation and/or numerical computation Has a deterministic, objectively verifiable final answer The problems were evaluated against contemporary large language models. Observed pass rates indicate that the tasks are non-trivial yet… See the full description on the dataset page: https://huggingface.co/datasets/TuringEnterprises/Open-RL.
downloads: 12,249
downloadsAllTime: 12,249
tags: [ "task_categories:question-answering", "language:en", "license:mit", "size_categories:n<1K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "region:us", "chemistry", "physics", "math", "biology", "science" ]
createdAt: 2026-03-02T16:01:30
paperswithcode_id: null
citation: null

_id: 69afdb9aea6ad7cbfa28b5fe
id: ginigen-ai/smol-worldcup
author: ginigen-ai
cardData:
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "shift_axis", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "subcategory", "dtype": "string"}, {"name": "difficulty", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "answer_key", "dtype": "string"}, {"name": "explanation", "dtype": "string"}, {"name": "grading_rule", "dtype": "string"}, {"name": "auto_grade", "dtype": "string"}, {"name": "max_score", "dtype": "int64"}, {"name": "anchor", "dtype": "bool"}, {"name": "season", "dtype": "int64"}, {"name": "version", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "language_name", "dtype": "string"}], "splits": [{"name": "train", "num_examples": 125}]}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "smol_worldcup_s1.jsonl"}]}], "license": "apache-2.0", "task_categories": ["text-generation", "question-answering"], "language": ["en", "ko", "ar", "pt", "tr", "bn", "th"], "tags": ["benchmark", "small-language-models", "SHIFT-framework", "WCS", "honesty", "hallucination-detection", "smol-ai-worldcup", "evaluation", "multilingual", "edge-ai", "PIR"], "pretty_name": "\ud83c\udfdf\ufe0f Smol AI WorldCup \u2014 SHIFT Benchmark", "size_categories": ["n<1K"], "models": ["meta-llama/Llama-3.2-1B-Instruct", "Qwen/Qwen3-1.7B", "openai/gpt-oss-20b", "CohereLabs/tiny-aya-fire", "Qwen/Qwen3-4B-Instruct-2507", "google/gemma-3n-E4B-it", "zai-org/GLM-4.7-Flash", "mistralai/Mistral-7B-Instruct-v0.2", "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "Qwen/Qwen3-8B", "meta-llama/Llama-3.1-8B-Instruct", "nvidia/Llama-3.1-Nemotron-Nano-8B-v1", "Qwen/Qwen3.5-9B", "allenai/Olmo-3-7B-Instruct", "google/gemma-3-12b-it", "deepseek-ai/DeepSeek-R1-Distill-Qwen-14B", "Qwen/Qwen3.5-35B-A3B", "meta-llama/Llama-4-Scout-17B-16E-Instruct"]}
disabled: false
gated: False
lastModified: 2026-03-10T14:47:44
likes: 32
trendingScore: 32
private: false
sha: a304802ece2692d2beb3b3a62bf67c50b7f3c60b
description: 🏟️ Smol AI WorldCup — SHIFT Benchmark The world's first 5-axis evaluation framework for small language models. Not just "how smart?" — but "how honest? how fast? how small? how efficient?" 🏟️ Leaderboard huggingface.co/spaces/ginigen-ai/smol-worldcup 📊 Dataset huggingface.co/datasets/ginigen-ai/smol-worldcup 🏅 ALL Bench huggingface.co/spaces/FINAL-Bench/all-bench-leaderboard 🏆 Official Ranking: WCS (WorldCup Score) WCS = √( SHIFT × PIR_norm )… See the full description on the dataset page: https://huggingface.co/datasets/ginigen-ai/smol-worldcup.
downloads: 1,514
downloadsAllTime: 1,514
tags: [ "task_categories:text-generation", "task_categories:question-answering", "language:en", "language:ko", "language:ar", "language:pt", "language:tr", "language:bn", "language:th", "license:apache-2.0", "size_categories:n<1K", "format:json", "modality:tabular", "modality:text", "library:dat...
createdAt: 2026-03-10T08:51:38
paperswithcode_id: null
citation: null
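The smol-worldcup card defines its official ranking as WCS = √(SHIFT × PIR_norm), i.e. the geometric mean of the SHIFT score and the normalized PIR. A quick sketch; the input values below are made up for illustration:

```python
import math

def wcs(shift: float, pir_norm: float) -> float:
    """WorldCup Score as given on the card: the geometric mean
    of the SHIFT score and the normalized PIR."""
    return math.sqrt(shift * pir_norm)

# Illustrative inputs only (not real benchmark scores):
print(round(wcs(0.81, 0.64), 3))  # 0.72
```

A geometric mean rewards models that score well on both axes; a zero on either axis zeroes the whole WCS.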

_id: 69a5c92559ca5dda6c00b2f8
id: Jackrong/Qwen3.5-reasoning-700x
author: Jackrong
cardData:
{"license": "apache-2.0", "language": ["en"], "tags": ["reasoning", "math", "distillation", "instruction-tuning", "chain-of-thought", "qwen", "qwen3.5"], "task_categories": ["question-answering"], "size_categories": ["n<1K"]}
disabled: false
gated: False
lastModified: 2026-03-02T17:44:52
likes: 41
trendingScore: 30
private: false
sha: 1b6c703da5319ded200d9e7c91e0b57b4a7c922c
description: Dataset Card (Qwen3.5-reasoning-700x) Dataset Summary Qwen3.5-reasoning-700x is a high-quality distilled dataset. This dataset uses the high-quality instructions constructed by Alibaba-Superior-Reasoning-Stage2 as the seed question set. By calling the latest Qwen3.5-27B full-parameter model on the Alibaba Cloud DashScope platform as the teacher model, it generates high-quality responses featuring long-text reasoning processes (Chain-of-Thought). It covers several major… See the full description on the dataset page: https://huggingface.co/datasets/Jackrong/Qwen3.5-reasoning-700x.
downloads: 756
downloadsAllTime: 756
tags: [ "task_categories:question-answering", "language:en", "license:apache-2.0", "size_categories:n<1K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "region:us", "reasoning", "math", "distillation", "instruction-tuning", "cha...
createdAt: 2026-03-02T17:30:13
paperswithcode_id: null
citation: null

_id: 696e2528357a40707550b1c4
id: google/WaxalNLP
author: google
cardData:
{"language_creators": ["creator_1"], "language": ["ach", "aka", "amh", "bau", "dag", "dga", "ewe", "fat", "ful", "hau", "ibo", "kik", "kpo", "lin", "lug", "luo", "mas", "mlg", "nyn", "orm", "pcm", "sid", "sna", "sog", "swa", "tir", "twi", "wal", "yor"], "license": ["cc-by-sa-4.0", "cc-by-4.0"], "multilinguality": ["multilingual"], "source_datasets": ["UGSpeechData", "DigitalUmuganda/AfriVoice", "original"], "task_categories": ["automatic-speech-recognition", "text-to-speech"], "pretty_name": "Waxal NLP Datasets", "arxiv": 2602.02734, "annotation_creators": ["human-annotated", "crowdsourced"], "tags": ["audio", "automatic-speech-recognition", "text-to-speech"], "configs": [{"config_name": "ach_asr", "data_files": [{"split": "train", "path": "data/ASR/ach/ach-train-*"}, {"split": "validation", "path": "data/ASR/ach/ach-validation-*"}, {"split": "test", "path": "data/ASR/ach/ach-test-*"}, {"split": "unlabeled", "path": "data/ASR/ach/ach-unlabeled-*"}]}, {"config_name": "ach_tts", "data_files": [{"split": "train", "path": "data/TTS/ach/ach-train-*"}, {"split": "validation", "path": "data/TTS/ach/ach-validation-*"}, {"split": "test", "path": "data/TTS/ach/ach-test-*"}]}, {"config_name": "aka_asr", "data_files": [{"split": "train", "path": "data/ASR/aka/aka-train-*"}, {"split": "validation", "path": "data/ASR/aka/aka-validation-*"}, {"split": "test", "path": "data/ASR/aka/aka-test-*"}, {"split": "unlabeled", "path": "data/ASR/aka/aka-unlabeled-*"}]}, {"config_name": "amh_asr", "data_files": [{"split": "train", "path": "data/ASR/amh/amh-train-*"}, {"split": "validation", "path": "data/ASR/amh/amh-validation-*"}, {"split": "test", "path": "data/ASR/amh/amh-test-*"}, {"split": "unlabeled", "path": "data/ASR/amh/amh-unlabeled-*"}]}, {"config_name": "bau_tts", "data_files": [{"split": "train", "path": "data/TTS/bau/bau-train-*"}, {"split": "validation", "path": "data/TTS/bau/bau-validation-*"}, {"split": "test", "path": "data/TTS/bau/bau-test-*"}]}, {"config_name": "dag_asr", 
"data_files": [{"split": "train", "path": "data/ASR/dag/dag-train-*"}, {"split": "validation", "path": "data/ASR/dag/dag-validation-*"}, {"split": "test", "path": "data/ASR/dag/dag-test-*"}, {"split": "unlabeled", "path": "data/ASR/dag/dag-unlabeled-*"}]}, {"config_name": "dga_asr", "data_files": [{"split": "train", "path": "data/ASR/dga/dga-train-*"}, {"split": "validation", "path": "data/ASR/dga/dga-validation-*"}, {"split": "test", "path": "data/ASR/dga/dga-test-*"}, {"split": "unlabeled", "path": "data/ASR/dga/dga-unlabeled-*"}]}, {"config_name": "ewe_asr", "data_files": [{"split": "train", "path": "data/ASR/ewe/ewe-train-*"}, {"split": "validation", "path": "data/ASR/ewe/ewe-validation-*"}, {"split": "test", "path": "data/ASR/ewe/ewe-test-*"}, {"split": "unlabeled", "path": "data/ASR/ewe/ewe-unlabeled-*"}]}, {"config_name": "ewe_tts", "data_files": [{"split": "train", "path": "data/TTS/ewe/ewe-train-*"}, {"split": "validation", "path": "data/TTS/ewe/ewe-validation-*"}, {"split": "test", "path": "data/TTS/ewe/ewe-test-*"}]}, {"config_name": "fat_tts", "data_files": [{"split": "train", "path": "data/TTS/fat/fat-train-*"}, {"split": "validation", "path": "data/TTS/fat/fat-validation-*"}, {"split": "test", "path": "data/TTS/fat/fat-test-*"}]}, {"config_name": "ful_asr", "data_files": [{"split": "train", "path": "data/ASR/ful/ful-train-*"}, {"split": "validation", "path": "data/ASR/ful/ful-validation-*"}, {"split": "test", "path": "data/ASR/ful/ful-test-*"}, {"split": "unlabeled", "path": "data/ASR/ful/ful-unlabeled-*"}]}, {"config_name": "ful_tts", "data_files": [{"split": "train", "path": "data/TTS/ful/ful-train-*"}, {"split": "validation", "path": "data/TTS/ful/ful-validation-*"}, {"split": "test", "path": "data/TTS/ful/ful-test-*"}]}, {"config_name": "hau_tts", "data_files": [{"split": "train", "path": "data/TTS/hau/hau-train-*"}, {"split": "validation", "path": "data/TTS/hau/hau-validation-*"}, {"split": "test", "path": "data/TTS/hau/hau-test-*"}]}, 
{"config_name": "ibo_tts", "data_files": [{"split": "train", "path": "data/TTS/ibo/ibo-train-*"}, {"split": "validation", "path": "data/TTS/ibo/ibo-validation-*"}, {"split": "test", "path": "data/TTS/ibo/ibo-test-*"}]}, {"config_name": "kik_tts", "data_files": [{"split": "train", "path": "data/TTS/kik/kik-train-*"}, {"split": "validation", "path": "data/TTS/kik/kik-validation-*"}, {"split": "test", "path": "data/TTS/kik/kik-test-*"}]}, {"config_name": "kpo_asr", "data_files": [{"split": "train", "path": "data/ASR/kpo/kpo-train-*"}, {"split": "validation", "path": "data/ASR/kpo/kpo-validation-*"}, {"split": "test", "path": "data/ASR/kpo/kpo-test-*"}, {"split": "unlabeled", "path": "data/ASR/kpo/kpo-unlabeled-*"}]}, {"config_name": "lin_asr", "data_files": [{"split": "train", "path": "data/ASR/lin/lin-train-*"}, {"split": "validation", "path": "data/ASR/lin/lin-validation-*"}, {"split": "test", "path": "data/ASR/lin/lin-test-*"}, {"split": "unlabeled", "path": "data/ASR/lin/lin-unlabeled-*"}]}, {"config_name": "lug_asr", "data_files": [{"split": "train", "path": "data/ASR/lug/lug-train-*"}, {"split": "validation", "path": "data/ASR/lug/lug-validation-*"}, {"split": "test", "path": "data/ASR/lug/lug-test-*"}, {"split": "unlabeled", "path": "data/ASR/lug/lug-unlabeled-*"}]}, {"config_name": "lug_tts", "data_files": [{"split": "train", "path": "data/TTS/lug/lug-train-*"}, {"split": "validation", "path": "data/TTS/lug/lug-validation-*"}, {"split": "test", "path": "data/TTS/lug/lug-test-*"}]}, {"config_name": "luo_tts", "data_files": [{"split": "train", "path": "data/TTS/luo/luo-train-*"}, {"split": "validation", "path": "data/TTS/luo/luo-validation-*"}, {"split": "test", "path": "data/TTS/luo/luo-test-*"}]}, {"config_name": "mas_asr", "data_files": [{"split": "train", "path": "data/ASR/mas/mas-train-*"}, {"split": "validation", "path": "data/ASR/mas/mas-validation-*"}, {"split": "test", "path": "data/ASR/mas/mas-test-*"}, {"split": "unlabeled", "path": 
"data/ASR/mas/mas-unlabeled-*"}]}, {"config_name": "mlg_asr", "data_files": [{"split": "train", "path": "data/ASR/mlg/mlg-train-*"}, {"split": "validation", "path": "data/ASR/mlg/mlg-validation-*"}, {"split": "test", "path": "data/ASR/mlg/mlg-test-*"}, {"split": "unlabeled", "path": "data/ASR/mlg/mlg-unlabeled-*"}]}, {"config_name": "nyn_asr", "data_files": [{"split": "train", "path": "data/ASR/nyn/nyn-train-*"}, {"split": "validation", "path": "data/ASR/nyn/nyn-validation-*"}, {"split": "test", "path": "data/ASR/nyn/nyn-test-*"}, {"split": "unlabeled", "path": "data/ASR/nyn/nyn-unlabeled-*"}]}, {"config_name": "nyn_tts", "data_files": [{"split": "train", "path": "data/TTS/nyn/nyn-train-*"}, {"split": "validation", "path": "data/TTS/nyn/nyn-validation-*"}, {"split": "test", "path": "data/TTS/nyn/nyn-test-*"}]}, {"config_name": "orm_asr", "data_files": [{"split": "train", "path": "data/ASR/orm/orm-train-*"}, {"split": "validation", "path": "data/ASR/orm/orm-validation-*"}, {"split": "test", "path": "data/ASR/orm/orm-test-*"}, {"split": "unlabeled", "path": "data/ASR/orm/orm-unlabeled-*"}]}, {"config_name": "pcm_tts", "data_files": [{"split": "train", "path": "data/TTS/pcm/pcm-train-*"}, {"split": "validation", "path": "data/TTS/pcm/pcm-validation-*"}, {"split": "test", "path": "data/TTS/pcm/pcm-test-*"}]}, {"config_name": "sid_asr", "data_files": [{"split": "train", "path": "data/ASR/sid/sid-train-*"}, {"split": "validation", "path": "data/ASR/sid/sid-validation-*"}, {"split": "test", "path": "data/ASR/sid/sid-test-*"}, {"split": "unlabeled", "path": "data/ASR/sid/sid-unlabeled-*"}]}, {"config_name": "sna_asr", "data_files": [{"split": "train", "path": "data/ASR/sna/sna-train-*"}, {"split": "validation", "path": "data/ASR/sna/sna-validation-*"}, {"split": "test", "path": "data/ASR/sna/sna-test-*"}, {"split": "unlabeled", "path": "data/ASR/sna/sna-unlabeled-*"}]}, {"config_name": "tir_asr", "data_files": [{"split": "train", "path": "data/ASR/tir/tir-train-*"}, 
{"split": "validation", "path": "data/ASR/tir/tir-validation-*"}, {"split": "test", "path": "data/ASR/tir/tir-test-*"}, {"split": "unlabeled", "path": "data/ASR/tir/tir-unlabeled-*"}]}, {"config_name": "sog_asr", "data_files": [{"split": "train", "path": "data/ASR/sog/sog-train-*"}, {"split": "validation", "path": "data/ASR/sog/sog-validation-*"}, {"split": "test", "path": "data/ASR/sog/sog-test-*"}, {"split": "unlabeled", "path": "data/ASR/sog/sog-unlabeled-*"}]}, {"config_name": "swa_tts", "data_files": [{"split": "train", "path": "data/TTS/swa/swa-train-*"}, {"split": "validation", "path": "data/TTS/swa/swa-validation-*"}, {"split": "test", "path": "data/TTS/swa/swa-test-*"}]}, {"config_name": "twi_tts", "data_files": [{"split": "train", "path": "data/TTS/twi/twi-train-*"}, {"split": "validation", "path": "data/TTS/twi/twi-validation-*"}, {"split": "test", "path": "data/TTS/twi/twi-test-*"}]}, {"config_name": "yor_tts", "data_files": [{"split": "train", "path": "data/TTS/yor/yor-train-*"}, {"split": "validation", "path": "data/TTS/yor/yor-validation-*"}, {"split": "test", "path": "data/TTS/yor/yor-test-*"}]}, {"config_name": "wal_asr", "data_files": [{"split": "train", "path": "data/ASR/wal/wal-train-*"}, {"split": "validation", "path": "data/ASR/wal/wal-validation-*"}, {"split": "test", "path": "data/ASR/wal/wal-test-*"}, {"split": "unlabeled", "path": "data/ASR/wal/wal-unlabeled-*"}]}], "dataset_info": [{"config_name": "ach_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "ach_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, 
{"config_name": "aka_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "amh_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "bau_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "dag_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "dga_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "ewe_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "ewe_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "fat_tts", "features": 
[{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "ful_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "fuf_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "ful_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "hau_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "ibo_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "kik_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "kpo_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, 
{"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "lin_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "lug_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "lug_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "luo_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "mas_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "mlg_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "nyn_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": 
"string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "nyn_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "orm_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "pcm_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "sid_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "sna_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "sog_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "swa_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, 
{"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "tir_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "twi_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "wal_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "yor_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}]}
false
False
2026-03-13T11:58:41
200
29
false
beab143ae6d8a5e054281241afd76565ecb57e03
Waxal Datasets The WAXAL dataset is a large-scale multilingual speech corpus for African languages, introduced in the paper WAXAL: A Large-Scale Multilingual African Language Speech Corpus. Dataset Description The Waxal project provides datasets for both Automatic Speech Recognition (ASR) and Text-to-Speech (TTS) for African languages. The dataset was created and released to facilitate research that improves the accuracy and fluency of speech and language… See the full description on the dataset page: https://huggingface.co/datasets/google/WaxalNLP.
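The config names in the metadata above pair a language code with a task suffix (e.g. `orm_asr`, `yor_tts`). A minimal sketch of how a consumer might group configs by task, assuming every config follows that `<language>_<task>` naming pattern (the config list below is a small hand-picked subset, not the full corpus):

```python
# Illustrative: derive (language, task) pairs from Waxal config names.
# Assumes every config follows the "<language_code>_<task>" pattern seen
# in the metadata (e.g. "nyn_tts", "orm_asr"); verify against the card.
config_names = ["nyn_tts", "orm_asr", "pcm_tts", "sid_asr", "swa_tts", "yor_tts"]

def split_config(name: str) -> tuple[str, str]:
    """Split a config name into (language code, task)."""
    lang, task = name.rsplit("_", 1)
    return lang, task

# Group the language codes by task (asr vs. tts).
by_task: dict[str, list[str]] = {}
for cfg in config_names:
    lang, task = split_config(cfg)
    by_task.setdefault(task, []).append(lang)

print(by_task)  # {'tts': ['nyn', 'pcm', 'swa', 'yor'], 'asr': ['orm', 'sid']}
```

With the `datasets` library, an individual config would then be loaded as, e.g., `load_dataset("google/WaxalNLP", "yor_tts")`.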
10,499
19,491
[ "task_categories:automatic-speech-recognition", "task_categories:text-to-speech", "language_creators:creator_1", "multilinguality:multilingual", "source_datasets:UGSpeechData", "source_datasets:DigitalUmuganda/AfriVoice", "source_datasets:original", "language:ach", "language:aka", "language:amh", ...
2026-01-19T12:35:52
null
null
69af21616259df956494b1ce
yatin-superintelligence/Edge-Agent-Reasoning-WebSearch-260K
yatin-superintelligence
{"pretty_name": "Edge Agent Reasoning WebSearch 260K", "license": "mit", "language": ["en"], "library_name": "datasets", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation", "question-answering", "any-to-any", "robotics"], "tags": ["text", "3d", "image", "synthetic", "agentic", "reasoning", "RAG", "system-2", "chain-of-thought", "web-search", "document", "edge-ai", "tool-use", "software", "engineering", "code", "legal", "medical", "healthcare", "biology", "chemistry", "finance", "science", "climate", "art", "design", "music", "audio", "video", "agent", "datasets", "parquet", "pandas", "polars", "dask"], "dataset_info": {"features": [{"name": "batch_index_id", "dtype": "int64"}, {"name": "role", "dtype": "string"}, {"name": "industry", "dtype": "string"}, {"name": "os", "dtype": "string"}, {"name": "user_prompt", "dtype": "string"}, {"name": "agent_reasoning", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 712900000, "num_examples": 263098}]}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "edge_reasoning_train_*.parquet"}]}]}
false
False
2026-03-13T21:42:21
28
28
false
7e8e455fff52e6d21dce4ce4a5a1bddd13031e1a
Edge Agent Reasoning WebSearch 260K Abstract The Edge-Agent-Reasoning-WebSearch-260K dataset is a massive, synthetically expert-engineered corpus of over 700 million tokens, designed to train small, local models (SLMs) and edge-deployed agents in advanced problem deconstruction and self-aware reasoning. Rather than training a model to execute instructions directly—which often leads to hallucinations when context is missing—this dataset trains a model to act as a… See the full description on the dataset page: https://huggingface.co/datasets/yatin-superintelligence/Edge-Agent-Reasoning-WebSearch-260K.
1,584
1,584
[ "task_categories:text-generation", "task_categories:question-answering", "task_categories:any-to-any", "task_categories:robotics", "language:en", "license:mit", "size_categories:100K<n<1M", "format:parquet", "modality:text", "modality:3d", "modality:image", "modality:document", "modality:aud...
2026-03-09T19:37:05
null
null
698a9b89700a694a5b97db6f
AudioVisual-Caption/ASID-1M
AudioVisual-Caption
{"license": "cc-by-2.0", "language": ["en"], "pretty_name": "ASID-1M", "tags": ["caption", "audiovisual", "instruction-tuning", "attribute-structured", "quality-verified", "video-understanding"], "task_categories": ["image-text-to-text"], "configs": [{"config_name": "all_attributes", "data_files": [{"split": "train", "path": ["annotations/0_30_s_youtube_v0_1/train/all_attributes_0_30_s_youtube_v0_1.jsonl", "annotations/30_60_s_youtube_v0_1/train/all_attributes_30_60_s_youtube_v0_1.jsonl", "annotations/1_2_m_youtube_v0_1/train/all_attributes_1_2_m_youtube_v0_1.jsonl", "annotations/finevideo/train/all_attributes_finevideo.jsonl"]}]}, {"config_name": "single_attribute", "data_files": [{"split": "train", "path": ["annotations/0_30_s_youtube_v0_1/train/single_attribute_0_30_s_youtube_v0_1.jsonl", "annotations/30_60_s_youtube_v0_1/train/single_attribute_30_60_s_youtube_v0_1.jsonl", "annotations/1_2_m_youtube_v0_1/train/single_attribute_1_2_m_youtube_v0_1.jsonl", "annotations/finevideo/train/single_attribute_finevideo.jsonl"]}]}]}
false
False
2026-03-11T12:26:08
70
26
false
209550390d32c41cb138a8503f82a663a4da357d
ASID-1M: Attribute-Structured and Quality-Verified Audiovisual Instructions Introduction We introduce ASID-1M, a large-scale audiovisual instruction dataset built to support universal video understanding with fine-grained, controllable supervision. Most existing video-instruction data represents complex audiovisual content as a single, monolithic caption. This often leads to incomplete coverage (missing audio… See the full description on the dataset page: https://huggingface.co/datasets/AudioVisual-Caption/ASID-1M.
2,009
2,070
[ "task_categories:image-text-to-text", "language:en", "license:cc-by-2.0", "size_categories:100K<n<1M", "format:json", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "arxiv:2602.13013", "region:us", "caption", "audiovisual", "instruction-tu...
2026-02-10T02:44:25
null
null
69a7282144067eabb6017453
ronantakizawa/github-codereview
ronantakizawa
{"license": "other", "task_categories": ["text-generation"], "language": ["en", "code"], "tags": ["code-review", "code-generation", "software-engineering", "pull-requests", "github"], "size_categories": ["100K<n<1M"]}
false
False
2026-03-10T00:59:34
35
26
false
c3e3c6e7e9f61e3e7a5b52894bcd440d586ae6ca
Code Review Dataset A large-scale dataset of the best human-written code reviews from top GitHub repositories. Each row captures a moment where a human code reviewer left an inline comment on a pull request, and the author subsequently modified the code in response. The dataset also includes negative examples — code from the same PRs that passed review without comments — to help models learn when code is acceptable. This provides a natural signal for training models to: Generate… See the full description on the dataset page: https://huggingface.co/datasets/ronantakizawa/github-codereview.
360
360
[ "task_categories:text-generation", "language:en", "language:code", "license:other", "size_categories:100K<n<1M", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "region:us", "code-review", "code-generatio...
2026-03-03T18:27:45
null
null
67e4291146baf23164358d53
nvidia/Nemotron-ClimbMix
nvidia
{"language": ["en"], "license": "cc-by-nc-4.0", "task_categories": ["text-generation"], "configs": [{"config_name": "default", "data_files": "*.jsonl"}]}
false
False
2025-10-21T15:05:35
83
18
false
5eaa64b9c0c85b7f56af01d7dffdb0795816b12b
ClimbMix Dataset 🚀 Creating the highest-quality pre-training datasets for LLMs 🌟 Figure 1: Continuously training a 1B model yields a 2.0% improvement over Llama-3.2-1B, demonstrating a more efficient scaling trend compared to prior models. Figure 2: Pre-training a 1B model from scratch on ClimbMix shows better scaling effects than training on other datasets.… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/Nemotron-ClimbMix.
10,149
38,689
[ "task_categories:text-generation", "language:en", "license:cc-by-nc-4.0", "size_categories:100M<n<1B", "format:json", "modality:tabular", "library:datasets", "library:dask", "library:mlcroissant", "arxiv:2504.13161", "region:us" ]
2025-03-26T16:19:29
null
null
6996711477c275fd9adb7137
nvidia/Nemotron-Terminal-Corpus
nvidia
{"license": "cc-by-4.0", "task_categories": ["question-answering"], "language": ["en"], "tags": ["code"], "size_categories": ["100K<n<1M"], "configs": [{"config_name": "dataset_adapters", "data_files": [{"split": "train", "path": "dataset_adapters/*.parquet"}]}, {"config_name": "skill_based_easy", "data_files": [{"split": "train", "path": "synthetic_tasks/skill_based/easy/*/data_filtered.parquet"}]}, {"config_name": "skill_based_medium", "data_files": [{"split": "train", "path": "synthetic_tasks/skill_based/medium/*/data_filtered.parquet"}]}, {"config_name": "skill_based_mixed", "data_files": [{"split": "train", "path": "synthetic_tasks/skill_based/mixed/*/data_filtered.parquet"}]}]}
false
False
2026-02-27T22:37:57
95
17
false
a1667c4ffdadea02a89bffe4f1bb7ca2ff19f8d9
Terminal-Corpus: Large-Scale SFT Dataset for Terminal Agents Terminal-Corpus is a large-scale Supervised Fine-Tuning (SFT) dataset designed to scale the terminal interaction capabilities of Large Language Models (LLMs). Developed by NVIDIA, this dataset was built using the Terminal-Task-Gen pipeline, which combines dataset adaptation with synthetic task generation across diverse domains. 🚀 Key Results & Performance The high-quality trajectories in Terminal-Corpus enable… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/Nemotron-Terminal-Corpus.
2,544
2,544
[ "task_categories:question-answering", "language:en", "license:cc-by-4.0", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "arxiv:2602.21193", "region:us", "code" ]
2026-02-19T02:10:28
null
null
69981ebb8794c09b40ce6b1e
Oatmealliu/UrbanVerse-100K
Oatmealliu
{"license": "odc-by", "language": ["en"], "pretty_name": "UrbanVerse-100K", "size_categories": ["100K<n<1M"], "task_categories": ["robotics", "text-to-3d", "image-to-3d", "reinforcement-learning", "image-to-text", "text-to-image"], "tags": ["3d", "Robotics", "PhysicalAI", "EmbodiedAI", "Objects", "3DAssets", "UrbanSimulation", "IsaacSim", "IsaacLab"], "extra_gated_fields": {"Full Name": "text", "Email Address": "text", "Country": "country", "Institution": "text", "Sector of Institution": {"type": "select", "options": ["Academic/Education", "Corporation", "Startup", "Government", "Non-profit Organization", "Individual", "Other"]}, "Purpose": {"type": "select", "options": ["Embodied AI", "Physical AI", "3D Generation", "Reinforcement Learning", "Imitation Learning", "Computer Vision", "Autonomous Driving", "Generative Models", "Multimodal Large Language Models", "Visual Question Answering"]}, "I accept the conditions and licenses of the files contained in this dataset": "checkbox"}}
false
manual
2026-03-11T10:40:11
16
16
false
5625b8038308e5c25320da1d1ddc952f8a291686
UrbanVerse-100K Dataset UrbanVerse-100K is a large-scale, physics-aware 3D asset and material database curated for urban simulation, physical and embodied AI research. It contains over 102K metric-scale urban object assets (GLB/USD), along with 646 4K sky maps (HDR) and 403 4K ground (road/sidewalk/terrain) materials (MDL), each annotated with rich semantic and physical attributes. The dataset is IsaacSim-ready, enabling scalable construction of realistic urban… See the full description on the dataset page: https://huggingface.co/datasets/Oatmealliu/UrbanVerse-100K.
10,708
10,708
[ "task_categories:robotics", "task_categories:text-to-3d", "task_categories:image-to-3d", "task_categories:reinforcement-learning", "task_categories:image-to-text", "task_categories:text-to-image", "language:en", "license:odc-by", "size_categories:100K<n<1M", "modality:3d", "arxiv:2510.15018", ...
2026-02-20T08:43:39
null
null
69a70420de30b37a2f37ccca
karpathy/climbmix-400b-shuffle
karpathy
{"license": "mit"}
false
False
2026-03-03T17:02:01
18
15
false
915333b4f8b8684f39aeaafea600fea6f43fb703
null
27,840
27,840
[ "license:mit", "size_categories:100M<n<1B", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "region:us" ]
2026-03-03T15:54:08
null
null
69af2c4fe58f63b685b08d5c
yatin-superintelligence/Creative-Professionals-Agentic-Tasks-1M
yatin-superintelligence
{"pretty_name": "Creative Professionals Agentic Tasks (1M)", "language": ["en"], "license": "mit", "library_name": "datasets", "size_categories": ["1M<n<10M"], "task_categories": ["text-generation", "question-answering", "any-to-any"], "tags": ["text", "audio", "video", "3d", "image", "art", "music", "code", "agent", "agentic-tasks", "frontend-development", "ui-ux-design", "game-ui", "3d-animation", "cgi", "vfx", "video-editing", "nonlinear-editing", "music-production", "audio-engineering", "sound-design", "brand-design", "photo-editing", "tool-use", "synthetic", "datasets", "parquet", "pandas", "polars", "dask"], "dataset_info": {"features": [{"name": "batch_id", "dtype": "int64"}, {"name": "index_id", "dtype": "int64"}, {"name": "professional", "dtype": "string"}, {"name": "group", "dtype": "string"}, {"name": "user_prompt", "dtype": "string"}], "splits": [{"name": "train", "num_examples": 1070930}]}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "creative_pro_tasks_train_*.parquet"}]}]}
false
False
2026-03-13T14:45:14
15
15
false
620e36077ad9325ef19ae8caa4389175272b1c41
Creative Professionals Agentic Tasks (1M) Abstract A massive-scale, high-fidelity synthetic task dataset comprising 1,070,917 agentic command operations across 36 creative, technical, and engineering software environments. This dataset is engineered exclusively to stress-test, evaluate, and fine-tune multimodal AI agents designed for Agent Environment operation, complex software interaction, and multi-step reasoning within deep software infrastructures.… See the full description on the dataset page: https://huggingface.co/datasets/yatin-superintelligence/Creative-Professionals-Agentic-Tasks-1M.
959
959
[ "task_categories:text-generation", "task_categories:question-answering", "task_categories:any-to-any", "language:en", "license:mit", "size_categories:1M<n<10M", "format:parquet", "modality:tabular", "modality:text", "modality:audio", "modality:video", "modality:3d", "modality:image", "libr...
2026-03-09T20:23:43
null
null
625552d2b339bb03abe3432d
openai/gsm8k
openai
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": [], "paperswithcode_id": "gsm8k", "pretty_name": "Grade School Math 8K", "tags": ["math-word-problems"], "dataset_info": [{"config_name": "main", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3963202, "num_examples": 7473}, {"name": "test", "num_bytes": 713732, "num_examples": 1319}], "download_size": 2725633, "dataset_size": 4676934}, {"config_name": "socratic", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5198108, "num_examples": 7473}, {"name": "test", "num_bytes": 936859, "num_examples": 1319}], "download_size": 3164254, "dataset_size": 6134967}], "configs": [{"config_name": "main", "data_files": [{"split": "train", "path": "main/train-*"}, {"split": "test", "path": "main/test-*"}]}, {"config_name": "socratic", "data_files": [{"split": "train", "path": "socratic/train-*"}, {"split": "test", "path": "socratic/test-*"}]}]}
false
False
2025-12-20T18:53:44
1,197
14
false
cc7b047b6e5bb11b4f1af84efc572db110a51b3c
Dataset Card for GSM8K Dataset Summary GSM8K (Grade School Math 8K) is a dataset of 8.5K high-quality, linguistically diverse grade-school math word problems. The dataset was created to support the task of question answering on basic mathematical problems that require multi-step reasoning. These problems take between 2 and 8 steps to solve. Solutions primarily involve performing a sequence of elementary calculations using basic arithmetic operations (+ − × ÷) to reach the… See the full description on the dataset page: https://huggingface.co/datasets/openai/gsm8k.
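As the summary notes, each GSM8K answer walks through the intermediate calculations; by convention the answer string ends with the final value after a `####` marker. A minimal sketch of extracting that final answer, assuming the standard `#### <value>` convention (the sample solution below is a paraphrase for illustration):

```python
# Illustrative: extract the final numeric answer from a GSM8K-style
# solution string. GSM8K answers conventionally end with "#### <value>";
# the sample text below is a paraphrased example, not a verbatim record.
import re

def final_answer(answer: str) -> str:
    """Return the value following the '####' marker, commas stripped."""
    match = re.search(r"####\s*(.+?)\s*$", answer)
    if match is None:
        raise ValueError("no '####' marker found")
    return match.group(1).replace(",", "")

sample = (
    "48 / 2 = 24 clips were sold in May.\n"
    "48 + 24 = 72 clips were sold in total.\n"
    "#### 72"
)
print(final_answer(sample))  # 72
```

The same helper is what evaluation harnesses typically use to compare a model's predicted final answer against the reference.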
607,542
9,692,671
[ "benchmark:official", "task_categories:text-generation", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:mit", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:dat...
2022-04-12T10:22:10
gsm8k
null
6969d8ba29be2bd1483adfb7
nvidia/Nemotron-Pretraining-Specialized-v1.1
nvidia
{"license": "cc-by-4.0", "task_categories": ["text-generation"], "track_downloads": true, "configs": [{"config_name": "Nemotron-Pretraining-Formal-Logic", "data_files": [{"split": "train", "path": "Nemotron-Pretraining-Formal-Logic/*.parquet"}]}, {"config_name": "Nemotron-Pretraining-Economics", "data_files": [{"split": "train", "path": "Nemotron-Pretraining-Economics/*.parquet"}]}, {"config_name": "Nemotron-Pretraining-Multiple-Choice", "data_files": [{"split": "train", "path": "Nemotron-Pretraining-Multiple-Choice/*.parquet"}]}, {"config_name": "Nemotron-Pretraining-Unconditional-Algorithmic", "data_files": [{"split": "train", "path": "Nemotron-Pretraining-Unconditional-Algorithmic/*.parquet"}]}, {"config_name": "Nemotron-Pretraining-Code-Concepts", "data_files": [{"split": "train", "path": "Nemotron-Pretraining-Code-Concepts/*.parquet"}]}]}
false
False
2026-03-11T14:43:59
14
14
false
13fa979be2e7f7e62913eee0ec5e97c8fd6e24af
Nemotron-Pretraining-Specialized-v1.1 Dataset Description: The Nemotron-Pretraining-Specialized-v1.1 dataset is part of the Nemotron Pretraining Data collection of pretraining datasets. Designed for the NVIDIA Nemotron 3 family of LLMs, this dataset contains a collection of synthetic datasets aimed at improving LLM capabilities in code concepts and algorithms, formal logic, economics, and multiple-choice questions. The code concepts dataset is an instance of a general… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/Nemotron-Pretraining-Specialized-v1.1.
703
703
[ "task_categories:text-generation", "license:cc-by-4.0", "size_categories:10M<n<100M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "region:us" ]
2026-01-16T06:20:42
null
null
656523d6bfb751371817c448
Idavidrein/gpqa
Idavidrein
{"license": "cc-by-4.0", "viewer": true, "extra_gated_prompt": "You agree to NOT reveal examples from this dataset in plain text or images online, to reduce the risk of leakage into foundation model training corpora.", "extra_gated_fields": {"I accept these terms": "checkbox"}, "configs": [{"config_name": "gpqa_extended", "data_files": "gpqa_extended.csv"}, {"config_name": "gpqa_main", "data_files": "gpqa_main.csv"}, {"config_name": "gpqa_diamond", "data_files": "gpqa_diamond.csv"}, {"config_name": "gpqa_experts", "data_files": "gpqa_experts.csv"}], "task_categories": ["question-answering", "text-generation"], "language": ["en"], "tags": ["open-domain-qa", "open-book-qa", "multiple-choice-qa"], "pretty_name": "GPQA", "size_categories": ["n<1K"]}
false
auto
2026-03-05T23:06:58
386
13
false
633f5ee89ab8ad4522a9f850766b73f62147ffdd
Dataset Card for GPQA GPQA is a multiple-choice Q&A dataset of very hard questions written and validated by experts in biology, physics, and chemistry. When attempting questions outside their own domain (e.g., a physicist answering a chemistry question), these experts achieve only 34% accuracy, despite spending >30 minutes with full access to Google. We request that you do not reveal examples from this dataset in plain text or images online, to reduce the risk of leakage into foundation model… See the full description on the dataset page: https://huggingface.co/datasets/Idavidrein/gpqa.
104,901
1,442,997
[ "benchmark:official", "benchmark:eval-yaml", "task_categories:question-answering", "task_categories:text-generation", "language:en", "license:cc-by-4.0", "size_categories:1K<n<10K", "format:csv", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:polars", "...
2023-11-27T23:18:46
null
null
6655eb19d17e141dcb546ed5
HuggingFaceFW/fineweb-edu
HuggingFaceFW
{"license": "odc-by", "task_categories": ["text-generation"], "language": ["en"], "pretty_name": "FineWeb-Edu", "size_categories": ["n>1T"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/*/*"}], "features": [{"name": "text", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "dump", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "file_path", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "language_score", "dtype": "float64"}, {"name": "token_count", "dtype": "int64"}, {"name": "score", "dtype": "float64"}, {"name": "int_score", "dtype": "int64"}]}, {"config_name": "sample-10BT", "data_files": [{"split": "train", "path": "sample/10BT/*"}]}, {"config_name": "sample-100BT", "data_files": [{"split": "train", "path": "sample/100BT/*"}]}, {"config_name": "sample-350BT", "data_files": [{"split": "train", "path": "sample/350BT/*"}]}, {"config_name": "CC-MAIN-2025-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-05/*"}]}, {"config_name": "CC-MAIN-2025-08", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-08/*"}]}, {"config_name": "CC-MAIN-2025-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-13/*"}]}, {"config_name": "CC-MAIN-2025-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-18/*"}]}, {"config_name": "CC-MAIN-2025-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-21/*"}]}, {"config_name": "CC-MAIN-2025-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-26/*"}]}, {"config_name": "CC-MAIN-2024-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-51/*"}]}, {"config_name": "CC-MAIN-2024-46", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-46/*"}]}, {"config_name": "CC-MAIN-2024-42", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-42/*"}]}, {"config_name": "CC-MAIN-2024-38", "data_files": [{"split": "train", 
"path": "data/CC-MAIN-2024-38/*"}]}, {"config_name": "CC-MAIN-2024-33", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-33/*"}]}, {"config_name": "CC-MAIN-2024-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-30/*"}]}, {"config_name": "CC-MAIN-2024-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-26/*"}]}, {"config_name": "CC-MAIN-2024-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-22/*"}]}, {"config_name": "CC-MAIN-2024-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-18/*"}]}, {"config_name": "CC-MAIN-2024-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-10/*"}]}, {"config_name": "CC-MAIN-2023-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-50/*"}]}, {"config_name": "CC-MAIN-2023-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-40/*"}]}, {"config_name": "CC-MAIN-2023-23", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-23/*"}]}, {"config_name": "CC-MAIN-2023-14", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-14/*"}]}, {"config_name": "CC-MAIN-2023-06", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-06/*"}]}, {"config_name": "CC-MAIN-2022-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-49/*"}]}, {"config_name": "CC-MAIN-2022-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-40/*"}]}, {"config_name": "CC-MAIN-2022-33", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-33/*"}]}, {"config_name": "CC-MAIN-2022-27", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-27/*"}]}, {"config_name": "CC-MAIN-2022-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-21/*"}]}, {"config_name": "CC-MAIN-2022-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-05/*"}]}, {"config_name": "CC-MAIN-2021-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-49/*"}]}, {"config_name": "CC-MAIN-2021-43", "data_files": 
[{"split": "train", "path": "data/CC-MAIN-2021-43/*"}]}, {"config_name": "CC-MAIN-2021-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-39/*"}]}, {"config_name": "CC-MAIN-2021-31", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-31/*"}]}, {"config_name": "CC-MAIN-2021-25", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-25/*"}]}, {"config_name": "CC-MAIN-2021-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-21/*"}]}, {"config_name": "CC-MAIN-2021-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-17/*"}]}, {"config_name": "CC-MAIN-2021-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-10/*"}]}, {"config_name": "CC-MAIN-2021-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-04/*"}]}, {"config_name": "CC-MAIN-2020-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-50/*"}]}, {"config_name": "CC-MAIN-2020-45", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-45/*"}]}, {"config_name": "CC-MAIN-2020-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-40/*"}]}, {"config_name": "CC-MAIN-2020-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-34/*"}]}, {"config_name": "CC-MAIN-2020-29", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-29/*"}]}, {"config_name": "CC-MAIN-2020-24", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-24/*"}]}, {"config_name": "CC-MAIN-2020-16", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-16/*"}]}, {"config_name": "CC-MAIN-2020-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-10/*"}]}, {"config_name": "CC-MAIN-2020-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-05/*"}]}, {"config_name": "CC-MAIN-2019-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-51/*"}]}, {"config_name": "CC-MAIN-2019-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-47/*"}]}, {"config_name": "CC-MAIN-2019-43", 
"data_files": [{"split": "train", "path": "data/CC-MAIN-2019-43/*"}]}, {"config_name": "CC-MAIN-2019-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-39/*"}]}, {"config_name": "CC-MAIN-2019-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-35/*"}]}, {"config_name": "CC-MAIN-2019-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-30/*"}]}, {"config_name": "CC-MAIN-2019-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-26/*"}]}, {"config_name": "CC-MAIN-2019-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-22/*"}]}, {"config_name": "CC-MAIN-2019-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-18/*"}]}, {"config_name": "CC-MAIN-2019-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-13/*"}]}, {"config_name": "CC-MAIN-2019-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-09/*"}]}, {"config_name": "CC-MAIN-2019-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-04/*"}]}, {"config_name": "CC-MAIN-2018-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-51/*"}]}, {"config_name": "CC-MAIN-2018-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-47/*"}]}, {"config_name": "CC-MAIN-2018-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-43/*"}]}, {"config_name": "CC-MAIN-2018-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-39/*"}]}, {"config_name": "CC-MAIN-2018-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-34/*"}]}, {"config_name": "CC-MAIN-2018-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-30/*"}]}, {"config_name": "CC-MAIN-2018-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-26/*"}]}, {"config_name": "CC-MAIN-2018-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-22/*"}]}, {"config_name": "CC-MAIN-2018-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-17/*"}]}, {"config_name": 
"CC-MAIN-2018-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-13/*"}]}, {"config_name": "CC-MAIN-2018-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-09/*"}]}, {"config_name": "CC-MAIN-2018-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-05/*"}]}, {"config_name": "CC-MAIN-2017-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-51/*"}]}, {"config_name": "CC-MAIN-2017-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-47/*"}]}, {"config_name": "CC-MAIN-2017-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-43/*"}]}, {"config_name": "CC-MAIN-2017-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-39/*"}]}, {"config_name": "CC-MAIN-2017-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-34/*"}]}, {"config_name": "CC-MAIN-2017-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-30/*"}]}, {"config_name": "CC-MAIN-2017-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-26/*"}]}, {"config_name": "CC-MAIN-2017-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-22/*"}]}, {"config_name": "CC-MAIN-2017-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-17/*"}]}, {"config_name": "CC-MAIN-2017-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-13/*"}]}, {"config_name": "CC-MAIN-2017-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-09/*"}]}, {"config_name": "CC-MAIN-2017-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-04/*"}]}, {"config_name": "CC-MAIN-2016-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-50/*"}]}, {"config_name": "CC-MAIN-2016-44", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-44/*"}]}, {"config_name": "CC-MAIN-2016-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-40/*"}]}, {"config_name": "CC-MAIN-2016-36", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-36/*"}]}, 
{"config_name": "CC-MAIN-2016-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-30/*"}]}, {"config_name": "CC-MAIN-2016-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-26/*"}]}, {"config_name": "CC-MAIN-2016-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-22/*"}]}, {"config_name": "CC-MAIN-2016-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-18/*"}]}, {"config_name": "CC-MAIN-2016-07", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-07/*"}]}, {"config_name": "CC-MAIN-2015-48", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-48/*"}]}, {"config_name": "CC-MAIN-2015-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-40/*"}]}, {"config_name": "CC-MAIN-2015-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-35/*"}]}, {"config_name": "CC-MAIN-2015-32", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-32/*"}]}, {"config_name": "CC-MAIN-2015-27", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-27/*"}]}, {"config_name": "CC-MAIN-2015-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-22/*"}]}, {"config_name": "CC-MAIN-2015-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-18/*"}]}, {"config_name": "CC-MAIN-2015-14", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-14/*"}]}, {"config_name": "CC-MAIN-2015-11", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-11/*"}]}, {"config_name": "CC-MAIN-2015-06", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-06/*"}]}, {"config_name": "CC-MAIN-2014-52", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-52/*"}]}, {"config_name": "CC-MAIN-2014-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-49/*"}]}, {"config_name": "CC-MAIN-2014-42", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-42/*"}]}, {"config_name": "CC-MAIN-2014-41", "data_files": [{"split": "train", "path": 
"data/CC-MAIN-2014-41/*"}]}, {"config_name": "CC-MAIN-2014-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-35/*"}]}, {"config_name": "CC-MAIN-2014-23", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-23/*"}]}, {"config_name": "CC-MAIN-2014-15", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-15/*"}]}, {"config_name": "CC-MAIN-2014-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-10/*"}]}, {"config_name": "CC-MAIN-2013-48", "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-48/*"}]}, {"config_name": "CC-MAIN-2013-20", "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-20/*"}]}]}
false
False
2025-07-11T20:16:53
990
13
false
87f09149ef4734204d70ed1d046ddc9ca3f2b8f9
📚 FineWeb-Edu 1.3 trillion tokens of the finest educational data the 🌐 web has to offer Paper: https://arxiv.org/abs/2406.17557 What is it? The 📚 FineWeb-Edu dataset comes in two variants: 1.3T tokens (this version) and 5.4T tokens (FineWeb-Edu-score-2) of educational web pages filtered from the 🍷 FineWeb dataset. To enhance FineWeb's quality, we developed an educational quality classifier using annotations generated by Llama3-70B-Instruct. We then… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu.
221,797
6,122,388
[ "task_categories:text-generation", "language:en", "license:odc-by", "size_categories:1B<n<10B", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "arxiv:2406.17557", "arxiv:2404.14219", "arxiv:2401.10020", ...
2024-05-28T14:32:57
null
null
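Each record's `configs` list in the card data above maps a `config_name` (here, a CommonCrawl snapshot) to the `data_files` globs for that subset. A minimal sketch of resolving that mapping, using a two-entry excerpt of the structure (the full list in the card is far longer):

```python
# Resolve the data-file globs registered for one config name, mirroring the
# {"config_name": ..., "data_files": [{"split": ..., "path": ...}]} shape
# used in the card data above.
configs = [
    {"config_name": "CC-MAIN-2013-48",
     "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-48/*"}]},
    {"config_name": "CC-MAIN-2013-20",
     "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-20/*"}]},
]

def paths_for(configs, config_name, split="train"):
    """Return the file globs for one config and split, or raise KeyError."""
    for cfg in configs:
        if cfg["config_name"] == config_name:
            return [f["path"] for f in cfg["data_files"] if f["split"] == split]
    raise KeyError(config_name)

print(paths_for(configs, "CC-MAIN-2013-20"))  # ['data/CC-MAIN-2013-20/*']
```

With the `datasets` library the same names are passed directly, e.g. `load_dataset("HuggingFaceFW/fineweb-edu", name="CC-MAIN-2013-20", streaming=True)`; the actual download is left out of this sketch.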
699e0810251cac84be7d52ba
peteromallet/dataclaw-peteromallet
peteromallet
{"license": "mit", "task_categories": ["text-generation"], "language": ["en"], "tags": ["dataclaw", "claude-code", "codex-cli", "conversations", "coding-assistant", "tool-use", "agentic-coding", "claude-haiku-4-5-20251001", "claude-opus-4-5-20251101", "claude-opus-4-6", "claude-sonnet-4-5-20250929", "claude-sonnet-4-6"], "pretty_name": "Coding Agent Conversations", "configs": [{"config_name": "default", "data_files": "conversations.jsonl"}]}
false
False
2026-02-25T16:14:13
291
13
false
b925056b0539a8bd28a06417dca464aac6ba7bdb
Coding Agent Conversation Logs This is a performance art project. Anthropic built their models on the world's freely shared information, then introduced increasingly dystopian data policies to stop anyone else from doing the same — pulling up the ladder behind them. DataClaw lets you throw the ladder back down. The dataset it produces is yours to share. Exported with DataClaw. Tag: dataclaw — Browse all DataClaw datasets Stats Metric Value Sessions 549… See the full description on the dataset page: https://huggingface.co/datasets/peteromallet/dataclaw-peteromallet.
9,888
9,888
[ "task_categories:text-generation", "language:en", "license:mit", "size_categories:n<1K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "region:us", "dataclaw", "claude-code", "codex-cli", "conversations", "coding-assistan...
2026-02-24T20:20:32
null
null
69af2a96484ef491320cc3c1
yatin-superintelligence/Audio-Video-Engineering-Agentic-Tasks-1M
yatin-superintelligence
{"pretty_name": "Audio/Video Engineering Agentic Tasks (1M)", "language": ["en"], "license": "mit", "library_name": "datasets", "size_categories": ["1M<n<10M"], "task_categories": ["text-generation", "question-answering", "any-to-any"], "tags": ["text", "audio", "video", "music", "art", "media-production", "digital-audio-workstation", "nonlinear-editing", "agent", "agentic-tasks", "music-composition", "music-production", "sound-design", "video-editing", "tool-use", "troubleshooting", "synthetic", "datasets", "parquet", "pandas", "polars", "dask"], "dataset_info": {"features": [{"name": "batch_id", "dtype": "int64"}, {"name": "index", "dtype": "int64"}, {"name": "professional", "dtype": "string"}, {"name": "group", "dtype": "string"}, {"name": "user_prompt", "dtype": "string"}], "splits": [{"name": "train", "num_examples": 1031068}]}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "av_agentic_tasks_train_*.parquet"}]}]}
false
False
2026-03-13T14:45:10
13
13
false
d2157d1d602b06aee35618a4ef841a489e85b3d1
Audio/Video Engineering Agentic Tasks (1M) Abstract A highly specialized dataset comprising 1,029,459 in-context troubleshooting prompts and execution commands built for the deepest levels of media production. Unlike standard datasets that simulate clean, theoretical instructions, this matrix captures the chaotic, highly-detailed, and conversational reality of professional audio engineers, composers, and video editors mid-session. It is engineered to train multimodal AI… See the full description on the dataset page: https://huggingface.co/datasets/yatin-superintelligence/Audio-Video-Engineering-Agentic-Tasks-1M.
384
384
[ "task_categories:text-generation", "task_categories:question-answering", "task_categories:any-to-any", "language:en", "license:mit", "size_categories:1M<n<10M", "modality:tabular", "modality:text", "modality:audio", "modality:video", "library:datasets", "library:pandas", "library:polars", ...
2026-03-09T20:16:22
null
null
69b03aa205292d5180b6fc1e
maikezu/dowis
maikezu
{"license": "cc-by-4.0", "language": ["de", "en", "es", "cs", "fr", "hu", "it", "nl", "pt", "ru", "sq", "sv"], "tags": ["speech prompts", "text prompts", "instruction following", "benchmark"], "size_categories": ["1K<n<10K"], "dataset_info": {"features": [{"name": "text_prompt", "dtype": "string"}, {"name": "audio_prompt_female_1", "dtype": "audio"}, {"name": "audio_prompt_female_2", "dtype": "audio"}, {"name": "audio_prompt_male_1", "dtype": "audio"}, {"name": "audio_prompt_male_2", "dtype": "audio"}, {"name": "language", "dtype": "string"}, {"name": "task", "dtype": "string"}, {"name": "prompt_type", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 2704378267.6, "num_examples": 1320}], "download_size": 1772318018, "dataset_size": 2704378267.6}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]}
false
False
2026-03-12T09:17:21
13
13
false
40cebb56cbc5145a9c52555939dc0859188ea42b
Do What I Say (DOWIS): A Spoken Prompt Dataset for Instruction-Following NEW DOWIS now also contains spoken and written prompts in Albanian (sq), and for the tasks LIPREAD and SLU! TL;DR — DOWIS is a multilingual dataset of human-recorded spoken and written instruction prompts, designed to enable realistic evaluation of Speech Large Language Models across 11 tasks and 12 languages. Dataset Summary Most Speech LLM benchmarks use text-based prompts, which does… See the full description on the dataset page: https://huggingface.co/datasets/maikezu/dowis.
119
119
[ "language:de", "language:en", "language:es", "language:cs", "language:fr", "language:hu", "language:it", "language:nl", "language:pt", "language:ru", "language:sq", "language:sv", "license:cc-by-4.0", "size_categories:1K<n<10K", "format:parquet", "modality:audio", "modality:text", ...
2026-03-10T15:37:06
null
null
69a6dc61541e5c55e792dcb6
ai-coustics/dawn_chorus_en
ai-coustics
{"license": "cc-by-nc-4.0", "task_categories": ["audio-to-audio"], "language": ["en"], "tags": ["speech", "foreground-background-speech", "speech-to-text"], "pretty_name": "dawn_chorus_en", "size_categories": ["n<1K"], "configs": [{"config_name": "default", "data_files": [{"split": "eval", "path": "eval.parquet"}]}], "dataset_info": {"features": [{"name": "mix", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "speech", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcript", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "conversation_type", "dtype": "string"}, {"name": "speech_source", "dtype": "string"}, {"name": "index", "dtype": "int64"}]}}
false
False
2026-03-03T13:06:55
12
12
false
3c21347c1e61ea904a493f9a6b3856161432da80
dawn_chorus_en An open-source evaluation dataset for accurate foreground speaker transcription. The dataset targets mixture conditions where foreground speech remains generally transcribable by speech-to-text systems, while background speech is distinctly perceived as background. It provides around 90 minutes of foreground–background speech mixtures composed of recorded and synthesized foreground speech, along with ground truth foreground speech and corresponding transcripts.… See the full description on the dataset page: https://huggingface.co/datasets/ai-coustics/dawn_chorus_en.
710
710
[ "task_categories:audio-to-audio", "language:en", "license:cc-by-nc-4.0", "size_categories:n<1K", "format:parquet", "modality:audio", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "region:us", "speech", "foreground-background-speech", "spe...
2026-03-03T13:04:33
null
null
69ab632c9d4152acb2e45fb7
Mustafaege/qwen3.5-toolcalling-v2
Mustafaege
{"language": ["en"], "license": "apache-2.0", "pretty_name": "Qwen3.5 Tool Calling Dataset v2", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "tags": ["tool-use", "tool-calling", "function-calling", "reasoning", "agentic", "jupyter", "code-execution", "sft", "chat", "qwen3", "qwen3.5", "chain-of-thought", "multi-turn", "structured-output", "json", "fine-tuning", "open-source", "expanded-dataset"], "annotations_creators": ["machine-generated"], "language_creators": ["found"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
false
False
2026-03-07T13:06:45
17
12
false
8f0343a5613879fefda0eb002d10ff7150a2c588
Qwen3.5 Tool Calling Dataset v2 An expanded tool-calling SFT dataset combining smirki/Tool-Calling-Dataset-UIGEN-X and AmanPriyanshu/tool-reasoning-sft-jupyter-agent, unified into Qwen3 messages format. Adds Jupyter notebook agent data with code execution reasoning chains. Dataset Summary Property Value Total Samples ~60K+ Train Split ~55K Test Split ~6K Sources UIGEN-X + Jupyter Agent Format Qwen3 messages Language English License Apache 2.0… See the full description on the dataset page: https://huggingface.co/datasets/Mustafaege/qwen3.5-toolcalling-v2.
184
184
[ "task_categories:text-generation", "annotations_creators:machine-generated", "language_creators:found", "language:en", "license:apache-2.0", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "region:us...
2026-03-06T23:28:44
null
null
69b208be3fadc91fa277f593
TeichAI/Claude-Opus-Dataclaw-Unredacted
TeichAI
{"language": ["en"], "license": "mit", "task_categories": ["text-generation"]}
false
False
2026-03-15T09:26:27
12
12
false
e66d16caee4660873b7ea8913d004ca23bde1c02
Dataclaw Opus (4.5 & 4.6) Dataset Currently there are still some major issues in the dataset format (i.e., missing tool responses and tool-call IDs), nothing Gemini can't fix. I don't recommend using this set until the update is posted. This dataset was assembled by: collecting all Dataclaw datasets we could find, filtering for Opus-family conversations, normalizing them into a single training format, deduplicating overlapping uploads, and using Gemini 3 Flash to replace all the… See the full description on the dataset page: https://huggingface.co/datasets/TeichAI/Claude-Opus-Dataclaw-Unredacted.
94
94
[ "task_categories:text-generation", "language:en", "license:mit", "region:us" ]
2026-03-12T00:28:46
null
null
6928ac839f54f92be8b78d70
TeichAI/claude-4.5-opus-high-reasoning-250x
TeichAI
null
false
False
2025-11-28T03:02:41
326
11
false
742c86f88b66bf53cb5961a25e4360f5582f4a6e
This is a reasoning dataset created using Claude Opus 4.5 with reasoning depth set to high. Some of these questions are from reedmayhew and the rest were generated. The dataset is meant for creating distilled versions of Claude Opus 4.5 by fine-tuning existing open-source LLMs. Stats: Costs: $52.3 (USD); Total tokens (input + output): 2.13M
3,184
17,737
[ "size_categories:n<1K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2025-11-27T19:54:43
null
null
69a0ac7cc1f01f9b6b9031de
BytedTsinghua-SIA/CUDA-Agent-Ops-6K
BytedTsinghua-SIA
{"license": "cc-by-4.0", "pretty_name": "CUDA-Agent-Ops-6K", "size_categories": ["1K<n<10K"], "task_categories": ["text-generation"], "language": ["en"]}
false
False
2026-02-27T19:56:56
56
11
false
44a734c78c947bfcba5189cbfd13f57a6d29a698
CUDA-Agent-Ops-6K CUDA-Agent-Ops-6K is a curated training dataset for CUDA kernel generation and optimization. It is released as part of the CUDA-Agent project: Project Page: https://CUDA-Agent.github.io/ Github Repo: https://github.com/BytedTsinghua-SIA/CUDA-Agent Dataset Summary CUDA-Agent-Ops-6K contains 6,000 synthesized operator-level training tasks designed for large-scale agentic RL training. It is intended to provide diverse and executable CUDA-oriented training… See the full description on the dataset page: https://huggingface.co/datasets/BytedTsinghua-SIA/CUDA-Agent-Ops-6K.
521
521
[ "task_categories:text-generation", "language:en", "license:cc-by-4.0", "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "region:us" ]
2026-02-26T20:26:36
null
null
69b0e2310b2ac9d1b7534f7e
nvidia/Nemotron-RL-Super-Training-Blends
nvidia
{"license": "cc-by-4.0", "configs": [{"config_name": "default", "data_files": [{"split": "rlvr1", "path": "rlvr1.jsonl"}, {"split": "rlvr2", "path": "rlvr2.jsonl"}, {"split": "rlvr3", "path": "rlvr3.jsonl"}, {"split": "swe1", "path": "swe1.jsonl"}, {"split": "swe2", "path": "swe2.jsonl"}, {"split": "rlhf", "path": "rlhf.jsonl"}]}]}
false
False
2026-03-12T00:22:48
11
11
false
b90f74f1d0bafeec6d1f1321173f6775ba5bda2e
Dataset Description: Nemotron-3-Super-RL-Training-Blends contains the dataset blends used to train the Nemotron-3-Super-120B-A12B model. RL training for the Nemotron-3-Super-120B-A12B model is done in 6 stages: RLVR 1, RLVR 2, RLVR 3, SWE 1, SWE 2, and RLHF. The blends for each stage consist of data from various datasets, which we detail below. The percentages in parentheses indicate the mixing ratios of the dataset components. Note that the model was also trained on additional data… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/Nemotron-RL-Super-Training-Blends.
415
415
[ "license:cc-by-4.0", "region:us" ]
2026-03-11T03:32:01
null
null
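The Nemotron blends above are published as one JSON Lines file per training stage (`rlvr1.jsonl`, `swe1.jsonl`, `rlhf.jsonl`, …). JSON Lines is simply one JSON object per line, so a reader needs only the standard library. A sketch, with illustrative record fields (the real field names are not shown in this listing):

```python
import io
import json

def read_jsonl(fp):
    """Yield one parsed record per non-empty line of a JSON Lines stream."""
    for line in fp:
        line = line.strip()
        if line:
            yield json.loads(line)

# Stand-in for e.g. open("rlvr1.jsonl"); the "prompt"/"stage" keys are
# made up for illustration and are not taken from the dataset card.
sample = io.StringIO(
    '{"prompt": "p1", "stage": "rlvr1"}\n'
    '{"prompt": "p2", "stage": "rlvr1"}\n'
)
records = list(read_jsonl(sample))
print(len(records))  # 2
```

The same split names appear as `data_files` entries in the card's `configs` block, so the `datasets` library can load each stage as its own split.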
66212f29fb07c3e05ad0432e
HuggingFaceFW/fineweb
HuggingFaceFW
{"license": "odc-by", "task_categories": ["text-generation"], "language": ["en"], "pretty_name": "FineWeb", "size_categories": ["n>1T"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/*/*"}]}, {"config_name": "sample-10BT", "data_files": [{"split": "train", "path": "sample/10BT/*"}]}, {"config_name": "sample-100BT", "data_files": [{"split": "train", "path": "sample/100BT/*"}]}, {"config_name": "sample-350BT", "data_files": [{"split": "train", "path": "sample/350BT/*"}]}, {"config_name": "CC-MAIN-2025-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-05/*"}]}, {"config_name": "CC-MAIN-2025-08", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-08/*"}]}, {"config_name": "CC-MAIN-2025-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-13/*"}]}, {"config_name": "CC-MAIN-2025-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-18/*"}]}, {"config_name": "CC-MAIN-2025-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-21/*"}]}, {"config_name": "CC-MAIN-2025-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2025-26/*"}]}, {"config_name": "CC-MAIN-2024-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-51/*"}]}, {"config_name": "CC-MAIN-2024-46", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-46/*"}]}, {"config_name": "CC-MAIN-2024-42", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-42/*"}]}, {"config_name": "CC-MAIN-2024-38", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-38/*"}]}, {"config_name": "CC-MAIN-2024-33", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-33/*"}]}, {"config_name": "CC-MAIN-2024-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-30/*"}]}, {"config_name": "CC-MAIN-2024-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-26/*"}]}, {"config_name": "CC-MAIN-2024-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-22/*"}]}, 
{"config_name": "CC-MAIN-2024-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-18/*"}]}, {"config_name": "CC-MAIN-2024-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-10/*"}]}, {"config_name": "CC-MAIN-2023-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-50/*"}]}, {"config_name": "CC-MAIN-2023-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-40/*"}]}, {"config_name": "CC-MAIN-2023-23", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-23/*"}]}, {"config_name": "CC-MAIN-2023-14", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-14/*"}]}, {"config_name": "CC-MAIN-2023-06", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-06/*"}]}, {"config_name": "CC-MAIN-2022-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-49/*"}]}, {"config_name": "CC-MAIN-2022-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-40/*"}]}, {"config_name": "CC-MAIN-2022-33", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-33/*"}]}, {"config_name": "CC-MAIN-2022-27", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-27/*"}]}, {"config_name": "CC-MAIN-2022-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-21/*"}]}, {"config_name": "CC-MAIN-2022-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-05/*"}]}, {"config_name": "CC-MAIN-2021-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-49/*"}]}, {"config_name": "CC-MAIN-2021-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-43/*"}]}, {"config_name": "CC-MAIN-2021-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-39/*"}]}, {"config_name": "CC-MAIN-2021-31", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-31/*"}]}, {"config_name": "CC-MAIN-2021-25", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-25/*"}]}, {"config_name": "CC-MAIN-2021-21", "data_files": [{"split": "train", "path": 
"data/CC-MAIN-2021-21/*"}]}, {"config_name": "CC-MAIN-2021-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-17/*"}]}, {"config_name": "CC-MAIN-2021-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-10/*"}]}, {"config_name": "CC-MAIN-2021-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-04/*"}]}, {"config_name": "CC-MAIN-2020-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-50/*"}]}, {"config_name": "CC-MAIN-2020-45", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-45/*"}]}, {"config_name": "CC-MAIN-2020-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-40/*"}]}, {"config_name": "CC-MAIN-2020-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-34/*"}]}, {"config_name": "CC-MAIN-2020-29", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-29/*"}]}, {"config_name": "CC-MAIN-2020-24", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-24/*"}]}, {"config_name": "CC-MAIN-2020-16", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-16/*"}]}, {"config_name": "CC-MAIN-2020-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-10/*"}]}, {"config_name": "CC-MAIN-2020-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-05/*"}]}, {"config_name": "CC-MAIN-2019-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-51/*"}]}, {"config_name": "CC-MAIN-2019-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-47/*"}]}, {"config_name": "CC-MAIN-2019-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-43/*"}]}, {"config_name": "CC-MAIN-2019-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-39/*"}]}, {"config_name": "CC-MAIN-2019-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-35/*"}]}, {"config_name": "CC-MAIN-2019-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-30/*"}]}, {"config_name": "CC-MAIN-2019-26", "data_files": [{"split": 
"train", "path": "data/CC-MAIN-2019-26/*"}]}, {"config_name": "CC-MAIN-2019-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-22/*"}]}, {"config_name": "CC-MAIN-2019-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-18/*"}]}, {"config_name": "CC-MAIN-2019-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-13/*"}]}, {"config_name": "CC-MAIN-2019-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-09/*"}]}, {"config_name": "CC-MAIN-2019-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-04/*"}]}, {"config_name": "CC-MAIN-2018-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-51/*"}]}, {"config_name": "CC-MAIN-2018-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-47/*"}]}, {"config_name": "CC-MAIN-2018-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-43/*"}]}, {"config_name": "CC-MAIN-2018-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-39/*"}]}, {"config_name": "CC-MAIN-2018-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-34/*"}]}, {"config_name": "CC-MAIN-2018-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-30/*"}]}, {"config_name": "CC-MAIN-2018-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-26/*"}]}, {"config_name": "CC-MAIN-2018-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-22/*"}]}, {"config_name": "CC-MAIN-2018-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-17/*"}]}, {"config_name": "CC-MAIN-2018-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-13/*"}]}, {"config_name": "CC-MAIN-2018-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-09/*"}]}, {"config_name": "CC-MAIN-2018-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-05/*"}]}, {"config_name": "CC-MAIN-2017-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-51/*"}]}, {"config_name": "CC-MAIN-2017-47", 
"data_files": [{"split": "train", "path": "data/CC-MAIN-2017-47/*"}]}, {"config_name": "CC-MAIN-2017-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-43/*"}]}, {"config_name": "CC-MAIN-2017-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-39/*"}]}, {"config_name": "CC-MAIN-2017-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-34/*"}]}, {"config_name": "CC-MAIN-2017-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-30/*"}]}, {"config_name": "CC-MAIN-2017-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-26/*"}]}, {"config_name": "CC-MAIN-2017-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-22/*"}]}, {"config_name": "CC-MAIN-2017-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-17/*"}]}, {"config_name": "CC-MAIN-2017-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-13/*"}]}, {"config_name": "CC-MAIN-2017-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-09/*"}]}, {"config_name": "CC-MAIN-2017-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-04/*"}]}, {"config_name": "CC-MAIN-2016-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-50/*"}]}, {"config_name": "CC-MAIN-2016-44", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-44/*"}]}, {"config_name": "CC-MAIN-2016-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-40/*"}]}, {"config_name": "CC-MAIN-2016-36", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-36/*"}]}, {"config_name": "CC-MAIN-2016-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-30/*"}]}, {"config_name": "CC-MAIN-2016-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-26/*"}]}, {"config_name": "CC-MAIN-2016-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-22/*"}]}, {"config_name": "CC-MAIN-2016-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-18/*"}]}, {"config_name": 
"CC-MAIN-2016-07", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-07/*"}]}, {"config_name": "CC-MAIN-2015-48", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-48/*"}]}, {"config_name": "CC-MAIN-2015-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-40/*"}]}, {"config_name": "CC-MAIN-2015-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-35/*"}]}, {"config_name": "CC-MAIN-2015-32", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-32/*"}]}, {"config_name": "CC-MAIN-2015-27", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-27/*"}]}, {"config_name": "CC-MAIN-2015-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-22/*"}]}, {"config_name": "CC-MAIN-2015-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-18/*"}]}, {"config_name": "CC-MAIN-2015-14", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-14/*"}]}, {"config_name": "CC-MAIN-2015-11", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-11/*"}]}, {"config_name": "CC-MAIN-2015-06", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-06/*"}]}, {"config_name": "CC-MAIN-2014-52", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-52/*"}]}, {"config_name": "CC-MAIN-2014-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-49/*"}]}, {"config_name": "CC-MAIN-2014-42", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-42/*"}]}, {"config_name": "CC-MAIN-2014-41", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-41/*"}]}, {"config_name": "CC-MAIN-2014-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-35/*"}]}, {"config_name": "CC-MAIN-2014-23", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-23/*"}]}, {"config_name": "CC-MAIN-2014-15", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-15/*"}]}, {"config_name": "CC-MAIN-2014-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-10/*"}]}, 
{"config_name": "CC-MAIN-2013-48", "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-48/*"}]}, {"config_name": "CC-MAIN-2013-20", "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-20/*"}]}]}
false
False
2025-07-11T20:16:53
2,700
10
false
9bb295ddab0e05d785b879661af7260fed5140fc
🍷 FineWeb 15 trillion tokens of the finest data the 🌐 web has to offer What is it? The 🍷 FineWeb dataset consists of more than 18.5T tokens (originally 15T tokens) of cleaned and deduplicated English web data from CommonCrawl. The data processing pipeline is optimized for LLM performance and ran on the 🏭 datatrove library, our large-scale data processing library. 🍷 FineWeb was originally meant to be a fully open replication of 🦅 RefinedWeb, with a release… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceFW/fineweb.
172,797
6,449,416
[ "task_categories:text-generation", "language:en", "license:odc-by", "size_categories:10B<n<100B", "modality:tabular", "modality:text", "arxiv:2306.01116", "arxiv:2109.07445", "arxiv:2406.17557", "doi:10.57967/hf/2493", "region:us" ]
2024-04-18T14:33:13
null
null
67a404bc8c6d42c5ec097433
Anthropic/EconomicIndex
Anthropic
{"language": "en", "pretty_name": "EconomicIndex", "tags": ["AI", "LLM", "Economic Impacts", "Anthropic"], "viewer": true, "license": "mit", "configs": [{"config_name": "release_2026_01_15", "data_files": [{"split": "raw_claude_ai", "path": "release_2026_01_15/data/intermediate/aei_raw_claude_ai_2025-11-13_to_2025-11-20.csv"}, {"split": "raw_1p_api", "path": "release_2025_09_15/data/intermediate/aei_raw_1p_api_2025-11-13_to_2025-11-20.csv"}]}]}
false
False
2026-03-11T05:02:11
477
10
false
d1001170819fe03262c168fcf77ae99a5abf9576
The Anthropic Economic Index Overview The Anthropic Economic Index provides insights into how AI is being incorporated into real-world tasks across the modern economy. Data Releases This repository contains multiple data releases, each with its own documentation: Labor market impacts: Job exposure and task penetration data 2026-01-15 Release: Updated analysis with economic primitives and Sonnet 4.5 2025-09-15 Release: Updated analysis with geographic and… See the full description on the dataset page: https://huggingface.co/datasets/Anthropic/EconomicIndex.
12,501
60,536
[ "language:en", "license:mit", "arxiv:2503.04761", "region:us", "AI", "LLM", "Economic Impacts", "Anthropic" ]
2025-02-06T00:39:24
null
null
678bd1db320331c7e0499ec7
nomic-ai/nomic-embed-unsupervised-data
nomic-ai
{"language": ["en"], "dataset_info": {"features": [{"name": "query", "dtype": "string"}, {"name": "document", "dtype": "string"}, {"name": "dataset", "dtype": "string"}, {"name": "shard", "dtype": "int64"}], "splits": [{"name": "reddit_title_body", "num_bytes": 133556530576.56786, "num_examples": 66204599}, {"name": "amazon_reviews", "num_bytes": 79397795801.44087, "num_examples": 39357860}, {"name": "paq", "num_bytes": 108682741460.16927, "num_examples": 53874545}, {"name": "s2orc_citation_titles", "num_bytes": 15578276961.267248, "num_examples": 7722225}, {"name": "s2orc_title_abstract", "num_bytes": 72727941660.31642, "num_examples": 36051582}, {"name": "s2orc_abstract_citation", "num_bytes": 15412180087.166075, "num_examples": 7639890}, {"name": "s2orc_abstract_body", "num_bytes": 13214381649.546701, "num_examples": 6550431}, {"name": "wikianswers", "num_bytes": 20349823474.661026, "num_examples": 10087503}, {"name": "wikipedia", "num_bytes": 12503510832.888903, "num_examples": 6198049}, {"name": "gooaq", "num_bytes": 2584478254.5968294, "num_examples": 1281138}, {"name": "codesearch", "num_bytes": 1743019608.3259697, "num_examples": 864023}, {"name": "yahoo_title_answer", "num_bytes": 558247690.3202951, "num_examples": 276726}, {"name": "agnews", "num_bytes": 847859634.6904019, "num_examples": 420288}, {"name": "amazonqa", "num_bytes": 456192977.6962069, "num_examples": 226137}, {"name": "yahoo_qa", "num_bytes": 289440471.31127894, "num_examples": 143477}, {"name": "yahoo_title_question", "num_bytes": 430336857.75505495, "num_examples": 213320}, {"name": "ccnews", "num_bytes": 713469137.831569, "num_examples": 353670}, {"name": "npr", "num_bytes": 736476787.666073, "num_examples": 365075}, {"name": "eli5", "num_bytes": 215412525.82009435, "num_examples": 106781}, {"name": "cnn", "num_bytes": 592128749.4145954, "num_examples": 293521}, {"name": "stackexchange_duplicate_questions", "num_bytes": 147688736.90346697, "num_examples": 73210}, {"name": 
"stackexchange_title_body", "num_bytes": 162788452.73084643, "num_examples": 80695}, {"name": "stackexchange_body_body", "num_bytes": 132516397.19234861, "num_examples": 65689}, {"name": "sentence_compression", "num_bytes": 350216575.3502183, "num_examples": 173604}, {"name": "wikihow", "num_bytes": 193722192.5434098, "num_examples": 96029}, {"name": "altlex", "num_bytes": 223334581.13794592, "num_examples": 110708}, {"name": "quora", "num_bytes": 90547861.71168031, "num_examples": 44885}, {"name": "simplewiki", "num_bytes": 197127445.7587226, "num_examples": 97717}, {"name": "squad", "num_bytes": 50669280.21860921, "num_examples": 25117}], "download_size": 261162378852, "dataset_size": 482138856722.99994}, "configs": [{"config_name": "default", "data_files": [{"split": "reddit_title_body", "path": "data/reddit_title_body-*"}, {"split": "amazon_reviews", "path": "data/amazon_reviews-*"}, {"split": "paq", "path": "data/paq-*"}, {"split": "s2orc_citation_titles", "path": "data/s2orc_citation_titles-*"}, {"split": "s2orc_title_abstract", "path": "data/s2orc_title_abstract-*"}, {"split": "s2orc_abstract_citation", "path": "data/s2orc_abstract_citation-*"}, {"split": "s2orc_abstract_body", "path": "data/s2orc_abstract_body-*"}, {"split": "wikianswers", "path": "data/wikianswers-*"}, {"split": "wikipedia", "path": "data/wikipedia-*"}, {"split": "gooaq", "path": "data/gooaq-*"}, {"split": "codesearch", "path": "data/codesearch-*"}, {"split": "yahoo_title_answer", "path": "data/yahoo_title_answer-*"}, {"split": "agnews", "path": "data/agnews-*"}, {"split": "amazonqa", "path": "data/amazonqa-*"}, {"split": "yahoo_qa", "path": "data/yahoo_qa-*"}, {"split": "yahoo_title_question", "path": "data/yahoo_title_question-*"}, {"split": "ccnews", "path": "data/ccnews-*"}, {"split": "npr", "path": "data/npr-*"}, {"split": "eli5", "path": "data/eli5-*"}, {"split": "cnn", "path": "data/cnn-*"}, {"split": "stackexchange_duplicate_questions", "path": 
"data/stackexchange_duplicate_questions-*"}, {"split": "stackexchange_title_body", "path": "data/stackexchange_title_body-*"}, {"split": "stackexchange_body_body", "path": "data/stackexchange_body_body-*"}, {"split": "sentence_compression", "path": "data/sentence_compression-*"}, {"split": "wikihow", "path": "data/wikihow-*"}, {"split": "altlex", "path": "data/altlex-*"}, {"split": "quora", "path": "data/quora-*"}, {"split": "simplewiki", "path": "data/simplewiki-*"}, {"split": "squad", "path": "data/squad-*"}]}]}
false
False
2025-01-24T22:02:10
16
9
false
917bae6ed30ebc80fc8c81ba8e3e34558205d6bb
Weakly Supervised Contrastive Training data for Text Embedding models used in Nomic Embed models Training Click the Nomic Atlas map below to visualize a 5M sample of our contrastive pretraining data! We train our embedder using a multi-stage training pipeline. Starting from a long-context BERT model, the first unsupervised contrastive stage trains on a dataset generated from weakly related text pairs, such as question-answer pairs from forums like StackExchange and Quora… See the full description on the dataset page: https://huggingface.co/datasets/nomic-ai/nomic-embed-unsupervised-data.
1,470
45,186
[ "language:en", "size_categories:100M<n<1B", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "arxiv:2402.01613", "region:us" ]
2025-01-18T16:07:55
null
null
699946473ccabf2d24116f0f
Roman1111111/gemini-3.1-pro-hard-high-reasoning
Roman1111111
{"license": "mit", "task_categories": ["question-answering", "text-generation", "reasoning"], "tags": ["code", "finance", "legal", "agent", "chemistry", "physics", "synthetic", "gemini-3.1-pro", "high-reasoning", "expert-level"], "size_categories": ["1k<n<10K"], "language": ["en"]}
false
False
2026-02-21T05:50:10
28
9
false
5b9be1b2b8087b748a8a36c4d47631722d3b3d8e
Dataset Card for Gemini-3.1-Pro-Ultra-Reasoning-5.6M Dataset Details Dataset Description This dataset represents the frontier of synthetic reasoning data, generated by Gemini 3.1 Pro (High Reasoning variant). While smaller in total token volume than its predecessors (5.6M tokens), this corpus prioritizes logical density and multi-step verification. The move to the 3.1 architecture provides a measurable leap in "System 2" thinking. Unlike standard models… See the full description on the dataset page: https://huggingface.co/datasets/Roman1111111/gemini-3.1-pro-hard-high-reasoning.
507
507
[ "task_categories:question-answering", "task_categories:text-generation", "language:en", "license:mit", "size_categories:1K<n<10K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "region:us", "code", "finance", "legal", "ag...
2026-02-21T05:44:39
null
null
69a52fb3ff95a38fe27d886f
TianHongZXY/CHIMERA
TianHongZXY
{"language": ["en"], "pretty_name": "CHIMERA", "tags": ["reasoning", "chain-of-thought", "synthetic-data", "llm", "stem", "post-training"], "license": "apache-2.0", "task_categories": ["text-generation", "question-answering"], "size_categories": ["1K<n<10K"], "annotations_creators": ["machine-generated"], "configs": [{"config_name": "Qwen3-235B-2507", "default": true, "data_files": [{"split": "train", "path": "Qwen3-235B-2507/train-*.parquet"}]}, {"config_name": "Qwen3.5-397B", "data_files": [{"split": "train", "path": "Qwen3.5-397B/train-*.parquet"}]}]}
false
False
2026-03-11T04:38:56
19
9
false
d6a22de2d5a51eb8f1ac1edd6ffde4d791bd0f65
CHIMERA: Compact Synthetic Data for Generalizable LLM Reasoning CHIMERA is a compact but high-difficulty synthetic reasoning datasetwith long Chain-of-Thought (CoT) trajectories and broad STEM coverage, designed for reasoning post-training. All examples are fully LLM-generated and automatically verified without human annotation. Total: 9,225 problems Subjects: 8 Topics: 1,179 🔥 Why CHIMERA? Recent reasoning advances rely heavily on high-quality… See the full description on the dataset page: https://huggingface.co/datasets/TianHongZXY/CHIMERA.
906
906
[ "task_categories:text-generation", "task_categories:question-answering", "annotations_creators:machine-generated", "language:en", "license:apache-2.0", "size_categories:10K<n<100K", "format:parquet", "format:optimized-parquet", "modality:text", "library:datasets", "library:dask", "library:pola...
2026-03-02T06:35:31
null
null
69ae70be8e488f74a57a6010
DataPilot/AItuber-Personas-Japan
DataPilot
{"license": "odc-by", "language": ["ja"], "tags": ["synthetic"], "pretty_name": "sdg-nexus", "size_categories": ["n<1K"]}
false
False
2026-03-14T12:28:17
9
9
false
d2563a4cc8d5d847c6420959ee6691fc45b97eb8
AItuber Persona Dataset 概要 本データセットは、AItuber(AI VTuber)のペルソナ設計に必要な コンセプト設計書・実装用システムプロンプト・配信テーマリスト の3点セットを、LLMを用いて合成的に生成したものです。多様なジャンル・性格・ビジュアルの組み合わせから、即座に実運用可能な品質のAItuberキャラクターデータを提供します。 生成にはSDG-LOOMという合成データ生成パイプラインを用いました。(sdg-loom) データの説明 項目 内容 件数 195件 形式 JSONL(1行1JSON) 言語 日本語 生成日 2026年3月 ライセンス odc-by ( Open Data Commons Attribution License )… See the full description on the dataset page: https://huggingface.co/datasets/DataPilot/AItuber-Personas-Japan.
27
27
[ "language:ja", "license:odc-by", "size_categories:n<1K", "region:us", "synthetic" ]
2026-03-09T07:03:26
null
null
69b104fbc06491b1f9915fff
KaLM-Embedding/LMEB
KaLM-Embedding
{"license": "mit", "language": ["en"], "tags": ["long-horizon", "memory", "embedding", "benchmark", "openclaw", "lmeb", "mteb"], "size_categories": ["100K<n<1M"], "task_categories": ["feature-extraction"], "modalities": ["Text"], "configs": [{"config_name": "default", "data_files": "Dialogue/LoCoMo/single_hop/queries.jsonl"}], "pretty_name": "LMEB"}
false
False
2026-03-15T12:20:45
9
9
false
f137e843ba4b9439d554a8814647fd9bb62526ee
🌟 LMEB: Long-horizon Memory Embedding Benchmark 🌟 Welcome to the Long-horizon Memory Embedding Benchmark (LMEB)! Unlike existing text embedding benchmarks that narrowly focus on passage retrieval, LLMEB is designed to evaluate embedding models' ability to handle complex, long-horizon memory retrieval tasks, focusing on fragmented, context-dependent, and temporally distant information. LMEB spans 22 diverse datasets and 193 retrieval tasks, across 4 memory types: 📅 Episodic… See the full description on the dataset page: https://huggingface.co/datasets/KaLM-Embedding/LMEB.
0
0
[ "task_categories:feature-extraction", "language:en", "license:mit", "size_categories:100K<n<1M", "region:us", "long-horizon", "memory", "embedding", "benchmark", "openclaw", "lmeb", "mteb" ]
2026-03-11T06:00:27
null
null
69b186f91cde8c71bb8f76b0
Roman1111111/claude-opus-4.6-10000x
Roman1111111
{"license": "mit"}
false
False
2026-03-11T16:00:39
9
9
false
3fedde0a6ac508eb255151c9d00e5a37e2f3f16a
This is a high-fidelity reasoning dataset synthesized using Claude Opus 4.6. The dataset is designed to capture the model's internal "Chain of Thought" and reasoning traces, specifically focusing on mathematical accuracy and structured logical deduction. The dataset is intended for Supervised Fine-Tuning (SFT) and Distillation, allowing smaller open-source models to inherit the sophisticated reasoning patterns of Claude Opus 4.6. Dataset Description This collection combines high-difficulty… See the full description on the dataset page: https://huggingface.co/datasets/Roman1111111/claude-opus-4.6-10000x.
339
339
[ "license:mit", "size_categories:1K<n<10K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "region:us" ]
2026-03-11T15:15:05
null
null
6662f7cd2b8a3cd48ea74f41
lmms-lab/Video-MME
lmms-lab
{"dataset_info": {"config_name": "videomme", "features": [{"name": "video_id", "dtype": "string"}, {"name": "duration", "dtype": "string"}, {"name": "domain", "dtype": "string"}, {"name": "sub_category", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "videoID", "dtype": "string"}, {"name": "question_id", "dtype": "string"}, {"name": "task_type", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "options", "sequence": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1003241, "num_examples": 2700}], "download_size": 405167, "dataset_size": 1003241}, "configs": [{"config_name": "videomme", "data_files": [{"split": "test", "path": "videomme/test-*"}]}]}
false
False
2024-07-04T08:14:20
75
8
false
ead1408f75b618502df9a1d8e0950166bf0a2a0b
null
67,188
544,835
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "modality:video", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2024-06-07T12:06:37
null
null
67d305619f485955bf117049
nvidia/HelpSteer3
nvidia
{"license": "cc-by-4.0", "language": ["en", "zh", "ko", "fr", "es", "ru", "ja", "de", "it", "pt", "pl", "id", "nl", "vi"], "pretty_name": "HelpSteer3", "size_categories": ["10K<n<100K"], "tags": ["human-feedback", "reinforcement-learning"], "configs": [{"config_name": "preference", "default": true, "data_files": [{"split": "train", "path": "preference/train.jsonl.gz"}, {"split": "validation", "path": "preference/validation.jsonl.gz"}]}, {"config_name": "feedback", "data_files": [{"split": "train", "path": "feedback/train.jsonl.gz"}, {"split": "validation", "path": "feedback/validation.jsonl.gz"}]}, {"config_name": "edit", "data_files": [{"split": "train", "path": "edit/train.jsonl.gz"}, {"split": "validation", "path": "edit/validation.jsonl.gz"}]}, {"config_name": "edit_quality", "data_files": [{"split": "train", "path": "edit_quality/train.jsonl.gz"}, {"split": "validation", "path": "edit_quality/validation.jsonl.gz"}]}, {"config_name": "principle", "data_files": [{"split": "train", "path": "principle/train.jsonl.gz"}, {"split": "validation", "path": "principle/validation.jsonl.gz"}]}]}
false
False
2025-11-16T07:18:00
105
8
false
f6d145777bcbde96137596340fab89793acd1031
HelpSteer3 HelpSteer3 is an open-source dataset (CC-BY-4.0) that supports aligning models to become more helpful in responding to user prompts. HelpSteer3-Preference can be used to train Llama 3.3 Nemotron Super 49B v1 (for Generative RMs) and Llama 3.3 70B Instruct Models (for Bradley-Terry RMs) to produce Reward Models that score as high as 85.5% on RM-Bench and 78.6% on JudgeBench, which substantially surpass existing Reward Models on these benchmarks. HelpSteer3-Feedback and… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/HelpSteer3.
4,650
35,400
[ "language:en", "language:zh", "language:ko", "language:fr", "language:es", "language:ru", "language:ja", "language:de", "language:it", "language:pt", "language:pl", "language:id", "language:nl", "language:vi", "license:cc-by-4.0", "size_categories:100K<n<1M", "format:json", "modali...
2025-03-13T16:18:41
null
null
67d45c3d35fc7f6d2ab224c8
allenai/olmOCR-bench
allenai
{"license": "odc-by", "tags": ["text"], "configs": [{"config_name": "olmocr-bench", "data_files": [{"split": "arxiv_math", "path": ["bench_data/arxiv_math.jsonl"]}, {"split": "headers_footers", "path": ["bench_data/headers_footers.jsonl"]}, {"split": "long_tiny_text", "path": ["bench_data/long_tiny_text.jsonl"]}, {"split": "multi_column", "path": ["bench_data/multi_column.jsonl"]}, {"split": "old_scans", "path": ["bench_data/old_scans.jsonl"]}, {"split": "old_scans_math", "path": ["bench_data/old_scans_math.jsonl"]}, {"split": "table_tests", "path": ["bench_data/table_tests.jsonl"]}]}], "language": ["en"], "pretty_name": "olmOCR-bench", "size_categories": ["1K<n<10K"]}
false
False
2026-02-19T17:28:38
121
8
false
54a96a6fb6a2bd3b297e59869491db4d3625b711
olmOCR-bench olmOCR-bench is a dataset of 1,403 PDF files, plus 7,010 unit test cases that capture properties of the output that a good OCR system should have. This benchmark evaluates the ability of OCR systems to accurately convert PDF documents to markdown format while preserving critical textual and structural information. Quick links: 📃 Paper 🛠️ Code 🎮 Demo Table 1. Distribution of Test Classes by Document Source Document Source Text Present Text… See the full description on the dataset page: https://huggingface.co/datasets/allenai/olmOCR-bench.
2,402
32,850
[ "benchmark:official", "benchmark:eval-yaml", "language:en", "license:odc-by", "size_categories:1K<n<10K", "modality:document", "modality:text", "arxiv:2502.18443", "region:us", "text" ]
2025-03-14T16:41:33
null
null
689d7cdd5219881b53bd55f3
nvidia/Nemotron-Pretraining-Dataset-sample
nvidia
{"license": "other", "configs": [{"config_name": "Nemotron-CC-MATH", "data_files": [{"path": "Nemotron-CC-MATH/*.parquet", "split": "train"}]}, {"config_name": "Nemotron-CC-High-Quality", "data_files": [{"path": "Nemotron-CC-High-Quality/*.parquet", "split": "train"}]}, {"config_name": "Nemotron-CC-High-Quality-Synthetic", "data_files": [{"path": "Nemotron-CC-High-Quality-Synthetic/*.parquet", "split": "train"}]}, {"config_name": "Nemotron-CC-Diverse-QA", "data_files": [{"path": "Nemotron-CC-Diverse-QA/*.parquet", "split": "train"}]}, {"config_name": "Nemotron-CC-Translated-Diverse-QA", "data_files": [{"path": "Nemotron-CC-Translated-Diverse-QA/*.parquet", "split": "train"}]}, {"config_name": "Nemotron-Synthetic-Code", "data_files": [{"path": "Nemotron-Synthetic-Code/*.parquet", "split": "train"}]}, {"config_name": "Nemotron-SFT-Code", "data_files": [{"path": "Nemotron-SFT-Code/*.parquet", "split": "train"}]}, {"config_name": "Nemotron-SFT-General", "data_files": [{"path": "Nemotron-SFT-General/*.parquet", "split": "train"}]}, {"config_name": "Nemotron-SFT-MATH", "data_files": [{"path": "Nemotron-SFT-MATH/*.parquet", "split": "train"}]}, {"config_name": "Nemotron-Code-Metadata", "data_files": [{"path": "Nemotron-Code-Metadata/*.parquet", "split": "train"}]}], "track_downloads": true}
false
False
2025-12-22T17:07:37
50
8
false
3ad096e6394e487bb4f778733300da85275bb449
Nemotron-Pre-Training-Dataset-v1 Release Data Overview This pretraining dataset, for generative AI model training, preserves high-value math and code while enriching it with diverse multilingual Q&A, fueling the next generation of intelligent, globally-capable models. This dataset supports NVIDIA Nemotron Nano 2, a family of large language models (LLMs) that consists of the NVIDIA-Nemotron-Nano-9B-v2, NVIDIA-Nemotron-Nano-9B-v2-Base, and NVIDIA-Nemotron-Nano-12B-v2-Base… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/Nemotron-Pretraining-Dataset-sample.
777
7,529
[ "license:other", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "arxiv:2508.14444", "region:us" ]
2025-08-14T06:06:21
null
null
693e2682c9d7af74f71b3e5f
nvidia/Nemotron-Agentic-v1
nvidia
{"license": "cc-by-4.0", "language": ["en"], "configs": [{"config_name": "default", "data_files": [{"split": "interactive_agent", "path": "data/interactive_agent.jsonl"}, {"split": "tool_calling", "path": "data/tool_calling.jsonl"}]}]}
false
False
2025-12-15T13:48:35
156
8
false
650d590978ca35c8f1ecea2faf136e5fac421b62
Dataset Description: The Nemotron-Agentic-Tool-Use-v1 dataset is designed to strengthen models’ capabilities as interactive, tool-using agents. It focuses on multi-turn conversations where language models decompose user goals, decide when to call tools, and reason over tool outputs to complete tasks reliably and safely. This dataset is ready for commercial use. The Nemotron-Agentic-Tool-Use-v1 dataset contains the following subsets: Interactive Agent This dataset… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/Nemotron-Agentic-v1.
889
4,636
[ "language:en", "license:cc-by-4.0", "region:us" ]
2025-12-14T02:52:50
null
null
698dd2570db46090757245bc
markov-ai/computer-use
markov-ai
{"license": "apache-2.0", "task_categories": ["robotics", "image-to-text"], "tags": ["computer-use", "gui-agent", "osworld", "trajectories", "reinforcement-learning"], "size_categories": ["n<1K"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*.parquet"}]}]}
false
False
2026-02-13T15:11:21
59
8
false
de58c88b4b33dd03fa4d5d0f490748f576bd37b3
Computer Use Trajectories Successful computer-use agent trajectories collected on OSWorld tasks. Dataset Details Rows: 160 (one per task trajectory) Steps: 1,378 total across all trajectories (avg ~8.6 steps/task) Agent: Gemini 3 Flash Preview with linearized accessibility-tree grounding Score filter: Only trajectories with score = 1.0 (fully successful) Domains Domain Tasks Description chrome 21 Web browsing tasks in Google Chrome gimp 15 Image… See the full description on the dataset page: https://huggingface.co/datasets/markov-ai/computer-use.
905
961
[ "task_categories:robotics", "task_categories:image-to-text", "license:apache-2.0", "size_categories:n<1K", "format:parquet", "format:optimized-parquet", "modality:image", "modality:text", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:polars", "librar...
2026-02-12T13:15:03
null
null
6996a0f665f352f44ec11a37
Roman1111111/gemini-3-pro-10000x-hard-high-reasoning
Roman1111111
{"license": "mit", "task_categories": ["question-answering", "text-generation", "reasoning"], "tags": ["code", "finance", "legal", "agent", "chemistry", "art", "synthetic", "gemini-3-pro", "hard-reasoning", "mathematics", "physics"], "size_categories": ["10K<n<100K"], "language": ["en"]}
false
False
2026-02-20T03:49:27
42
8
false
5feedf31aaa6ff0ae0ee1bc8a169bc6bfaccbd5a
Dataset Card for Gemini-3-Pro-Reasoning-10000x-high-reasoning Dataset Details Dataset Description Suggestion: I would use it to fine tune glm- 4.7-flash, or other 30b moe models, but 2-20b llms work perfectly, you can fine tune Nanbeige 4.1 - 3b, gpt-oss:20b, or qwen3: 4b, 8b(note: better to fine tune newest versions(2507 4b qwen3 , or qwen 3 vl:8b)) for maximum improvement. This dataset is a high-complexity synthetic reasoning corpus containing… See the full description on the dataset page: https://huggingface.co/datasets/Roman1111111/gemini-3-pro-10000x-hard-high-reasoning.
989
989
[ "task_categories:question-answering", "task_categories:text-generation", "language:en", "license:mit", "size_categories:10K<n<100K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "region:us", "code", "finance", "legal", "...
2026-02-19T05:34:46
null
null
69ae7132f939066a47e28bb8
humanlaya-data-lab/OneMillion-Bench
humanlaya-data-lab
{"license": "apache-2.0", "task_categories": ["question-answering", "text-generation"], "language": ["en", "zh"], "tags": ["economics_and_finance", "healthcare_and_medicine", "industry", "law", "natural_science"], "pretty_name": "$OneMillion-Bench", "size_categories": ["n<1K"]}
false
False
2026-03-11T06:34:22
8
8
false
5cf9d5005e2e1f20b4481ed50846161697e82a73
$OneMillion-Bench A bilingual (Global/Chinese) realistic expert-level benchmark for evaluating language agents across 5 professional domains. The benchmark contains 400 entries with detailed, weighted rubric-based grading criteria designed for fine-grained evaluation of domain expertise, analytical reasoning, and instruction following. Dataset Structure Each subdirectory is a Hugging Face subset (configuration), and all data is in the test split. $OneMillion-Bench/ ├──… See the full description on the dataset page: https://huggingface.co/datasets/humanlaya-data-lab/OneMillion-Bench.
218
218
[ "task_categories:question-answering", "task_categories:text-generation", "language:en", "language:zh", "license:apache-2.0", "size_categories:n<1K", "modality:text", "arxiv:2603.07980", "region:us", "economics_and_finance", "healthcare_and_medicine", "industry", "law", "natural_science" ]
2026-03-09T07:05:22
null
null
69b47da50db2f0b674627622
yatin-superintelligence/Adversarial-Agent-Intent-Safety-Analysis-240K
yatin-superintelligence
{"pretty_name": "Adversarial Agent Intent Safety Analysis 240K", "license": "openrail", "language": ["en"], "library_name": "datasets", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification", "text-generation", "question-answering", "reinforcement-learning", "robotics"], "tags": ["agent", "text", "safety", "jailbreak", "alignment", "trust", "digital-arrest", "robotics", "reinforcement-learning", "surveillance", "synthetic", "adversarial", "intent-analysis", "red-teaming", "guardrails", "dual-use", "classification", "agentic", "reasoning", "system-2", "chain-of-thought", "cybersecurity", "malware", "hacker", "document", "tool-use", "software", "engineering", "code", "legal", "medical", "healthcare", "biology", "chemistry", "finance", "science", "datasets", "parquet", "pandas", "polars", "dask"], "extra_gated_prompt": "Please complete this form to request access to the Adversarial Agent Intent Safety Analysis 240K dataset. This dataset is released for AI safety research, red-teaming, and responsible model development.", "extra_gated_fields": {"Full Name": "text", "Email": "text", "Organization / Institution / Company": "text", "Academic or Commercial Use": {"type": "select", "options": ["Academic / Research", "Commercial", "Personal / Non-commercial", "Government / Policy"]}, "Country": "country"}, "dataset_info": {"features": [{"name": "batch_index", "dtype": "int64"}, {"name": "mode", "dtype": "string"}, {"name": "sophistication", "dtype": "string"}, {"name": "risk_level", "dtype": "string"}, {"name": "adversarial_prompt", "dtype": "string"}, {"name": "surface_interpretation", "dtype": "string"}, {"name": "intent_analysis", "dtype": "string"}, {"name": "clarifying_questions", "dtype": "string"}], "splits": [{"name": "train", "num_examples": 242454}]}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "adversarial_intent_safety_*.parquet"}]}]}
false
auto
2026-03-15T13:06:29
8
8
false
dc44c76a1b831fee8ee07cf1a9bace762b0af3a4
Adversarial Agent Intent Safety Analysis 240K Abstract The Adversarial-Agent-Intent-Safety-Analysis-240K is a deterministically structured dataset featuring 242,454 context-rich adversarial prompts and safety evaluations. Engineered strictly for training frontier command-and-control models, guardrail classifiers, and red-teaming agents, it encourages models to parse multi-layered intention across 126 critical risk vectors. This design trains models to decouple the surface… See the full description on the dataset page: https://huggingface.co/datasets/yatin-superintelligence/Adversarial-Agent-Intent-Safety-Analysis-240K.
26
26
[ "task_categories:text-classification", "task_categories:text-generation", "task_categories:question-answering", "task_categories:reinforcement-learning", "task_categories:robotics", "language:en", "license:openrail", "size_categories:100K<n<1M", "format:parquet", "modality:text", "modality:docum...
2026-03-13T21:12:05
null
null
Changelog

NEW Changes March 11th 2026

  • Added new split: arxiv_papers, sourced from the Hugging Face /api/papers endpoint
  • papers continues to point to daily_papers.parquet, which is the Daily Papers feed

NEW Changes July 25th

  • Added a `baseModels` field to the `models` split, showing the models that the user tagged as base models for that model

Example:

{
  "models": [
    {
      "_id": "687de260234339fed21e768a",
      "id": "Qwen/Qwen3-235B-A22B-Instruct-2507"
    }
  ],
  "relation": "quantized"
}
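A `baseModels` entry like the one above can be unpacked with a few lines of Python (a minimal sketch; `base_model_ids` is our own helper name, while the field names come straight from the example):

```python
import json

# Hypothetical helper (not part of hub-stats itself) that pulls the
# base-model ids out of one baseModels entry.
def base_model_ids(entry: dict) -> list[str]:
    return [m["id"] for m in entry.get("models", [])]

entry = json.loads("""
{
  "models": [
    {
      "_id": "687de260234339fed21e768a",
      "id": "Qwen/Qwen3-235B-A22B-Instruct-2507"
    }
  ],
  "relation": "quantized"
}
""")

print(base_model_ids(entry))  # -> ['Qwen/Qwen3-235B-A22B-Instruct-2507']
print(entry["relation"])      # -> quantized
```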

NEW Changes July 9th

  • Fixed an integer overflow in the gguf column that had been breaking the import pipeline for a few weeks ✅

NEW Changes Feb 27th

  • Added new fields on the models split: downloadsAllTime, safetensors, gguf

  • Added new field on the datasets split: downloadsAllTime

  • Added new split: papers which is all of the Daily Papers
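Once a split is loaded, the new per-row download counters can be aggregated directly. A minimal sketch (the two sample rows are copied from the preview above; only the field names `id`, `downloads`, and `downloadsAllTime` come from the dataset schema):

```python
# Two rows from the datasets split, reduced to the columns we need.
rows = [
    {"id": "nvidia/HelpSteer3", "downloads": 4650, "downloadsAllTime": 35400},
    {"id": "lmms-lab/Video-MME", "downloads": 67188, "downloadsAllTime": 544835},
]

# Total all-time downloads across the sample, and the busiest dataset
# by last-month downloads.
total_all_time = sum(r["downloadsAllTime"] for r in rows)
busiest = max(rows, key=lambda r: r["downloads"])

print(total_all_time)  # -> 580235
print(busiest["id"])   # -> lmms-lab/Video-MME
```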

Updated Daily
