Listing schema (one row per column, with the observed minimum and maximum of each column's length or value):

| Column | Type | Min | Max |
|---|---|---|---|
| datasetId | large_string (length) | 6 | 107 |
| author | large_string (length) | 3 | 34 |
| last_modified | large_string (date) | 2021-05-20 00:57:22 | 2025-05-05 14:13:47 |
| downloads | int64 | 0 | 4.28M |
| likes | int64 | 0 | 7.74k |
| tags | large_list (length) | 1 | 2.03k |
| task_categories | large_list (length) | 0 | 16 |
| createdAt | large_string (date) | 2022-03-02 23:29:22 | 2025-05-05 14:04:15 |
| trending_score | float64 | 1 | 39 |
| card | large_string (length) | 31 | 1M |
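Because the listing itself is plain tabular data, it can be queried directly. A minimal sketch with polars, assuming a hypothetical local parquet export of the listing (the file name is an assumption, not something the listing specifies):

```python
import polars as pl

# Hypothetical local export of the listing; adjust the path as needed.
df = pl.read_parquet("dataset_cards_with_metadata.parquet")

# Example query over the numeric columns: the most-downloaded entries.
top = (
    df.select(["datasetId", "author", "downloads", "likes"])
    .sort("downloads", descending=True)
    .head(10)
)
print(top)
```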
datasetId: Qika/xray_stage3
author: Qika
last_modified: 2025-05-05T07:03:50Z
downloads: 0
likes: 0
tags: ["size_categories:1K<n<10K", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us"]
task_categories: []
createdAt: 2025-05-05T06:58:05Z
trending_score: null
card:

---
dataset_info:
  features:
  - name: images
    sequence: image
  - name: problem
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: train
    num_bytes: 189960810848.1
    num_examples: 4140
  - name: test
    num_bytes: 21230199672.3
    num_examples: 460
  download_size: 223217163
  dataset_size: 211191010520.4
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
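The card above reports a multi-gigabyte decoded size for the train split, so streaming is a reasonable way to peek at the data. A sketch using the `datasets` library, with field names taken from the feature list above:

```python
from datasets import load_dataset

# Stream to avoid materializing the full decoded image data locally.
ds = load_dataset("Qika/xray_stage3", split="train", streaming=True)

example = next(iter(ds))
print(example["problem"])       # question text
print(example["answer"])        # reference answer
print(len(example["images"]))   # one or more images per example
```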
datasetId: mlfoundations-dev/openthoughts2_math_1k
author: mlfoundations-dev
last_modified: 2025-05-05T04:00:31Z
downloads: 0
likes: 0
tags: ["region:us"]
task_categories: []
createdAt: 2025-05-05T04:00:29Z
trending_score: null
card:

---
dataset_info:
  features:
  - name: conversations
    list:
    - name: from
      dtype: string
    - name: value
      dtype: string
  - name: _domain
    dtype: string
  - name: system
    dtype: string
  - name: problem
    dtype: string
  - name: reasoning
    dtype: string
  - name: deepseek_solution
    dtype: string
  - name: question
    dtype: string
  - name: source
    dtype: string
  - name: id
    dtype: int64
  - name: extracted_instruction
    dtype: string
  splits:
  - name: train
    num_bytes: 14701567.995440913
    num_examples: 1000
  download_size: 6381529
  dataset_size: 14701567.995440913
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
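A quick way to inspect the `conversations` structure declared above (a list of `from`/`value` chat turns); a minimal sketch, assuming the standard `datasets` loader:

```python
from datasets import load_dataset

ds = load_dataset("mlfoundations-dev/openthoughts2_math_1k", split="train")

# Each row carries a list of chat turns plus the extracted problem fields.
for turn in ds[0]["conversations"]:
    print(f"{turn['from']}: {turn['value'][:80]}...")
```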
datasetId: mih12345/italian_clean_5th_may_v1
author: mih12345
last_modified: 2025-05-05T02:42:42Z
downloads: 0
likes: 0
tags: ["license:mit", "region:us"]
task_categories: []
createdAt: 2025-05-05T02:15:22Z
trending_score: null
card:

---
license: mit
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: filename
    dtype: string
  - name: transcription
    dtype: string
  - name: audio
    dtype: audio
  splits:
  - name: train
    num_bytes: 620294074.844
    num_examples: 1223
  download_size: 566398642
  dataset_size: 620294074.844
---
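Because the `audio` column is declared with the `audio` feature type, `datasets` decodes it to a waveform on access. A minimal sketch:

```python
from datasets import load_dataset

ds = load_dataset("mih12345/italian_clean_5th_may_v1", split="train")

row = ds[0]
print(row["transcription"])
audio = row["audio"]  # dict with "array", "sampling_rate", and "path"
print(audio["sampling_rate"], audio["array"].shape)
```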
datasetId: javasoup/koch_test_2025_1
author: javasoup
last_modified: 2025-05-05T01:45:06Z
downloads: 0
likes: 0
tags: ["task_categories:robotics", "license:apache-2.0", "region:us", "LeRobot", "tutorial"]
task_categories: ["robotics"]
createdAt: 2025-05-05T01:44:56Z
trending_score: null
card:

---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
  data_files: data/*/*.parquet
---

This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).

## Dataset Description

- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0

## Dataset Structure

[meta/info.json](meta/info.json):

```json
{
  "codebase_version": "v2.1",
  "robot_type": "koch",
  "total_episodes": 2,
  "total_frames": 1774,
  "total_tasks": 1,
  "total_videos": 4,
  "total_chunks": 1,
  "chunks_size": 1000,
  "fps": 30,
  "splits": { "train": "0:2" },
  "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
  "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
  "features": {
    "action": {
      "dtype": "float32",
      "shape": [6],
      "names": ["main_shoulder_pan", "main_shoulder_lift", "main_elbow_flex", "main_wrist_flex", "main_wrist_roll", "main_gripper"]
    },
    "observation.state": {
      "dtype": "float32",
      "shape": [6],
      "names": ["main_shoulder_pan", "main_shoulder_lift", "main_elbow_flex", "main_wrist_flex", "main_wrist_roll", "main_gripper"]
    },
    "observation.images.top": {
      "dtype": "video",
      "shape": [480, 640, 3],
      "names": ["height", "width", "channels"],
      "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false }
    },
    "observation.images.front": {
      "dtype": "video",
      "shape": [480, 640, 3],
      "names": ["height", "width", "channels"],
      "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false }
    },
    "timestamp": { "dtype": "float32", "shape": [1], "names": null },
    "frame_index": { "dtype": "int64", "shape": [1], "names": null },
    "episode_index": { "dtype": "int64", "shape": [1], "names": null },
    "index": { "dtype": "int64", "shape": [1], "names": null },
    "task_index": { "dtype": "int64", "shape": [1], "names": null }
  }
}
```

## Citation

**BibTeX:**

```bibtex
[More Information Needed]
```
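The card's default config points at `data/*/*.parquet`, so the tabular part of the episodes (states, actions, indices, but not the videos) should load with plain `datasets`; a sketch, with field names taken from the `info.json` above:

```python
from datasets import load_dataset

ds = load_dataset("javasoup/koch_test_2025_1", split="train")

frame = ds[0]
print(frame["action"])  # 6 joint targets, per the feature spec
print(frame["episode_index"], frame["frame_index"], frame["timestamp"])
```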
datasetId: dongseon/so100_0505
author: dongseon
last_modified: 2025-05-05T01:34:47Z
downloads: 0
likes: 0
tags: ["task_categories:robotics", "license:apache-2.0", "region:us", "LeRobot", "so100", "tutorial"]
task_categories: ["robotics"]
createdAt: 2025-05-05T01:34:32Z
trending_score: null
card:

---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
  data_files: data/*/*.parquet
---

This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).

## Dataset Description

- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0

## Dataset Structure

[meta/info.json](meta/info.json):

```json
{
  "codebase_version": "v2.1",
  "robot_type": "so100",
  "total_episodes": 10,
  "total_frames": 5632,
  "total_tasks": 1,
  "total_videos": 20,
  "total_chunks": 1,
  "chunks_size": 1000,
  "fps": 30,
  "splits": { "train": "0:10" },
  "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
  "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
  "features": {
    "action": {
      "dtype": "float32",
      "shape": [6],
      "names": ["main_shoulder_pan", "main_shoulder_lift", "main_elbow_flex", "main_wrist_flex", "main_wrist_roll", "main_gripper"]
    },
    "observation.state": {
      "dtype": "float32",
      "shape": [6],
      "names": ["main_shoulder_pan", "main_shoulder_lift", "main_elbow_flex", "main_wrist_flex", "main_wrist_roll", "main_gripper"]
    },
    "observation.images.laptop": {
      "dtype": "video",
      "shape": [480, 640, 3],
      "names": ["height", "width", "channels"],
      "info": { "video.fps": 30.0, "video.height": 480, "video.width": 640, "video.channels": 3, "video.codec": "h264", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "has_audio": false }
    },
    "observation.images.phone": {
      "dtype": "video",
      "shape": [480, 640, 3],
      "names": ["height", "width", "channels"],
      "info": { "video.fps": 30.0, "video.height": 480, "video.width": 640, "video.channels": 3, "video.codec": "h264", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "has_audio": false }
    },
    "timestamp": { "dtype": "float32", "shape": [1], "names": null },
    "frame_index": { "dtype": "int64", "shape": [1], "names": null },
    "episode_index": { "dtype": "int64", "shape": [1], "names": null },
    "index": { "dtype": "int64", "shape": [1], "names": null },
    "task_index": { "dtype": "int64", "shape": [1], "names": null }
  }
}
```

## Citation

**BibTeX:**

```bibtex
[More Information Needed]
```
datasetId: dushj98/waikato_aerial_imagery_2017_7cls
author: dushj98
last_modified: 2025-05-05T00:42:47Z
downloads: 0
likes: 0
tags: ["region:us"]
task_categories: []
createdAt: 2025-05-05T00:42:11Z
trending_score: null
card:

---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: label
    dtype:
      class_label:
        names:
          '0': deciduous_hardwood
          '1': harvested_forest
          '2': high_producing_grassland
          '3': indigenous_forest
          '4': lake_pond
          '5': shortrotation_cropland
          '6': urban_build_up
  splits:
  - name: train
    num_bytes: 385213214.82
    num_examples: 4662
  - name: validation
    num_bytes: 192453335.532
    num_examples: 2338
  download_size: 577723078
  dataset_size: 577666550.352
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
---
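The `class_label` feature above maps integer labels to the seven land-cover class names, which `datasets` exposes via `features`; a short sketch:

```python
from datasets import load_dataset

ds = load_dataset("dushj98/waikato_aerial_imagery_2017_7cls", split="train")

names = ds.features["label"].names  # the 7 class names from the YAML above
sample = ds[0]
print(names[sample["label"]], sample["image"].size)
```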
datasetId: rainbowbridge/x_dataset_15977
author: rainbowbridge
last_modified: 2025-05-05T00:39:48Z
downloads: 1,093
likes: 0
tags: ["task_categories:text-classification", "task_categories:token-classification", "task_categories:question-answering", "task_categories:summarization", "task_categories:text-generation", "task_ids:sentiment-analysis", "task_ids:topic-classification", "task_ids:named-entity-recognition", "task_ids:language-modeling", "task_ids:text-scoring", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "task_ids:extractive-qa", "task_ids:news-articles-summarization", "multilinguality:multilingual", "source_datasets:original", "license:mit", "size_categories:100M<n<1B", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us"]
task_categories: ["text-classification", "token-classification", "question-answering", "summarization", "text-generation"]
createdAt: 2025-01-29T02:44:14Z
trending_score: null
card:

---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---

# Bittensor Subnet 13 X (Twitter) Dataset

<center>
  <img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>

<center>
  <img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>

## Dataset Description

- **Repository:** rainbowbridge/x_dataset_15977
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5DfHeJeLJRLeMNMaatPDfKYJDzXGCN7tDcxPrGRzeNgfCucD

### Dataset Summary

This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks. For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).

### Supported Tasks

The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs. For example:

- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling

### Languages

Primarily English, though the data can be multilingual because of the decentralized way it is collected.

## Dataset Structure

### Data Instances

Each instance represents a single tweet with the following fields:

### Data Fields

- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.

### Data Splits

This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.

## Dataset Creation

### Source Data

Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.

### Personal and Sensitive Information

All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.

## Considerations for Using the Data

### Social Impact and Biases

Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.

### Limitations

- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.

## Additional Information

### Licensing Information

The dataset is released under the MIT license. Use of this dataset is also subject to the X Terms of Use.

### Citation Information

If you use this dataset in your research, please cite it as follows:

```
@misc{rainbowbridge2025datauniversex_dataset_15977,
  title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
  author={rainbowbridge},
  year={2025},
  url={https://huggingface.co/datasets/rainbowbridge/x_dataset_15977},
}
```

### Contributions

To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.

## Dataset Statistics

[This section is automatically updated]

- **Total Instances:** 40599220
- **Date Range:** 2025-01-23T00:00:00Z to 2025-02-13T00:00:00Z
- **Last Updated:** 2025-02-18T20:52:28Z

### Data Distribution

- Tweets with hashtags: 48.22%
- Tweets without hashtags: 51.78%

### Top 10 Hashtags

For full statistics, please refer to the `stats.json` file in the repository.

| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|------------|
| 1 | NULL | 21021404 | 51.78% |
| 2 | #riyadh | 328678 | 0.81% |
| 3 | #zelena | 271129 | 0.67% |
| 4 | #tiktok | 184793 | 0.46% |
| 5 | #jhope_at_galadespiècesjaunes | 155793 | 0.38% |
| 6 | #bbb25 | 121789 | 0.30% |
| 7 | #ad | 108287 | 0.27% |
| 8 | #bbmzansi | 62585 | 0.15% |
| 9 | #grandefratello | 58608 | 0.14% |
| 10 | #pr | 56638 | 0.14% |

## Update History

| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-29T02:45:06Z | 2152001 | 2152001 |
| 2025-02-01T14:47:40Z | 8070361 | 10222362 |
| 2025-02-05T02:50:45Z | 9239941 | 19462303 |
| 2025-02-08T14:54:26Z | 10767494 | 30229797 |
| 2025-02-12T03:00:46Z | 8737385 | 38967182 |
| 2025-02-18T05:51:19Z | 808942 | 39776124 |
| 2025-02-18T20:52:28Z | 823096 | 40599220 |
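Since the card recommends user-defined, timestamp-based splits, here is a sketch of one way to do that with streaming. The cutoff date is arbitrary, and `datetime` is a string field per the data dictionary above, so ISO-formatted dates compare lexicographically:

```python
from datasets import load_dataset

ds = load_dataset("rainbowbridge/x_dataset_15977", split="train", streaming=True)

# Arbitrary cutoff; adjust to the date range you need.
cutoff = "2025-02-01"
train = ds.filter(lambda row: row["datetime"] < cutoff)
test = ds.filter(lambda row: row["datetime"] >= cutoff)
```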
datasetId: HungVu2003/opt-350m_beta_1.0_alpha_0.8_num-company_3_dataset_2_for_gen_4
author: HungVu2003
last_modified: 2025-05-05T00:23:15Z
downloads: 0
likes: 0
tags: ["region:us"]
task_categories: []
createdAt: 2025-05-05T00:23:13Z
trending_score: null
card:

---
dataset_info:
  features:
  - name: question
    dtype: string
  splits:
  - name: train
    num_bytes: 4241647
    num_examples: 12498
  download_size: 1773113
  dataset_size: 4241647
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
datasetId: gokulp01/ae598_main
author: gokulp01
last_modified: 2025-05-04T23:14:06Z
downloads: 0
likes: 0
tags: ["task_categories:robotics", "license:apache-2.0", "region:us", "LeRobot", "so100", "tutorial"]
task_categories: ["robotics"]
createdAt: 2025-05-04T23:02:40Z
trending_score: null
card:

---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
  data_files: data/*/*.parquet
---

This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).

## Dataset Description

- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0

## Dataset Structure

[meta/info.json](meta/info.json):

```json
{
  "codebase_version": "v2.1",
  "robot_type": "so100",
  "total_episodes": 1,
  "total_frames": 17,
  "total_tasks": 1,
  "total_videos": 2,
  "total_chunks": 1,
  "chunks_size": 1000,
  "fps": 30,
  "splits": { "train": "0:1" },
  "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
  "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
  "features": {
    "action": {
      "dtype": "float32",
      "shape": [6],
      "names": ["main_shoulder_pan", "main_shoulder_lift", "main_elbow_flex", "main_wrist_flex", "main_wrist_roll", "main_gripper"]
    },
    "observation.state": {
      "dtype": "float32",
      "shape": [6],
      "names": ["main_shoulder_pan", "main_shoulder_lift", "main_elbow_flex", "main_wrist_flex", "main_wrist_roll", "main_gripper"]
    },
    "observation.images.laptop": {
      "dtype": "video",
      "shape": [480, 640, 3],
      "names": ["height", "width", "channels"],
      "info": { "video.fps": 30.0, "video.height": 480, "video.width": 640, "video.channels": 3, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "has_audio": false }
    },
    "observation.images.phone": {
      "dtype": "video",
      "shape": [480, 640, 3],
      "names": ["height", "width", "channels"],
      "info": { "video.fps": 30.0, "video.height": 480, "video.width": 640, "video.channels": 3, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "has_audio": false }
    },
    "timestamp": { "dtype": "float32", "shape": [1], "names": null },
    "frame_index": { "dtype": "int64", "shape": [1], "names": null },
    "episode_index": { "dtype": "int64", "shape": [1], "names": null },
    "index": { "dtype": "int64", "shape": [1], "names": null },
    "task_index": { "dtype": "int64", "shape": [1], "names": null }
  }
}
```

## Citation

**BibTeX:**

```bibtex
[More Information Needed]
```
datasetId: HungVu2003/opt-350m_beta_1.0_alpha_0.4_num-company_2_dataset_1_for_gen_1_v2
author: HungVu2003
last_modified: 2025-05-04T22:36:10Z
downloads: 0
likes: 0
tags: ["region:us"]
task_categories: []
createdAt: 2025-05-04T22:36:08Z
trending_score: null
card:

---
dataset_info:
  features:
  - name: question
    dtype: string
  splits:
  - name: train
    num_bytes: 6163003
    num_examples: 15000
  download_size: 3170274
  dataset_size: 6163003
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
datasetId: Odog16/eval_act_lekiwi_test_4.2
author: Odog16
last_modified: 2025-05-04T22:28:14Z
downloads: 0
likes: 0
tags: ["task_categories:robotics", "license:apache-2.0", "region:us", "LeRobot", "tutorial"]
task_categories: ["robotics"]
createdAt: 2025-05-04T22:27:17Z
trending_score: null
card:

---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
  data_files: data/*/*.parquet
---

This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).

## Dataset Description

- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0

## Dataset Structure

[meta/info.json](meta/info.json):

```json
{
  "codebase_version": "v2.1",
  "robot_type": "lekiwi",
  "total_episodes": 10,
  "total_frames": 7945,
  "total_tasks": 1,
  "total_videos": 20,
  "total_chunks": 1,
  "chunks_size": 1000,
  "fps": 30,
  "splits": { "train": "0:10" },
  "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
  "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
  "features": {
    "action": {
      "dtype": "float32",
      "shape": [9],
      "names": ["shoulder_pan", "shoulder_lift", "elbow_flex", "wrist_flex", "wrist_roll", "gripper", "x_mm", "y_mm", "theta"]
    },
    "observation.state": {
      "dtype": "float32",
      "shape": [9],
      "names": ["shoulder_pan", "shoulder_lift", "elbow_flex", "wrist_flex", "wrist_roll", "gripper", "x_mm", "y_mm", "theta"]
    },
    "observation.images.front": {
      "dtype": "video",
      "shape": [480, 640, 3],
      "names": ["height", "width", "channels"],
      "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false }
    },
    "observation.images.wrist": {
      "dtype": "video",
      "shape": [480, 640, 3],
      "names": ["height", "width", "channels"],
      "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false }
    },
    "timestamp": { "dtype": "float32", "shape": [1], "names": null },
    "frame_index": { "dtype": "int64", "shape": [1], "names": null },
    "episode_index": { "dtype": "int64", "shape": [1], "names": null },
    "index": { "dtype": "int64", "shape": [1], "names": null },
    "task_index": { "dtype": "int64", "shape": [1], "names": null }
  }
}
```

## Citation

**BibTeX:**

```bibtex
[More Information Needed]
```
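For full episodes including video frames, LeRobot ships its own loader. A sketch assuming the lerobot v2.x codebase layout; the import path and sample format may differ between releases:

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

ds = LeRobotDataset("Odog16/eval_act_lekiwi_test_4.2")

sample = ds[0]
# 9 state values: 6 arm joints plus the mobile base's x_mm, y_mm, theta.
print(sample["observation.state"].shape)
print(sample["action"].shape)
```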
datasetId: osama24sy/llama3.1-8b-it-10k-qwen-singleturn-onesolution-r64-MODEL-countdown-results
author: osama24sy
last_modified: 2025-05-04T22:02:39Z
downloads: 0
likes: 0
tags: ["region:us"]
task_categories: []
createdAt: 2025-05-04T22:02:38Z
trending_score: null
card:

---
dataset_info:
  features:
  - name: index
    dtype: int64
  - name: numbers
    sequence: int64
  - name: target
    dtype: int64
  - name: operations
    sequence:
      sequence: string
  - name: response
    dtype: string
  - name: token_count
    dtype: int64
  splits:
  - name: train
    num_bytes: 204506
    num_examples: 150
  download_size: 82984
  dataset_size: 204506
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
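The `token_count` column makes it easy to summarize response lengths across the 150 countdown games; a minimal sketch:

```python
from datasets import load_dataset

ds = load_dataset(
    "osama24sy/llama3.1-8b-it-10k-qwen-singleturn-onesolution-r64-MODEL-countdown-results",
    split="train",
)
mean_tokens = sum(ds["token_count"]) / len(ds)
print(f"{len(ds)} games, mean response length {mean_tokens:.0f} tokens")
```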
datasetId: SayantanJoker/Shrutilipi_Hindi_resampled_44100_merged_13_quality_metadata_description
author: SayantanJoker
last_modified: 2025-05-04T20:45:51Z
downloads: 0
likes: 0
tags: ["region:us"]
task_categories: []
createdAt: 2025-05-04T20:13:26Z
trending_score: null
card:

---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: file_name
    dtype: string
  - name: utterance_pitch_mean
    dtype: float32
  - name: utterance_pitch_std
    dtype: float32
  - name: snr
    dtype: float64
  - name: c50
    dtype: float64
  - name: speaking_rate
    dtype: string
  - name: phonemes
    dtype: string
  - name: stoi
    dtype: float64
  - name: si-sdr
    dtype: float64
  - name: pesq
    dtype: float64
  - name: noise
    dtype: string
  - name: reverberation
    dtype: string
  - name: speech_monotony
    dtype: string
  - name: sdr_noise
    dtype: string
  - name: pesq_speech_quality
    dtype: string
  - name: text_description
    dtype: string
  splits:
  - name: train
    num_bytes: 29801454
    num_examples: 49807
  download_size: 9403220
  dataset_size: 29801454
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
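The per-utterance quality scores (SNR, PESQ, STOI, and so on) lend themselves to filtering. A sketch with arbitrary thresholds; the cutoffs below are assumptions, not recommendations from the card:

```python
from datasets import load_dataset

ds = load_dataset(
    "SayantanJoker/Shrutilipi_Hindi_resampled_44100_merged_13_quality_metadata_description",
    split="train",
)

# Hypothetical quality gate: keep high-SNR, high-PESQ utterances.
clean = ds.filter(lambda row: row["snr"] > 20.0 and row["pesq"] > 3.0)
print(f"{len(clean)} of {len(ds)} rows pass")
```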
datasetId: SeaLLMs/TrueFalse-Statements-multilingual
author: SeaLLMs
last_modified: 2025-05-04T20:23:41Z
downloads: 0
likes: 0
tags: ["region:us"]
task_categories: []
createdAt: 2025-05-04T20:23:39Z
trending_score: null
card:

---
dataset_info:
  features:
  - name: statements
    dtype: string
  - name: true/false
    dtype: bool
  - name: category
    dtype: string
  - name: language
    dtype: string
  splits:
  - name: test
    num_bytes: 4727153
    num_examples: 48680
  download_size: 1369810
  dataset_size: 4727153
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---
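A sketch for slicing the test split by language; note the label column is literally named `true/false`, and the exact language codes stored in the `language` column are an assumption here:

```python
from datasets import load_dataset

ds = load_dataset("SeaLLMs/TrueFalse-Statements-multilingual", split="test")

en = ds.filter(lambda row: row["language"] == "en")  # code value assumed
frac_true = sum(row["true/false"] for row in en) / max(len(en), 1)
print(len(en), frac_true)
```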
datasetId: BasedLukas/so101_test_3
author: BasedLukas
last_modified: 2025-05-04T19:11:08Z
downloads: 0
likes: 0
tags: ["task_categories:robotics", "license:apache-2.0", "region:us", "LeRobot", "so101", "tutorial"]
task_categories: ["robotics"]
createdAt: 2025-05-04T19:10:48Z
trending_score: null
card:

---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so101
- tutorial
configs:
- config_name: default
  data_files: data/*/*.parquet
---

This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).

## Dataset Description

- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0

## Dataset Structure

[meta/info.json](meta/info.json):

```json
{
  "codebase_version": "v2.1",
  "robot_type": "so101",
  "total_episodes": 1,
  "total_frames": 894,
  "total_tasks": 1,
  "total_videos": 2,
  "total_chunks": 1,
  "chunks_size": 1000,
  "fps": 30,
  "splits": { "train": "0:1" },
  "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
  "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
  "features": {
    "action": {
      "dtype": "float32",
      "shape": [6],
      "names": ["main_shoulder_pan", "main_shoulder_lift", "main_elbow_flex", "main_wrist_flex", "main_wrist_roll", "main_gripper"]
    },
    "observation.state": {
      "dtype": "float32",
      "shape": [6],
      "names": ["main_shoulder_pan", "main_shoulder_lift", "main_elbow_flex", "main_wrist_flex", "main_wrist_roll", "main_gripper"]
    },
    "observation.images.laptop": {
      "dtype": "video",
      "shape": [480, 640, 3],
      "names": ["height", "width", "channels"],
      "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false }
    },
    "observation.images.phone": {
      "dtype": "video",
      "shape": [480, 640, 3],
      "names": ["height", "width", "channels"],
      "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false }
    },
    "timestamp": { "dtype": "float32", "shape": [1], "names": null },
    "frame_index": { "dtype": "int64", "shape": [1], "names": null },
    "episode_index": { "dtype": "int64", "shape": [1], "names": null },
    "index": { "dtype": "int64", "shape": [1], "names": null },
    "task_index": { "dtype": "int64", "shape": [1], "names": null }
  }
}
```

## Citation

**BibTeX:**

```bibtex
[More Information Needed]
```
datasetId: jumava/adv-ele
author: jumava
last_modified: 2025-05-04T18:02:14Z
downloads: 0
likes: 0
tags: ["region:us"]
task_categories: []
createdAt: 2025-05-04T18:02:12Z
trending_score: null
card:

---
dataset_info:
  features:
  - name: ADV
    dtype: string
  - name: ELE
    dtype: string
  splits:
  - name: train
    num_bytes: 430918.56140350876
    num_examples: 1732
  - name: test
    num_bytes: 107978.43859649122
    num_examples: 434
  download_size: 296569
  dataset_size: 538897.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
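A sketch for reading the paired columns (presumably advanced/elementary text variants, though the card does not say):

```python
from datasets import load_dataset

ds = load_dataset("jumava/adv-ele")

pair = ds["train"][0]
print("ADV:", pair["ADV"][:100])
print("ELE:", pair["ELE"][:100])
print(len(ds["train"]), len(ds["test"]))  # 1732 / 434 per the split metadata
```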
datasetId: rricc22/so100_record50
author: rricc22
last_modified: 2025-05-04T17:55:16Z
downloads: 0
likes: 0
tags: ["task_categories:robotics", "license:apache-2.0", "region:us", "LeRobot", "so100", "tutorial"]
task_categories: ["robotics"]
createdAt: 2025-05-04T17:54:43Z
trending_score: null
card:

---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
  data_files: data/*/*.parquet
---

This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).

## Dataset Description

- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0

## Dataset Structure

[meta/info.json](meta/info.json):

```json
{
  "codebase_version": "v2.1",
  "robot_type": "so100",
  "total_episodes": 50,
  "total_frames": 50542,
  "total_tasks": 1,
  "total_videos": 50,
  "total_chunks": 1,
  "chunks_size": 1000,
  "fps": 30,
  "splits": { "train": "0:50" },
  "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
  "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
  "features": {
    "action": {
      "dtype": "float32",
      "shape": [6],
      "names": ["main_shoulder_pan", "main_shoulder_lift", "main_elbow_flex", "main_wrist_flex", "main_wrist_roll", "main_gripper"]
    },
    "observation.state": {
      "dtype": "float32",
      "shape": [6],
      "names": ["main_shoulder_pan", "main_shoulder_lift", "main_elbow_flex", "main_wrist_flex", "main_wrist_roll", "main_gripper"]
    },
    "observation.images.phone": {
      "dtype": "video",
      "shape": [480, 640, 3],
      "names": ["height", "width", "channels"],
      "info": { "video.fps": 30.0, "video.height": 480, "video.width": 640, "video.channels": 3, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "has_audio": false }
    },
    "timestamp": { "dtype": "float32", "shape": [1], "names": null },
    "frame_index": { "dtype": "int64", "shape": [1], "names": null },
    "episode_index": { "dtype": "int64", "shape": [1], "names": null },
    "index": { "dtype": "int64", "shape": [1], "names": null },
    "task_index": { "dtype": "int64", "shape": [1], "names": null }
  }
}
```

## Citation

**BibTeX:**

```bibtex
[More Information Needed]
```
datasetId: mteb/wit
author: mteb
last_modified: 2025-05-04T16:12:14Z
downloads: 31
likes: 0
tags: ["task_categories:visual-document-retrieval", "task_categories:image-to-text", "task_categories:text-to-image", "annotations_creators:derived", "multilinguality:multilingual", "language:ara", "language:bul", "language:dan", "language:ell", "language:eng", "language:est", "language:ind", "language:jpn", "language:kor", "language:tur", "language:vie", "license:cc-by-sa-4.0", "size_categories:1K<n<10K", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2502.13595", "arxiv:2210.07316", "region:us", "mteb", "text", "image"]
task_categories: ["visual-document-retrieval", "image-to-text", "text-to-image"]
createdAt: 2024-08-19T18:37:23Z
trending_score: null
card:

---
annotations_creators:
- derived
language:
- ara
- bul
- dan
- ell
- eng
- est
- ind
- jpn
- kor
- tur
- vie
license: cc-by-sa-4.0
multilinguality: multilingual
task_categories:
- visual-document-retrieval
- image-to-text
- text-to-image
task_ids: []
dataset_info:
  features:
  - name: image_id
    dtype: string
  - name: image
    dtype: image
  - name: captions
    sequence: string
  splits:
  - name: ar
    num_bytes: 29202097.0
    num_examples: 792
  - name: bg
    num_bytes: 31279024.0
    num_examples: 806
  - name: da
    num_bytes: 30944912.0
    num_examples: 814
  - name: el
    num_bytes: 20853656.0
    num_examples: 541
  - name: et
    num_bytes: 29836255.0
    num_examples: 780
  - name: id
    num_bytes: 31800790.0
    num_examples: 854
  - name: ja
    num_bytes: 31506160.0
    num_examples: 842
  - name: ko
    num_bytes: 32797718.0
    num_examples: 889
  - name: tr
    num_bytes: 25512406.0
    num_examples: 681
  - name: vi
    num_bytes: 31996924.0
    num_examples: 869
  - name: en
    num_bytes: 25444712.0
    num_examples: 685
  download_size: 320979492
  dataset_size: 321174654.0
configs:
- config_name: default
  data_files:
  - split: ar
    path: data/ar-*
  - split: bg
    path: data/bg-*
  - split: da
    path: data/da-*
  - split: el
    path: data/el-*
  - split: et
    path: data/et-*
  - split: id
    path: data/id-*
  - split: ja
    path: data/ja-*
  - split: ko
    path: data/ko-*
  - split: tr
    path: data/tr-*
  - split: vi
    path: data/vi-*
  - split: en
    path: data/en-*
tags:
- mteb
- text
- image
---

<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->

<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
  <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">WITT2IRetrieval</h1>
  <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
  <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>

Retrieve images based on multilingual descriptions.

| | |
|---------------|---------------------------------------------|
| Task category | t2i |
| Domains | Encyclopaedic, Written |
| Reference | https://proceedings.mlr.press/v162/bugliarello22a/bugliarello22a.pdf |

## How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

```python
import mteb

task = mteb.get_tasks(["WITT2IRetrieval"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```

<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->

To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).

## Citation

If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).

```bibtex
@inproceedings{bugliarello2022iglue,
  author = {Bugliarello, Emanuele and Liu, Fangyu and Pfeiffer, Jonas and Reddy, Siva and Elliott, Desmond and Ponti, Edoardo Maria and Vuli{\'c}, Ivan},
  booktitle = {International Conference on Machine Learning},
  organization = {PMLR},
  pages = {2370--2392},
  title = {IGLUE: A benchmark for transfer learning across modalities, tasks, and languages},
  year = {2022},
}

@article{enevoldsen2025mmtebmassivemultilingualtext,
  title = {MMTEB: Massive Multilingual Text Embedding Benchmark},
  author = {Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2502.13595},
  year = {2025},
  url = {https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}
```

# Dataset Statistics
<details>
  <summary> Dataset Statistics</summary>

The following code contains the descriptive statistics from the task. These can also be obtained using:

```python
import mteb

task = mteb.get_task("WITT2IRetrieval")
desc_stats = task.metadata.descriptive_stats
```

```json
{
  "test": {
    "number_of_characters": 506601,
    "num_samples": 18137,
    "num_queries": 9584,
    "num_documents": 8553,
    "min_document_length": 0,
    "average_document_length": 0,
    "max_document_length": 0,
    "unique_documents": 0,
    "num_document_images": 8553,
    "min_query_length": 9,
    "average_query_length": 52.85903589315526,
    "max_query_length": 779,
    "unique_queries": 9076,
    "num_query_images": 0,
    "min_relevant_docs_per_query": 1,
    "average_relevant_docs_per_query": 1.0,
    "max_relevant_docs_per_query": 1,
    "unique_relevant_docs": 8553
  }
}
```

</details>

---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*
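Beyond the `mteb` evaluation snippet in the card, the per-language splits can also be loaded directly with `datasets`; a minimal sketch:

```python
from datasets import load_dataset

# Each language is a separate split of the default config.
en = load_dataset("mteb/wit", split="en")
row = en[0]
print(row["image_id"], row["captions"][0])
```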
datasetId: mteb/Touche2020-PL
author: mteb
last_modified: 2025-05-04T16:11:31Z
downloads: 18
likes: 0
tags: ["task_categories:text-retrieval", "task_ids:multiple-choice-qa", "annotations_creators:derived", "multilinguality:translated", "source_datasets:mteb/touche2020", "language:pol", "license:unknown", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "arxiv:2305.19840", "arxiv:2502.13595", "arxiv:2210.07316", "region:us", "mteb", "text"]
task_categories: ["text-retrieval"]
createdAt: 2025-02-07T16:35:00Z
trending_score: null
card:

---
annotations_creators:
- derived
language:
- pol
license: unknown
multilinguality: translated
source_datasets:
- mteb/touche2020
task_categories:
- text-retrieval
task_ids:
- multiple-choice-qa
dataset_info:
- config_name: corpus
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  - name: title
    dtype: string
  splits:
  - name: test
    num_bytes: 744260994
    num_examples: 382545
  download_size: 417361577
  dataset_size: 744260994
- config_name: default
  features:
  - name: query-id
    dtype: string
  - name: corpus-id
    dtype: string
  - name: score
    dtype: int64
  splits:
  - name: test
    num_bytes: 125677
    num_examples: 2214
  download_size: 41295
  dataset_size: 125677
- config_name: queries
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: test
    num_bytes: 3260
    num_examples: 49
  download_size: 3805
  dataset_size: 3260
configs:
- config_name: corpus
  data_files:
  - split: test
    path: corpus/test-*
- config_name: default
  data_files:
  - split: test
    path: data/test-*
- config_name: queries
  data_files:
  - split: test
    path: queries/test-*
tags:
- mteb
- text
---

<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->

<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
  <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">Touche2020-PL</h1>
  <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
  <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>

Touché Task 1: Argument Retrieval for Controversial Questions

| | |
|---------------|---------------------------------------------|
| Task category | t2t |
| Domains | Academic |
| Reference | https://webis.de/events/touche-20/shared-task-1.html |

## How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

```python
import mteb

task = mteb.get_tasks(["Touche2020-PL"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```

<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->

To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).

## Citation

If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).

```bibtex
@misc{wojtasik2024beirpl,
  archiveprefix = {arXiv},
  author = {Konrad Wojtasik and Vadim Shishkin and Kacper Wołowiec and Arkadiusz Janz and Maciej Piasecki},
  eprint = {2305.19840},
  primaryclass = {cs.IR},
  title = {BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language},
  year = {2024},
}

@article{enevoldsen2025mmtebmassivemultilingualtext,
  title = {MMTEB: Massive Multilingual Text Embedding Benchmark},
  author = {Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2502.13595},
  year = {2025},
  url = {https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}
```

# Dataset Statistics
<details>
  <summary> Dataset Statistics</summary>

The following code contains the descriptive statistics from the task. These can also be obtained using:

```python
import mteb

task = mteb.get_task("Touche2020-PL")
desc_stats = task.metadata.descriptive_stats
```

```json
{
  "test": {
    "num_samples": 382594,
    "number_of_characters": 680238138,
    "num_documents": 382545,
    "min_document_length": 3,
    "average_document_length": 1778.1842371485707,
    "max_document_length": 106110,
    "unique_documents": 382545,
    "num_queries": 49,
    "min_query_length": 18,
    "average_query_length": 54.06122448979592,
    "max_query_length": 96,
    "unique_queries": 49,
    "none_queries": 0,
    "num_relevant_docs": 2214,
    "min_relevant_docs_per_query": 40,
    "average_relevant_docs_per_query": 19.020408163265305,
    "max_relevant_docs_per_query": 52,
    "unique_relevant_docs": 2099,
    "num_instructions": null,
    "min_instruction_length": null,
    "average_instruction_length": null,
    "max_instruction_length": null,
    "unique_instructions": null,
    "num_top_ranked": null,
    "min_top_ranked_per_query": null,
    "average_top_ranked_per_query": null,
    "max_top_ranked_per_query": null
  }
}
```

</details>

---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*
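The three configs above follow the usual BEIR layout (corpus, queries, qrels), so they can also be loaded separately with `datasets`; a sketch:

```python
from datasets import load_dataset

corpus = load_dataset("mteb/Touche2020-PL", "corpus", split="test")
queries = load_dataset("mteb/Touche2020-PL", "queries", split="test")
qrels = load_dataset("mteb/Touche2020-PL", "default", split="test")

print(len(corpus), len(queries), len(qrels))  # 382545 / 49 / 2214 per the card
```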
datasetId: mteb/NQ_test_top_250_only_w_correct-v2
author: mteb
last_modified: 2025-05-04T16:10:29Z
downloads: 174
likes: 0
tags: ["task_categories:text-retrieval", "multilinguality:monolingual", "source_datasets:mteb/nq", "language:eng", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2502.13595", "arxiv:2210.07316", "region:us", "mteb", "text"]
task_categories: ["text-retrieval"]
createdAt: 2024-09-28T05:31:19Z
trending_score: null
card:

---
language:
- eng
multilinguality: monolingual
source_datasets:
- mteb/nq
task_categories:
- text-retrieval
task_ids: []
dataset_info:
- config_name: corpus
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  - name: title
    dtype: string
  splits:
  - name: test
    num_bytes: 102405421.72767939
    num_examples: 198779
  download_size: 76290132
  dataset_size: 102405421.72767939
- config_name: default
  features:
  - name: query-id
    dtype: string
  - name: corpus-id
    dtype: string
  - name: score
    dtype: int64
  splits:
  - name: test
    num_bytes: 38495.78647940967
    num_examples: 1213
  download_size: 16497
  dataset_size: 38495.78647940967
- config_name: queries
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: test
    num_bytes: 63867.90266512167
    num_examples: 1000
  download_size: 40924
  dataset_size: 63867.90266512167
configs:
- config_name: corpus
  data_files:
  - split: test
    path: corpus/test-*
- config_name: default
  data_files:
  - split: test
    path: data/test-*
- config_name: queries
  data_files:
  - split: test
    path: queries/test-*
tags:
- mteb
- text
---

<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->

<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
  <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">NQHardNegatives</h1>
  <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
  <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>

Natural Questions: a benchmark for question answering research. The hard negative version has been created by pooling the 250 top documents per query from BM25, e5-multilingual-large and e5-mistral-instruct.

| | |
|---------------|---------------------------------------------|
| Task category | t2t |
| Domains | None |
| Reference | https://ai.google.com/research/NaturalQuestions/ |

## How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

```python
import mteb

task = mteb.get_tasks(["NQHardNegatives"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```

<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->

To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).

## Citation

If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).

```bibtex
@article{47761,
  author = {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov},
  journal = {Transactions of the Association of Computational Linguistics},
  title = {Natural Questions: a Benchmark for Question Answering Research},
  year = {2019},
}

@article{enevoldsen2025mmtebmassivemultilingualtext,
  title = {MMTEB: Massive Multilingual Text Embedding Benchmark},
  author = {Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2502.13595},
  year = {2025},
  url = {https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}
```

# Dataset Statistics
<details>
  <summary> Dataset Statistics</summary>

The following code contains the descriptive statistics from the task. These can also be obtained using:

```python
import mteb

task = mteb.get_task("NQHardNegatives")
desc_stats = task.metadata.descriptive_stats
```

```json
{
  "test": {
    "num_samples": 199779,
    "number_of_characters": 120068721,
    "num_documents": 198779,
    "min_document_length": 5,
    "average_document_length": 603.7903551179953,
    "max_document_length": 17008,
    "unique_documents": 198779,
    "num_queries": 1000,
    "min_query_length": 29,
    "average_query_length": 47.878,
    "max_query_length": 94,
    "unique_queries": 1000,
    "none_queries": 0,
    "num_relevant_docs": 1213,
    "min_relevant_docs_per_query": 1,
    "average_relevant_docs_per_query": 1.213,
    "max_relevant_docs_per_query": 4,
    "unique_relevant_docs": 1213,
    "num_instructions": null,
    "min_instruction_length": null,
    "average_instruction_length": null,
    "max_instruction_length": null,
    "unique_instructions": null,
    "num_top_ranked": null,
    "min_top_ranked_per_query": null,
    "average_top_ranked_per_query": null,
    "max_top_ranked_per_query": null
  }
}
```

</details>

---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*
datasetId: mteb/nfcorpus
author: mteb
last_modified: 2025-05-04T16:10:27Z
downloads: 3,686
likes: 2
tags: ["task_categories:text-retrieval", "multilinguality:monolingual", "language:eng", "size_categories:100K<n<1M", "format:json", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2502.13595", "arxiv:2210.07316", "region:us", "mteb", "text"]
task_categories: ["text-retrieval"]
createdAt: 2024-03-02T21:17:27Z
trending_score: null
card:

---
language:
- eng
multilinguality: monolingual
task_categories:
- text-retrieval
task_ids: []
config_names:
- corpus
tags:
- mteb
- text
dataset_info:
- config_name: default
  features:
  - name: query-id
    dtype: string
  - name: corpus-id
    dtype: string
  - name: score
    dtype: float64
  splits:
  - name: train
    num_bytes: 3720942
    num_examples: 110575
  - name: dev
    num_bytes: 383427
    num_examples: 11385
  - name: test
    num_bytes: 415220
    num_examples: 12334
- config_name: corpus
  features:
  - name: _id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: corpus
    num_bytes: 5856698
    num_examples: 3633
- config_name: queries
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: queries
    num_bytes: 128355
    num_examples: 3237
configs:
- config_name: default
  data_files:
  - split: train
    path: qrels/train.jsonl
  - split: dev
    path: qrels/dev.jsonl
  - split: test
    path: qrels/test.jsonl
- config_name: corpus
  data_files:
  - split: corpus
    path: corpus.jsonl
- config_name: queries
  data_files:
  - split: queries
    path: queries.jsonl
---

<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->

<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
  <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">NFCorpus</h1>
  <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
  <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>

NFCorpus: A Full-Text Learning to Rank Dataset for Medical Information Retrieval

| | |
|---------------|---------------------------------------------|
| Task category | t2t |
| Domains | Medical, Academic, Written |
| Reference | https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/ |

## How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

```python
import mteb

task = mteb.get_tasks(["NFCorpus"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```

<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->

To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).

## Citation

If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).

```bibtex
@inproceedings{boteva2016,
  author = {Boteva, Vera and Gholipour, Demian and Sokolov, Artem and Riezler, Stefan},
  city = {Padova},
  country = {Italy},
  journal = {Proceedings of the 38th European Conference on Information Retrieval},
  journal-abbrev = {ECIR},
  title = {A Full-Text Learning to Rank Dataset for Medical Information Retrieval},
  url = {http://www.cl.uni-heidelberg.de/~riezler/publications/papers/ECIR2016.pdf},
  year = {2016},
}

@article{enevoldsen2025mmtebmassivemultilingualtext,
  title = {MMTEB: Massive Multilingual Text Embedding Benchmark},
  author = {Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2502.13595},
  year = {2025},
  url = {https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}
```

# Dataset Statistics
<details>
  <summary> Dataset Statistics</summary>

The following code contains the descriptive statistics from the task. These can also be obtained using:

```python
import mteb

task = mteb.get_task("NFCorpus")
desc_stats = task.metadata.descriptive_stats
```

```json
{
  "test": {
    "num_samples": 3956,
    "number_of_characters": 5786348,
    "num_documents": 3633,
    "min_document_length": 123,
    "average_document_length": 1590.783925130746,
    "max_document_length": 10090,
    "unique_documents": 3633,
    "num_queries": 323,
    "min_query_length": 3,
    "average_query_length": 21.764705882352942,
    "max_query_length": 72,
    "unique_queries": 323,
    "none_queries": 0,
    "num_relevant_docs": 12334,
    "min_relevant_docs_per_query": 1,
    "average_relevant_docs_per_query": 38.18575851393189,
    "max_relevant_docs_per_query": 475,
    "unique_relevant_docs": 3128,
    "num_instructions": null,
    "min_instruction_length": null,
    "average_instruction_length": null,
    "max_instruction_length": null,
    "unique_instructions": null,
    "num_top_ranked": null,
    "min_top_ranked_per_query": null,
    "average_top_ranked_per_query": null,
    "max_top_ranked_per_query": null
  }
}
```

</details>

---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*
mteb/IN22-Conv
mteb
2025-05-04T16:08:53Z
14
0
[ "task_categories:translation", "annotations_creators:expert-annotated", "language_creators:expert-generated", "multilinguality:multilingual", "language:asm", "language:ben", "language:brx", "language:doi", "language:eng", "language:gom", "language:guj", "language:hin", "language:kan", "language:kas", "language:mai", "language:mal", "language:mar", "language:mni", "language:npi", "language:ory", "language:pan", "language:san", "language:sat", "language:snd", "language:tam", "language:tel", "language:urd", "license:cc-by-4.0", "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2502.13595", "arxiv:2210.07316", "region:us", "mteb", "text" ]
[ "translation" ]
2024-05-14T21:57:25Z
null
--- annotations_creators: - expert-annotated language_creators: - expert-generated language: - asm - ben - brx - doi - eng - gom - guj - hin - kan - kas - mai - mal - mar - mni - npi - ory - pan - san - sat - snd - tam - tel - urd license: cc-by-4.0 multilinguality: multilingual size_categories: - 1K<n<10K task_categories: - translation task_ids: [] pretty_name: in22-conv language_details: asm_Beng, ben_Beng, brx_Deva, doi_Deva, eng_Latn, gom_Deva, guj_Gujr, hin_Deva, kan_Knda, kas_Arab, mai_Deva, mal_Mlym, mar_Deva, mni_Mtei, npi_Deva, ory_Orya, pan_Guru, san_Deva, sat_Olck, snd_Deva, tam_Taml, tel_Telu, urd_Arab configs: - config_name: default data_files: - split: test path: test.parquet tags: - mteb - text --- <!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md --> <div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;"> <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">IN22ConvBitextMining</h1> <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div> <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div> </div> IN22-Conv is an n-way parallel conversation-domain benchmark dataset for machine translation spanning English and 22 Indic languages. | | | |---------------|---------------------------------------------| | Task category | t2t | | Domains | Social, Spoken, Fiction | | Reference | https://huggingface.co/datasets/ai4bharat/IN22-Conv | ## How to evaluate on this task You can evaluate an embedding model on this dataset using the following code: ```python import mteb task = mteb.get_tasks(["IN22ConvBitextMining"]) evaluator = mteb.MTEB(task) model = mteb.get_model(YOUR_MODEL) evaluator.run(model) ``` <!-- Datasets want link to arxiv in readme to autolink dataset with paper --> To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb). ## Citation If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb). 
```bibtex @article{gala2023indictrans, author = {Jay Gala and Pranjal A Chitale and A K Raghavan and Varun Gumma and Sumanth Doddapaneni and Aswanth Kumar M and Janki Atul Nawale and Anupama Sujatha and Ratish Puduppully and Vivek Raghavan and Pratyush Kumar and Mitesh M Khapra and Raj Dabre and Anoop Kunchukuttan}, issn = {2835-8856}, journal = {Transactions on Machine Learning Research}, note = {}, title = {IndicTrans2: Towards High-Quality and Accessible Machine Translation Models for all 22 Scheduled Indian Languages}, url = {https://openreview.net/forum?id=vfT4YuzAYA}, year = {2023}, } @article{enevoldsen2025mmtebmassivemultilingualtext, title={MMTEB: Massive Multilingual Text Embedding Benchmark}, author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff}, publisher = {arXiv}, journal={arXiv preprint arXiv:2502.13595}, year={2025}, url={https://arxiv.org/abs/2502.13595}, doi = {10.48550/arXiv.2502.13595}, } @article{muennighoff2022mteb, author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils}, title = {MTEB: Massive Text Embedding Benchmark}, publisher = {arXiv}, journal={arXiv preprint arXiv:2210.07316}, year = {2022}, url = {https://arxiv.org/abs/2210.07316}, doi = {10.48550/ARXIV.2210.07316}, } ``` # Dataset Statistics <details> <summary> Dataset Statistics</summary> The following code contains the descriptive statistics from the task. 
These can also be obtained using: ```python import mteb task = mteb.get_task("IN22ConvBitextMining") desc_stats = task.metadata.descriptive_stats ``` ```json { "test": { "num_samples": 760518, "number_of_characters": 82637104, "unique_pairs": 759283, "min_sentence1_length": 3, "average_sentence1_length": 54.32948595562498, "max_sentence1_length": 239, "unique_sentence1": 34430, "min_sentence2_length": 3, "average_sentence2_length": 54.32948595562498, "max_sentence2_length": 239, "unique_sentence2": 34430 } } ``` </details> --- *This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*
mteb/mtop_domain
mteb
2025-05-04T16:08:07Z
3,789
2
[ "task_categories:text-classification", "annotations_creators:human-annotated", "multilinguality:multilingual", "language:deu", "language:eng", "language:fra", "language:hin", "language:spa", "language:tha", "license:unknown", "modality:text", "arxiv:2008.09335", "arxiv:2502.13595", "arxiv:2210.07316", "region:us", "mteb", "text" ]
[ "text-classification" ]
2022-05-19T15:04:17Z
null
--- annotations_creators: - human-annotated language: - deu - eng - fra - hin - spa - tha license: unknown multilinguality: multilingual task_categories: - text-classification task_ids: [] dataset_info: - config_name: de features: - name: text dtype: string - name: label dtype: int64 splits: - name: train num_bytes: 748424 num_examples: 13424 - name: validation num_bytes: 100446 num_examples: 1815 - name: test num_bytes: 195937 num_examples: 3549 download_size: 530941 dataset_size: 1044807 - config_name: en features: - name: text dtype: string - name: label dtype: int64 splits: - name: train num_bytes: 761023 num_examples: 15667 - name: validation num_bytes: 108483 num_examples: 2235 - name: test num_bytes: 214022 num_examples: 4386 download_size: 614336 dataset_size: 1083528 - config_name: es features: - name: text dtype: string - name: label dtype: int64 splits: - name: train num_bytes: 621403 num_examples: 10934 - name: validation num_bytes: 87850 num_examples: 1527 - name: test num_bytes: 170223 num_examples: 2998 download_size: 395145 dataset_size: 879476 - config_name: fr features: - name: text dtype: string - name: label dtype: int64 splits: - name: train num_bytes: 671550 num_examples: 11814 - name: validation num_bytes: 88815 num_examples: 1577 - name: test num_bytes: 182408 num_examples: 3193 download_size: 474182 dataset_size: 942773 - config_name: hi features: - name: text dtype: string - name: label dtype: int64 splits: - name: train num_bytes: 1238687 num_examples: 11330 - name: validation num_bytes: 228095 num_examples: 2012 - name: test num_bytes: 303899 num_examples: 2789 download_size: 631631 dataset_size: 1770681 - config_name: th features: - name: text dtype: string - name: label dtype: int64 splits: - name: train num_bytes: 1175051 num_examples: 10759 - name: validation num_bytes: 185878 num_examples: 1671 - name: test num_bytes: 301794 num_examples: 2765 download_size: 611035 dataset_size: 1662723 configs: - config_name: de data_files: - split: train path: de/train-* - split: validation path: de/validation-* - split: test path: de/test-* - config_name: en data_files: - split: train path: en/train-* - split: validation path: en/validation-* - split: test path: en/test-* - config_name: es data_files: - split: train path: es/train-* - split: validation path: es/validation-* - split: test path: es/test-* - config_name: fr data_files: - split: train path: fr/train-* - split: validation path: fr/validation-* - split: test path: fr/test-* - config_name: hi data_files: - split: train path: hi/train-* - split: validation path: hi/validation-* - split: test path: hi/test-* - config_name: th data_files: - split: train path: th/train-* - split: validation path: th/validation-* - split: test path: th/test-* tags: - mteb - text --- <!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md --> <div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;"> <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">MTOPDomainClassification</h1> <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> 
dataset</div> <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div> </div> MTOP: Multilingual Task-Oriented Semantic Parsing | | | |---------------|---------------------------------------------| | Task category | t2c | | Domains | Spoken | | Reference | https://arxiv.org/pdf/2008.09335.pdf | ## How to evaluate on this task You can evaluate an embedding model on this dataset using the following code: ```python import mteb task = mteb.get_tasks(["MTOPDomainClassification"]) evaluator = mteb.MTEB(task) model = mteb.get_model(YOUR_MODEL) evaluator.run(model) ``` <!-- Datasets want link to arxiv in readme to autolink dataset with paper --> To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb). ## Citation If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb). ```bibtex @inproceedings{li-etal-2021-mtop, abstract = {Scaling semantic parsing models for task-oriented dialog systems to new languages is often expensive and time-consuming due to the lack of available datasets. Available datasets suffer from several shortcomings: a) they contain few languages b) they contain small amounts of labeled examples per language c) they are based on the simple intent and slot detection paradigm for non-compositional queries. In this paper, we present a new multilingual dataset, called MTOP, comprising of 100k annotated utterances in 6 languages across 11 domains. We use this dataset and other publicly available datasets to conduct a comprehensive benchmarking study on using various state-of-the-art multilingual pre-trained models for task-oriented semantic parsing. We achieve an average improvement of +6.3 points on Slot F1 for the two existing multilingual datasets, over best results reported in their experiments. 
Furthermore, we demonstrate strong zero-shot performance using pre-trained models combined with automatic translation and alignment, and a proposed distant supervision method to reduce the noise in slot label projection.}, address = {Online}, author = {Li, Haoran and Arora, Abhinav and Chen, Shuohui and Gupta, Anchit and Gupta, Sonal and Mehdad, Yashar}, booktitle = {Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume}, doi = {10.18653/v1/2021.eacl-main.257}, editor = {Merlo, Paola and Tiedemann, Jorg and Tsarfaty, Reut}, month = apr, pages = {2950--2962}, publisher = {Association for Computational Linguistics}, title = {{MTOP}: A Comprehensive Multilingual Task-Oriented Semantic Parsing Benchmark}, url = {https://aclanthology.org/2021.eacl-main.257}, year = {2021}, } @article{enevoldsen2025mmtebmassivemultilingualtext, title={MMTEB: Massive Multilingual Text Embedding Benchmark}, author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff}, publisher = {arXiv}, journal={arXiv preprint arXiv:2502.13595}, year={2025}, url={https://arxiv.org/abs/2502.13595}, doi = {10.48550/arXiv.2502.13595}, } @article{muennighoff2022mteb, author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils}, title = {MTEB: Massive Text Embedding Benchmark}, publisher = {arXiv}, journal={arXiv preprint arXiv:2210.07316}, year = {2022}, url = {https://arxiv.org/abs/2210.07316}, doi = {10.48550/ARXIV.2210.07316}, } ``` # Dataset Statistics <details> <summary> Dataset Statistics</summary> The following code contains the descriptive statistics from the task. 
These can also be obtained using: ```python import mteb task = mteb.get_task("MTOPDomainClassification") desc_stats = task.metadata.descriptive_stats ``` ```json { "validation": { "num_samples": 10837, "number_of_characters": 431895, "number_texts_intersect_with_train": 127, "min_text_length": 5, "average_text_length": 39.85374181046415, "max_text_length": 154, "unique_text": 10830, "unique_labels": 11, "labels": { "1": { "count": 1688 }, "10": { "count": 754 }, "7": { "count": 849 }, "3": { "count": 681 }, "6": { "count": 985 }, "2": { "count": 647 }, "9": { "count": 872 }, "0": { "count": 833 }, "5": { "count": 1182 }, "4": { "count": 982 }, "8": { "count": 1364 } }, "hf_subset_descriptive_stats": { "en": { "num_samples": 2235, "number_of_characters": 81663, "number_texts_intersect_with_train": 7, "min_text_length": 8, "average_text_length": 36.53825503355705, "max_text_length": 125, "unique_text": 2235, "unique_labels": 11, "labels": { "1": { "count": 329 }, "10": { "count": 185 }, "7": { "count": 183 }, "3": { "count": 134 }, "6": { "count": 186 }, "2": { "count": 123 }, "9": { "count": 196 }, "0": { "count": 176 }, "5": { "count": 228 }, "4": { "count": 207 }, "8": { "count": 288 } } }, "de": { "num_samples": 1815, "number_of_characters": 77727, "number_texts_intersect_with_train": 23, "min_text_length": 10, "average_text_length": 42.824793388429754, "max_text_length": 154, "unique_text": 1814, "unique_labels": 11, "labels": { "0": { "count": 99 }, "1": { "count": 303 }, "2": { "count": 104 }, "3": { "count": 122 }, "6": { "count": 165 }, "4": { "count": 157 }, "7": { "count": 141 }, "5": { "count": 203 }, "8": { "count": 220 }, "10": { "count": 133 }, "9": { "count": 168 } } }, "es": { "num_samples": 1527, "number_of_characters": 67720, "number_texts_intersect_with_train": 41, "min_text_length": 11, "average_text_length": 44.34839554682384, "max_text_length": 134, "unique_text": 1525, "unique_labels": 11, "labels": { "1": { "count": 197 }, "6": { "count": 166 }, "4": { "count": 138 }, "10": { "count": 103 }, "3": { "count": 104 }, "5": { "count": 190 }, "2": { "count": 115 }, "8": { "count": 212 }, "7": { "count": 82 }, "9": { "count": 76 }, "0": { "count": 144 } } }, "fr": { "num_samples": 1577, "number_of_characters": 68008, "number_texts_intersect_with_train": 12, "min_text_length": 11, "average_text_length": 43.12492073557387, "max_text_length": 141, "unique_text": 1575, "unique_labels": 11, "labels": { "0": { "count": 125 }, "1": { "count": 278 }, "2": { "count": 92 }, "3": { "count": 89 }, "4": { "count": 137 }, "7": { "count": 145 }, "6": { "count": 138 }, "5": { "count": 168 }, "8": { "count": 203 }, "9": { "count": 124 }, "10": { "count": 78 } } }, "hi": { "num_samples": 2012, "number_of_characters": 78749, "number_texts_intersect_with_train": 16, "min_text_length": 7, "average_text_length": 39.139662027833005, "max_text_length": 131, "unique_text": 2011, "unique_labels": 11, "labels": { "0": { "count": 161 }, "1": { "count": 304 }, "3": { "count": 126 }, "4": { "count": 193 }, "2": { "count": 109 }, "10": { "count": 154 }, "5": { "count": 208 }, "6": { "count": 167 }, "7": { "count": 172 }, "8": { "count": 235 }, "9": { "count": 183 } } }, "th": { "num_samples": 1671, "number_of_characters": 58028, "number_texts_intersect_with_train": 28, "min_text_length": 5, "average_text_length": 34.726511071214844, "max_text_length": 105, "unique_text": 1670, "unique_labels": 11, "labels": { "0": { "count": 128 }, "1": { "count": 277 }, "2": { "count": 104 }, "3": { "count": 106 }, "4": 
{ "count": 150 }, "5": { "count": 185 }, "6": { "count": 163 }, "7": { "count": 126 }, "8": { "count": 206 }, "9": { "count": 125 }, "10": { "count": 101 } } } } }, "test": { "num_samples": 19680, "number_of_characters": 781580, "number_texts_intersect_with_train": 332, "min_text_length": 3, "average_text_length": 39.71443089430894, "max_text_length": 168, "unique_text": 19627, "unique_labels": 11, "labels": { "2": { "count": 977 }, "5": { "count": 2372 }, "6": { "count": 2014 }, "8": { "count": 2572 }, "9": { "count": 1317 }, "1": { "count": 3065 }, "10": { "count": 1330 }, "3": { "count": 1351 }, "0": { "count": 1459 }, "7": { "count": 1535 }, "4": { "count": 1688 } }, "hf_subset_descriptive_stats": { "en": { "num_samples": 4386, "number_of_characters": 161376, "number_texts_intersect_with_train": 15, "min_text_length": 3, "average_text_length": 36.79343365253078, "max_text_length": 132, "unique_text": 4384, "unique_labels": 11, "labels": { "2": { "count": 197 }, "5": { "count": 487 }, "6": { "count": 418 }, "8": { "count": 613 }, "9": { "count": 346 }, "1": { "count": 613 }, "10": { "count": 358 }, "3": { "count": 290 }, "0": { "count": 341 }, "7": { "count": 354 }, "4": { "count": 369 } } }, "de": { "num_samples": 3549, "number_of_characters": 151445, "number_texts_intersect_with_train": 69, "min_text_length": 7, "average_text_length": 42.67258382642998, "max_text_length": 162, "unique_text": 3536, "unique_labels": 11, "labels": { "0": { "count": 193 }, "10": { "count": 264 }, "1": { "count": 553 }, "2": { "count": 163 }, "3": { "count": 256 }, "5": { "count": 439 }, "4": { "count": 306 }, "6": { "count": 353 }, "7": { "count": 279 }, "8": { "count": 452 }, "9": { "count": 291 } } }, "es": { "num_samples": 2998, "number_of_characters": 130569, "number_texts_intersect_with_train": 97, "min_text_length": 6, "average_text_length": 43.552034689793196, "max_text_length": 168, "unique_text": 2983, "unique_labels": 11, "labels": { "1": { "count": 401 }, "6": { "count": 352 }, "4": { "count": 246 }, "10": { "count": 206 }, "3": { "count": 231 }, "5": { "count": 404 }, "2": { "count": 177 }, "8": { "count": 435 }, "7": { "count": 156 }, "9": { "count": 126 }, "0": { "count": 264 } } }, "fr": { "num_samples": 3193, "number_of_characters": 140029, "number_texts_intersect_with_train": 45, "min_text_length": 6, "average_text_length": 43.854995302223614, "max_text_length": 143, "unique_text": 3187, "unique_labels": 11, "labels": { "0": { "count": 253 }, "1": { "count": 551 }, "2": { "count": 159 }, "3": { "count": 190 }, "4": { "count": 280 }, "6": { "count": 330 }, "5": { "count": 356 }, "7": { "count": 272 }, "8": { "count": 462 }, "10": { "count": 159 }, "9": { "count": 181 } } }, "hi": { "num_samples": 2789, "number_of_characters": 104295, "number_texts_intersect_with_train": 32, "min_text_length": 7, "average_text_length": 37.395123700250984, "max_text_length": 148, "unique_text": 2785, "unique_labels": 11, "labels": { "0": { "count": 208 }, "1": { "count": 470 }, "5": { "count": 335 }, "3": { "count": 195 }, "4": { "count": 242 }, "2": { "count": 132 }, "6": { "count": 267 }, "7": { "count": 262 }, "8": { "count": 265 }, "10": { "count": 186 }, "9": { "count": 227 } } }, "th": { "num_samples": 2765, "number_of_characters": 93866, "number_texts_intersect_with_train": 74, "min_text_length": 6, "average_text_length": 33.94792043399638, "max_text_length": 117, "unique_text": 2754, "unique_labels": 11, "labels": { "0": { "count": 200 }, "1": { "count": 477 }, "2": { "count": 149 }, "3": { "count": 
189 }, "4": { "count": 245 }, "6": { "count": 294 }, "5": { "count": 351 }, "7": { "count": 212 }, "8": { "count": 345 }, "9": { "count": 146 }, "10": { "count": 157 } } } } }, "train": { "num_samples": 73928, "number_of_characters": 2937230, "number_texts_intersect_with_train": null, "min_text_length": 3, "average_text_length": 39.73095444215994, "max_text_length": 216, "unique_text": 73219, "unique_labels": 11, "labels": { "0": { "count": 5262 }, "5": { "count": 8334 }, "6": { "count": 6961 }, "9": { "count": 5313 }, "1": { "count": 11107 }, "8": { "count": 9698 }, "10": { "count": 5084 }, "2": { "count": 4770 }, "4": { "count": 6644 }, "3": { "count": 5191 }, "7": { "count": 5564 } }, "hf_subset_descriptive_stats": { "en": { "num_samples": 15667, "number_of_characters": 572977, "number_texts_intersect_with_train": null, "min_text_length": 4, "average_text_length": 36.57222186761984, "max_text_length": 148, "unique_text": 15634, "unique_labels": 11, "labels": { "0": { "count": 1165 }, "5": { "count": 1657 }, "6": { "count": 1402 }, "9": { "count": 1303 }, "1": { "count": 2187 }, "8": { "count": 2157 }, "10": { "count": 1219 }, "2": { "count": 929 }, "4": { "count": 1353 }, "3": { "count": 1064 }, "7": { "count": 1231 } } }, "de": { "num_samples": 13424, "number_of_characters": 580266, "number_texts_intersect_with_train": null, "min_text_length": 5, "average_text_length": 43.226013110846246, "max_text_length": 174, "unique_text": 13264, "unique_labels": 11, "labels": { "0": { "count": 761 }, "10": { "count": 996 }, "4": { "count": 1185 }, "1": { "count": 2016 }, "7": { "count": 1029 }, "5": { "count": 1484 }, "2": { "count": 814 }, "3": { "count": 980 }, "6": { "count": 1265 }, "8": { "count": 1767 }, "9": { "count": 1127 } } }, "es": { "num_samples": 10934, "number_of_characters": 476798, "number_texts_intersect_with_train": null, "min_text_length": 6, "average_text_length": 43.60691421254801, "max_text_length": 186, "unique_text": 10740, "unique_labels": 11, "labels": { "1": { "count": 1459 }, "6": { "count": 1188 }, "4": { "count": 928 }, "10": { "count": 743 }, "3": { "count": 830 }, "5": { "count": 1396 }, "2": { "count": 823 }, "8": { "count": 1555 }, "7": { "count": 525 }, "9": { "count": 560 }, "0": { "count": 927 } } }, "fr": { "num_samples": 11814, "number_of_characters": 515029, "number_texts_intersect_with_train": null, "min_text_length": 5, "average_text_length": 43.594802776367025, "max_text_length": 184, "unique_text": 11727, "unique_labels": 11, "labels": { "0": { "count": 861 }, "10": { "count": 668 }, "1": { "count": 1968 }, "7": { "count": 975 }, "5": { "count": 1261 }, "2": { "count": 799 }, "3": { "count": 734 }, "4": { "count": 1082 }, "6": { "count": 1113 }, "8": { "count": 1656 }, "9": { "count": 697 } } }, "hi": { "num_samples": 11330, "number_of_characters": 425919, "number_texts_intersect_with_train": null, "min_text_length": 4, "average_text_length": 37.592144748455425, "max_text_length": 216, "unique_text": 11251, "unique_labels": 11, "labels": { "0": { "count": 794 }, "1": { "count": 1741 }, "7": { "count": 974 }, "2": { "count": 670 }, "3": { "count": 831 }, "5": { "count": 1272 }, "6": { "count": 940 }, "4": { "count": 1073 }, "10": { "count": 786 }, "8": { "count": 1281 }, "9": { "count": 968 } } }, "th": { "num_samples": 10759, "number_of_characters": 366241, "number_texts_intersect_with_train": null, "min_text_length": 3, "average_text_length": 34.04043126684636, "max_text_length": 135, "unique_text": 10622, "unique_labels": 11, "labels": { "0": { 
"count": 754 }, "10": { "count": 672 }, "1": { "count": 1736 }, "7": { "count": 830 }, "2": { "count": 735 }, "3": { "count": 752 }, "5": { "count": 1264 }, "6": { "count": 1053 }, "4": { "count": 1023 }, "8": { "count": 1282 }, "9": { "count": 658 } } } } } } ``` </details> --- *This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*
nhagar/moscar_urls
nhagar
2025-05-04T16:01:05Z
5
0
[ "task_categories:text-generation", "license:cc-by-4.0", "size_categories:100M<n<1B", "region:us" ]
[ "text-generation" ]
2025-04-26T18:14:46Z
null
--- license: cc-by-4.0 task_categories: - text-generation size_categories: - 100M<n<1B --- # Dataset Card for moscar_urls This dataset provides the URLs and top-level domains associated with training records in [oscar-corpus/mOSCAR](https://huggingface.co/datasets/oscar-corpus/mOSCAR). It is part of a [collection of datasets](https://huggingface.co/collections/nhagar/llm-urls-neurips-681698adac0862be6c65c72b) curated to make exploring LLM training datasets more straightforward and accessible. ## Dataset Details ### Dataset Description This dataset was created by downloading the source data, extracting URLs and top-level domains, and retaining only those record identifiers. In doing so, it allows researchers and practitioners to explore the contents of these training datasets without having to manage terabytes of raw text. You can explore the pipeline used to construct this dataset on [GitHub](https://github.com/NHagar/cc-genealogy). - **Curated by:** [Nick Hagar](https://huggingface.co/nhagar) and [Jack Bandy](https://huggingface.co/jackbandy) - **License:** Same as source dataset ### Dataset Sources - **Repository:** [oscar-corpus/mOSCAR](https://huggingface.co/datasets/oscar-corpus/mOSCAR) ## Uses This dataset is intended to allow researchers and practitioners to analyze the contents of large LLM training datasets without having to wade through terabytes of unwieldy text data. ### Direct Use The main use case for these data is to explore the contents of LLM training datasets at scale. This might involve: - Identifying the most-used websites - Categorizing URLs to understand domain- or topic-level dataset composition - Comparing URLs across datasets - Digging into inclusion/exclusion patterns for a particular website ### Out-of-Scope Use This dataset is not intended to replicate or replace the source data, nor is it intended to enable large-scale scraping of the URLs listed. For source text, refer to the original dataset. ## Dataset Structure This dataset contains every record with a URL from the source dataset. It contains two columns: - `url`: The raw URL associated with each record - `domain`: The top-level domain for each URL, extracted with `tldextract` ## Citation **BibTeX:** [More Information Needed] **APA:** [More Information Needed]
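As a sketch of the kind of analysis described under Direct Use, the snippet below streams the dataset and tallies the most frequent domains in a sample. The column names (`url`, `domain`) come from the Dataset Structure section; the split name `train` and the sample size are assumptions.

```python
from collections import Counter
from itertools import islice

from datasets import load_dataset

# Stream to avoid downloading hundreds of millions of rows at once.
ds = load_dataset("nhagar/moscar_urls", split="train", streaming=True)

domain_counts = Counter()
for row in islice(ds, 100_000):  # sample the first 100k records
    domain_counts[row["domain"]] += 1

# Most-used websites in the sample
for domain, count in domain_counts.most_common(10):
    print(domain, count)
```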
GitBag/a_star_final_a_star_dapo_7_actor_aime-25_eval
GitBag
2025-05-04T15:51:12Z
0
0
[ "region:us" ]
[]
2025-05-04T15:51:11Z
null
--- dataset_info: features: - name: problem dtype: string - name: answer dtype: string - name: response_0 dtype: string - name: response_1 dtype: string - name: response_2 dtype: string - name: response_3 dtype: string - name: response_4 dtype: string - name: response_5 dtype: string - name: response_6 dtype: string - name: response_7 dtype: string - name: response_8 dtype: string - name: response_9 dtype: string - name: response_10 dtype: string - name: response_11 dtype: string - name: response_12 dtype: string - name: response_13 dtype: string - name: response_14 dtype: string - name: response_15 dtype: string - name: response_16 dtype: string - name: response_17 dtype: string - name: response_18 dtype: string - name: response_19 dtype: string - name: response_20 dtype: string - name: response_21 dtype: string - name: response_22 dtype: string - name: response_23 dtype: string - name: response_24 dtype: string - name: response_25 dtype: string - name: response_26 dtype: string - name: response_27 dtype: string - name: response_28 dtype: string - name: response_29 dtype: string - name: response_30 dtype: string - name: response_31 dtype: string - name: eval_0 dtype: bool - name: eval_1 dtype: bool - name: eval_2 dtype: bool - name: eval_3 dtype: bool - name: eval_4 dtype: bool - name: eval_5 dtype: bool - name: eval_6 dtype: bool - name: eval_7 dtype: bool - name: eval_8 dtype: bool - name: eval_9 dtype: bool - name: eval_10 dtype: bool - name: eval_11 dtype: bool - name: eval_12 dtype: bool - name: eval_13 dtype: bool - name: eval_14 dtype: bool - name: eval_15 dtype: bool - name: eval_16 dtype: bool - name: eval_17 dtype: bool - name: eval_18 dtype: bool - name: eval_19 dtype: bool - name: eval_20 dtype: bool - name: eval_21 dtype: bool - name: eval_22 dtype: bool - name: eval_23 dtype: bool - name: eval_24 dtype: bool - name: eval_25 dtype: bool - name: eval_26 dtype: bool - name: eval_27 dtype: bool - name: eval_28 dtype: bool - name: eval_29 dtype: bool - name: eval_30 dtype: bool - name: eval_31 dtype: bool splits: - name: train num_bytes: 1793257 num_examples: 30 download_size: 1010728 dataset_size: 1793257 configs: - config_name: default data_files: - split: train path: data/train-* ---
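The card above is auto-generated schema only. As a minimal sketch, the 32 boolean `eval_i` columns can be aggregated into a per-problem pass rate; the column names and the `train` split come from the schema and config above, while the slicing of the problem text is purely illustrative.

```python
from datasets import load_dataset

ds = load_dataset("GitBag/a_star_final_a_star_dapo_7_actor_aime-25_eval", split="train")

n_responses = 32  # response_0 ... response_31, per the feature list above

for row in ds:
    # Each eval_i flags whether response_i was judged correct.
    passes = sum(row[f"eval_{i}"] for i in range(n_responses))
    print(f"{row['problem'][:60]!r}... pass rate: {passes}/{n_responses}")
```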
Oriolshhh/parlabe-errors-ortografia-45k
Oriolshhh
2025-05-04T15:45:45Z
0
0
[ "language:ca", "license:apache-2.0", "size_categories:10K<n<100K", "region:us", "català", "grammar-correction", "ortografia", "text-to-text", "synthetic" ]
[]
2025-05-04T15:43:55Z
null
--- language: ca license: apache-2.0 tags: - català - grammar-correction - ortografia - text-to-text - synthetic size_categories: - 10K<n<100K --- # Dataset of Catalan spelling errors (45,000 pairs) This dataset contains **45,000 sentence pairs** in the format: ```text_erroni,text_correcte``` It is designed to train models for **general spelling correction** in Catalan, covering a wide variety of errors common in handwriting, fast typing, ASR, and OCR. --- ## What does it include? The pairs cover errors such as: - Swapped or repeated letters: *Axo és una prova* → *Això és una prova* - Omitted or added characters: *probablament* → *probablement* - Substitutions of similar-sounding letters: *conexió* → *connexió* - Segmentation errors: *dela feina* → *de la feina* - Typical typing confusions: *hiverncle* → *hivernacle* --- ## Generation process The sentences were generated using: - the **GPT API**, which produced realistic spelling errors - **automation with Python scripts** - **filtering and validation** to ensure linguistic quality and diversity --- ## Format - Language: Catalan (`ca`) - Format: `.csv` with two columns: - `text_erroni` - `text_correcte` - Number of pairs: 45,000
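A minimal loading sketch for the pairs; the file name `data.csv` is a placeholder for whichever CSV file the repository actually ships, and the column names come from the Format section above.

```python
from datasets import load_dataset

# "data.csv" is a placeholder file name; adjust to the actual file in the repo.
ds = load_dataset("csv", data_files="data.csv", split="train")

for pair in ds.select(range(3)):
    print(pair["text_erroni"], "->", pair["text_correcte"])
```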
HungVu2003/opt-350m_beta_0.0_alpha_0.2_num-company_2_dataset_0_for_gen_10_v2
HungVu2003
2025-05-04T15:43:36Z
0
0
[ "region:us" ]
[]
2025-05-04T15:43:34Z
null
--- dataset_info: features: - name: question dtype: string splits: - name: train num_bytes: 2722918 num_examples: 13750 download_size: 960160 dataset_size: 2722918 configs: - config_name: default data_files: - split: train path: data/train-* ---
HungVu2003/opt-350m_beta_1.0_alpha_0.2_num-company_2_dataset_0_for_gen_15_v2
HungVu2003
2025-05-04T14:40:44Z
0
0
[ "region:us" ]
[]
2025-05-04T14:40:43Z
null
--- dataset_info: features: - name: question dtype: string splits: - name: train num_bytes: 2001777 num_examples: 13750 download_size: 1099680 dataset_size: 2001777 configs: - config_name: default data_files: - split: train path: data/train-* ---
UNISG-MCS/NLP
UNISG-MCS
2025-05-04T14:05:52Z
0
0
[ "region:us" ]
[]
2025-05-04T13:58:02Z
null
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* dataset_info: features: - name: input_ids sequence: int32 - name: attention_mask sequence: int8 - name: labels sequence: int64 splits: - name: train num_bytes: 8630400 num_examples: 2400 - name: validation num_bytes: 2168388 num_examples: 603 download_size: 0 dataset_size: 10798788 --- # Dataset Card for "NLP" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
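Since the card only documents pre-tokenized features (`input_ids`, `attention_mask`, `labels`), a quick way to inspect the data is to decode it back to text. A minimal sketch: the card does not state which tokenizer produced the ids, so the tokenizer name below is purely a placeholder assumption.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("UNISG-MCS/NLP", split="validation")

# The card does not say which tokenizer produced `input_ids`;
# "bert-base-uncased" is only a placeholder assumption.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

example = ds[0]
print(tokenizer.decode(example["input_ids"], skip_special_tokens=True))
print("label ids:", example["labels"][:10])
```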
hf-doc-build/doc-build
hf-doc-build
2025-05-04T13:19:01Z
319,590
9
[ "license:mit", "region:us" ]
[]
2022-10-24T15:39:05Z
null
--- license: mit pretty_name: Generated Docs for HF viewer: false --- This repo contains all the docs published on https://huggingface.co/docs. The docs are generated with https://github.com/huggingface/doc-builder. <!-- comment to trigger webhook.= -->
mteb/flores
mteb
2025-05-04T13:09:38Z
30
0
[ "task_categories:translation", "annotations_creators:human-annotated", "multilinguality:multilingual", "language:ace", "language:acm", "language:acq", "language:aeb", "language:afr", "language:ajp", "language:aka", "language:als", "language:amh", "language:apc", "language:arb", "language:ars", "language:ary", "language:arz", "language:asm", "language:ast", "language:awa", "language:ayr", "language:azb", "language:azj", "language:bak", "language:bam", "language:ban", "language:bel", "language:bem", "language:ben", "language:bho", "language:bjn", "language:bod", "language:bos", "language:bug", "language:bul", "language:cat", "language:ceb", "language:ces", "language:cjk", "language:ckb", "language:crh", "language:cym", "language:dan", "language:deu", "language:dik", "language:dyu", "language:dzo", "language:ell", "language:eng", "language:epo", "language:est", "language:eus", "language:ewe", "language:fao", "language:fij", "language:fin", "language:fon", "language:fra", "language:fur", "language:fuv", "language:gaz", "language:gla", "language:gle", "language:glg", "language:grn", "language:guj", "language:hat", "language:hau", "language:heb", "language:hin", "language:hne", "language:hrv", "language:hun", "language:hye", "language:ibo", "language:ilo", "language:ind", "language:isl", "language:ita", "language:jav", "language:jpn", "language:kab", "language:kac", "language:kam", "language:kan", "language:kas", "language:kat", "language:kaz", "language:kbp", "language:kea", "language:khk", "language:khm", "language:kik", "language:kin", "language:kir", "language:kmb", "language:kmr", "language:knc", "language:kon", "language:kor", "language:lao", "language:lij", "language:lim", "language:lin", "language:lit", "language:lmo", "language:ltg", "language:ltz", "language:lua", "language:lug", "language:luo", "language:lus", "language:lvs", "language:mag", "language:mai", "language:mal", "language:mar", "language:min", "language:mkd", "language:mlt", "language:mni", "language:mos", "language:mri", "language:mya", "language:nld", "language:nno", "language:nob", "language:npi", "language:nso", "language:nus", "language:nya", "language:oci", "language:ory", "language:pag", "language:pan", "language:pap", "language:pbt", "language:pes", "language:plt", "language:pol", "language:por", "language:prs", "language:quy", "language:ron", "language:run", "language:rus", "language:sag", "language:san", "language:sat", "language:scn", "language:shn", "language:sin", "language:slk", "language:slv", "language:smo", "language:sna", "language:snd", "language:som", "language:sot", "language:spa", "language:srd", "language:srp", "language:ssw", "language:sun", "language:swe", "language:swh", "language:szl", "language:tam", "language:taq", "language:tat", "language:tel", "language:tgk", "language:tgl", "language:tha", "language:tir", "language:tpi", "language:tsn", "language:tso", "language:tuk", "language:tum", "language:tur", "language:twi", "language:tzm", "language:uig", "language:ukr", "language:umb", "language:urd", "language:uzn", "language:vec", "language:vie", "language:war", "language:wol", "language:xho", "language:ydd", "language:yor", "language:yue", "language:zho", "language:zsm", "language:zul", "license:cc-by-sa-4.0", "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2502.13595", "arxiv:2210.07316", "region:us", "mteb", "text" ]
[ "translation" ]
2024-05-14T20:39:18Z
null
--- annotations_creators: - human-annotated language: - ace - acm - acq - aeb - afr - ajp - aka - als - amh - apc - arb - ars - ary - arz - asm - ast - awa - ayr - azb - azj - bak - bam - ban - bel - bem - ben - bho - bjn - bod - bos - bug - bul - cat - ceb - ces - cjk - ckb - crh - cym - dan - deu - dik - dyu - dzo - ell - eng - epo - est - eus - ewe - fao - fij - fin - fon - fra - fur - fuv - gaz - gla - gle - glg - grn - guj - hat - hau - heb - hin - hne - hrv - hun - hye - ibo - ilo - ind - isl - ita - jav - jpn - kab - kac - kam - kan - kas - kat - kaz - kbp - kea - khk - khm - kik - kin - kir - kmb - kmr - knc - kon - kor - lao - lij - lim - lin - lit - lmo - ltg - ltz - lua - lug - luo - lus - lvs - mag - mai - mal - mar - min - mkd - mlt - mni - mos - mri - mya - nld - nno - nob - npi - nso - nus - nya - oci - ory - pag - pan - pap - pbt - pes - plt - pol - por - prs - quy - ron - run - rus - sag - san - sat - scn - shn - sin - slk - slv - smo - sna - snd - som - sot - spa - srd - srp - ssw - sun - swe - swh - szl - tam - taq - tat - tel - tgk - tgl - tha - tir - tpi - tsn - tso - tuk - tum - tur - twi - tzm - uig - ukr - umb - urd - uzn - vec - vie - war - wol - xho - ydd - yor - yue - zho - zsm - zul license: cc-by-sa-4.0 multilinguality: multilingual task_categories: - translation task_ids: [] configs: - config_name: default data_files: - split: dev path: dev.parquet - split: devtest path: devtest.parquet tags: - mteb - text --- <!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md --> <div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;"> <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">FloresBitextMining</h1> <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div> <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div> </div> FLORES is a benchmark dataset for machine translation between English and low-resource languages. | | | |---------------|---------------------------------------------| | Task category | t2t | | Domains | Non-fiction, Encyclopaedic, Written | | Reference | https://huggingface.co/datasets/facebook/flores | ## How to evaluate on this task You can evaluate an embedding model on this dataset using the following code: ```python import mteb task = mteb.get_tasks(["FloresBitextMining"]) evaluator = mteb.MTEB(task) model = mteb.get_model(YOUR_MODEL) evaluator.run(model) ``` <!-- Datasets want link to arxiv in readme to autolink dataset with paper --> To learn more about how to run models on `mteb` task check out the [GitHub repitory](https://github.com/embeddings-benchmark/mteb). ## Citation If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb). 
```bibtex @inproceedings{goyal2022flores, author = {Goyal, Naman and Gao, Cynthia and Chaudhary, Vishrav and Chen, Peng-Jen and Wenzek, Guillaume and Ju, Da and Krishnan, Sanjana and Ranzato, Marc'Aurelio and Guzm{\'a}n, Francisco}, booktitle = {Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies}, pages = {19--35}, title = {The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation}, year = {2022}, } @article{enevoldsen2025mmtebmassivemultilingualtext, title={MMTEB: Massive Multilingual Text Embedding Benchmark}, author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff}, publisher = {arXiv}, journal={arXiv preprint arXiv:2502.13595}, year={2025}, url={https://arxiv.org/abs/2502.13595}, doi = {10.48550/arXiv.2502.13595}, } @article{muennighoff2022mteb, author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils}, title = {MTEB: Massive Text Embedding Benchmark}, publisher = {arXiv}, journal={arXiv preprint arXiv:2210.07316}, year = {2022}, url = {https://arxiv.org/abs/2210.07316}, doi = {10.48550/ARXIV.2210.07316}, } ``` # Dataset Statistics <details> <summary> Dataset Statistics</summary> The following code contains the descriptive statistics from the task. These can also be obtained using: ```python import mteb task = mteb.get_task("FloresBitextMining") desc_stats = task.metadata.descriptive_stats ``` ```json { "devtest": { "num_samples": 41908944, "number_of_characters": 11221665014, "unique_pairs": 41545149, "min_sentence1_length": 10, "average_sentence1_length": 133.88150527009222, "max_sentence1_length": 597, "unique_sentence1": 205519, "min_sentence2_length": 10, "average_sentence2_length": 133.88150527009222, "max_sentence2_length": 597, "unique_sentence2": 205519 } } ``` </details> --- *This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*
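In addition to the task-level evaluation shown in the card, the raw parallel data can be inspected directly; a minimal sketch, assuming the default config resolves to the `dev`/`devtest` parquet splits declared in the YAML header:

```python
from datasets import load_dataset

# Splits follow the parquet files declared in the YAML header above.
flores = load_dataset("mteb/flores")
print(flores)            # expected: a DatasetDict with "dev" and "devtest" splits
print(flores["dev"][0])  # one parallel record
```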
HungVu2003/opt-350m_beta_1.0_alpha_0.2_num-company_2_dataset_1_for_gen_13_v2
HungVu2003
2025-05-04T12:51:41Z
0
0
[ "region:us" ]
[]
2025-05-04T12:51:39Z
null
--- dataset_info: features: - name: question dtype: string splits: - name: train num_bytes: 6681057 num_examples: 13750 download_size: 3326270 dataset_size: 6681057 configs: - config_name: default data_files: - split: train path: data/train-* ---
MBZUAI-IFM/AM_clean_final_90perc
MBZUAI-IFM
2025-05-04T12:03:06Z
0
0
[ "region:us" ]
[]
2025-05-04T11:55:18Z
null
--- dataset_info: features: - name: conversations list: - name: from dtype: string - name: value dtype: string - name: dataset_source dtype: string - name: metadata dtype: string splits: - name: train num_bytes: 28212498400 num_examples: 1260000 download_size: 13048260960 dataset_size: 28212498400 configs: - config_name: default data_files: - split: train path: data/train-* ---
beyoru/FC_bench_800
beyoru
2025-05-04T11:59:29Z
0
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-04T09:31:36Z
null
--- dataset_info: features: - name: question dtype: string - name: functions dtype: string - name: model_gen dtype: string - name: model_gen_base dtype: string splits: - name: train num_bytes: 733098 num_examples: 812 download_size: 269026 dataset_size: 733098 configs: - config_name: default data_files: - split: train path: data/train-* ---
wyyyz139/character
wyyyz139
2025-05-04T10:22:48Z
2
0
[ "license:apache-2.0", "size_categories:1K<n<10K", "format:json", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "region:us" ]
[]
2025-05-04T02:57:50Z
null
--- license: apache-2.0 ---
Yuyeong/rw_pubmed_mdlr_2_mask_public
Yuyeong
2025-05-04T10:06:57Z
0
0
[ "region:us" ]
[]
2025-05-04T10:06:25Z
null
--- dataset_info: features: - name: text dtype: string - name: label dtype: class_label: names: '0': '0' '1': '1' '2': '2' - name: group_idx dtype: int64 - name: node_idx dtype: int64 splits: - name: train num_bytes: 9996190.376832176 num_examples: 6000 - name: validation num_bytes: 83301586.47360146 num_examples: 50000 - name: test num_bytes: 166603172.94720292 num_examples: 100000 download_size: 131239520 dataset_size: 259900949.79763657 --- # Dataset Card for "rw_pubmed_mdlr_2_mask_public" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
BramVanroy/CommonCrawl-CreativeCommons
BramVanroy
2025-05-04T10:00:59Z
98
7
[ "task_categories:text-generation", "task_ids:language-modeling", "language:afr", "language:deu", "language:eng", "language:fra", "language:fry", "language:ita", "language:nld", "language:spa", "language:af", "language:de", "language:en", "language:fr", "language:fy", "language:it", "language:nl", "language:es", "license:cc", "size_categories:100M<n<1B", "modality:text", "doi:10.57967/hf/5340", "region:us" ]
[ "text-generation" ]
2025-01-28T13:12:13Z
6
--- license: cc task_categories: - text-generation task_ids: - language-modeling pretty_name: Common Crawl Creative Commons Corpus (C5) language: - afr - deu - eng - fra - fry - ita - nld - spa - af - de - en - fr - fy - it - nl - es configs: - config_name: v1 data_files: - data/CC-MAIN-2019-30/**/*.parquet - data/CC-MAIN-2020-05/**/*.parquet - data/CC-MAIN-2022-05/**/*.parquet - data/CC-MAIN-2023-06/**/*.parquet - data/CC-MAIN-2024-46/**/*.parquet - data/CC-MAIN-2024-51/**/*.parquet - data/CC-MAIN-2025-05/**/*.parquet - config_name: default data_files: data/**/*.parquet # Languages - config_name: afr data_files: data/**/afr/*.parquet - config_name: deu data_files: data/**/deu/*.parquet - config_name: eng data_files: data/**/eng/*.parquet - config_name: spa data_files: data/**/spa/*.parquet - config_name: fra data_files: data/**/fra/*.parquet - config_name: fry data_files: data/**/fry/*.parquet - config_name: ita data_files: data/**/ita/*.parquet - config_name: nld data_files: data/**/nld/*.parquet # Per-crawl # CC-MAIN-2019-30 - config_name: CC-MAIN-2019-30 data_files: data/CC-MAIN-2019-30/**/*.parquet - config_name: CC-MAIN-2019-30-afr data_files: data/CC-MAIN-2019-30/afr/*.parquet - config_name: CC-MAIN-2019-30-deu data_files: data/CC-MAIN-2019-30/deu/*.parquet - config_name: CC-MAIN-2019-30-eng data_files: data/CC-MAIN-2019-30/eng/*.parquet - config_name: CC-MAIN-2019-30-spa data_files: data/CC-MAIN-2019-30/spa/*.parquet - config_name: CC-MAIN-2019-30-fra data_files: data/CC-MAIN-2019-30/fra/*.parquet - config_name: CC-MAIN-2019-30-fry data_files: data/CC-MAIN-2019-30/fry/*.parquet - config_name: CC-MAIN-2019-30-ita data_files: data/CC-MAIN-2019-30/ita/*.parquet - config_name: CC-MAIN-2019-30-nld data_files: data/CC-MAIN-2019-30/nld/*.parquet # CC-MAIN-2020-05 - config_name: CC-MAIN-2020-05 data_files: data/CC-MAIN-2020-05/**/*.parquet - config_name: CC-MAIN-2020-05-afr data_files: data/CC-MAIN-2020-05/afr/*.parquet - config_name: CC-MAIN-2020-05-deu data_files: data/CC-MAIN-2020-05/deu/*.parquet - config_name: CC-MAIN-2020-05-eng data_files: data/CC-MAIN-2020-05/eng/*.parquet - config_name: CC-MAIN-2020-05-spa data_files: data/CC-MAIN-2020-05/spa/*.parquet - config_name: CC-MAIN-2020-05-fra data_files: data/CC-MAIN-2020-05/fra/*.parquet - config_name: CC-MAIN-2020-05-fry data_files: data/CC-MAIN-2020-05/fry/*.parquet - config_name: CC-MAIN-2020-05-ita data_files: data/CC-MAIN-2020-05/ita/*.parquet - config_name: CC-MAIN-2020-05-nld data_files: data/CC-MAIN-2020-05/nld/*.parquet # CC-MAIN-2022-05 - config_name: CC-MAIN-2022-05 data_files: data/CC-MAIN-2022-05/**/*.parquet - config_name: CC-MAIN-2022-05-afr data_files: data/CC-MAIN-2022-05/afr/*.parquet - config_name: CC-MAIN-2022-05-deu data_files: data/CC-MAIN-2022-05/deu/*.parquet - config_name: CC-MAIN-2022-05-eng data_files: data/CC-MAIN-2022-05/eng/*.parquet - config_name: CC-MAIN-2022-05-spa data_files: data/CC-MAIN-2022-05/spa/*.parquet - config_name: CC-MAIN-2022-05-fra data_files: data/CC-MAIN-2022-05/fra/*.parquet - config_name: CC-MAIN-2022-05-fry data_files: data/CC-MAIN-2022-05/fry/*.parquet - config_name: CC-MAIN-2022-05-ita data_files: data/CC-MAIN-2022-05/ita/*.parquet - config_name: CC-MAIN-2022-05-nld data_files: data/CC-MAIN-2022-05/nld/*.parquet # CC-MAIN-2023-06 - config_name: CC-MAIN-2023-06 data_files: data/CC-MAIN-2023-06/**/*.parquet - config_name: CC-MAIN-2023-06-afr data_files: data/CC-MAIN-2023-06/afr/*.parquet - config_name: CC-MAIN-2023-06-deu data_files: data/CC-MAIN-2023-06/deu/*.parquet - config_name: 
CC-MAIN-2023-06-eng data_files: data/CC-MAIN-2023-06/eng/*.parquet - config_name: CC-MAIN-2023-06-spa data_files: data/CC-MAIN-2023-06/spa/*.parquet - config_name: CC-MAIN-2023-06-fra data_files: data/CC-MAIN-2023-06/fra/*.parquet - config_name: CC-MAIN-2023-06-fry data_files: data/CC-MAIN-2023-06/fry/*.parquet - config_name: CC-MAIN-2023-06-ita data_files: data/CC-MAIN-2023-06/ita/*.parquet - config_name: CC-MAIN-2023-06-nld data_files: data/CC-MAIN-2023-06/nld/*.parquet # CC-MAIN-2024-46 - config_name: CC-MAIN-2024-46 data_files: data/CC-MAIN-2024-46/**/*.parquet - config_name: CC-MAIN-2024-46-afr data_files: data/CC-MAIN-2024-46/afr/*.parquet - config_name: CC-MAIN-2024-46-deu data_files: data/CC-MAIN-2024-46/deu/*.parquet - config_name: CC-MAIN-2024-46-eng data_files: data/CC-MAIN-2024-46/eng/*.parquet - config_name: CC-MAIN-2024-46-spa data_files: data/CC-MAIN-2024-46/spa/*.parquet - config_name: CC-MAIN-2024-46-fra data_files: data/CC-MAIN-2024-46/fra/*.parquet - config_name: CC-MAIN-2024-46-fry data_files: data/CC-MAIN-2024-46/fry/*.parquet - config_name: CC-MAIN-2024-46-ita data_files: data/CC-MAIN-2024-46/ita/*.parquet - config_name: CC-MAIN-2024-46-nld data_files: data/CC-MAIN-2024-46/nld/*.parquet # CC-MAIN-2024-51 - config_name: CC-MAIN-2024-51 data_files: data/CC-MAIN-2024-51/**/*.parquet - config_name: CC-MAIN-2024-51-afr data_files: data/CC-MAIN-2024-51/afr/*.parquet - config_name: CC-MAIN-2024-51-deu data_files: data/CC-MAIN-2024-51/deu/*.parquet - config_name: CC-MAIN-2024-51-eng data_files: data/CC-MAIN-2024-51/eng/*.parquet - config_name: CC-MAIN-2024-51-spa data_files: data/CC-MAIN-2024-51/spa/*.parquet - config_name: CC-MAIN-2024-51-fra data_files: data/CC-MAIN-2024-51/fra/*.parquet - config_name: CC-MAIN-2024-51-fry data_files: data/CC-MAIN-2024-51/fry/*.parquet - config_name: CC-MAIN-2024-51-ita data_files: data/CC-MAIN-2024-51/ita/*.parquet - config_name: CC-MAIN-2024-51-nld data_files: data/CC-MAIN-2024-51/nld/*.parquet # CC-MAIN-2025-05 - config_name: CC-MAIN-2025-05 data_files: data/CC-MAIN-2025-05/**/*.parquet - config_name: CC-MAIN-2025-05-afr data_files: data/CC-MAIN-2025-05/afr/*.parquet - config_name: CC-MAIN-2025-05-deu data_files: data/CC-MAIN-2025-05/deu/*.parquet - config_name: CC-MAIN-2025-05-eng data_files: data/CC-MAIN-2025-05/eng/*.parquet - config_name: CC-MAIN-2025-05-spa data_files: data/CC-MAIN-2025-05/spa/*.parquet - config_name: CC-MAIN-2025-05-fra data_files: data/CC-MAIN-2025-05/fra/*.parquet - config_name: CC-MAIN-2025-05-fry data_files: data/CC-MAIN-2025-05/fry/*.parquet - config_name: CC-MAIN-2025-05-ita data_files: data/CC-MAIN-2025-05/ita/*.parquet - config_name: CC-MAIN-2025-05-nld data_files: data/CC-MAIN-2025-05/nld/*.parquet --- # The Common Crawl Creative Commons Corpus (C5) > **Raw CommonCrawl crawls, annotated with Creative Commons license information** C5 is an effort to collect Creative Commons-licensed web data in one place. The licensing information is extracted from the web pages based on whether they link to Creative Commons licenses either overtly in `a` tags (like in the footer of Wikipedia) or in metadata fields indicating deliberate Creative Commons publication. **However, false positives may occur! See Recommendations and Caveats below!** Also see [Personal and Sensitive Information](#personal-and-sensitive-information). ## Code I am very grateful to the Flemish Supercomputer to provide compute necessary to create this dataset, but as you can tell there is still a lot of data left to be processed. 
Therefore, I am happy to collaborate to process as many Common Crawl crawls as possible. [Shoot me a message](mailto:[email protected]) if you want to sponsor this project with compute! You can also simply run the code yourself if you'd like. You can find the whole code base, based on `datatrove`, on [Github](https://github.com/BramVanroy/CommonCrawl-CreativeCommons). If you use the code, please [reference my work](https://github.com/BramVanroy/CommonCrawl-CreativeCommons?tab=readme-ov-file#citation) accordingly and share your processed crawls with the rest of the world (or get in touch with me so I can add them to this repo). ## Usage ```python from datasets import load_dataset # Everything, most recent -- massive, you will need streaming ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", streaming=True) # v1 (2019-30, 2020-05, 2022-05, 2023-06, 2024-51, 2025-05, 2024-46) ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", "v1", streaming=True) # Single dump, all languages -- large, you may need streaming on non-server hardware ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", "CC-MAIN-2019-30") # Single language, all dumps -- very large, you will likely need streaming ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", "nld", streaming=True) # Single language, single dump ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", "CC-MAIN-2019-30-nld") ``` ## Progress In the `v1` release, the following crawls are included: - CC-MAIN-2019-30 - CC-MAIN-2020-05 - CC-MAIN-2023-06 - CC-MAIN-2024-51 - CC-MAIN-2024-46 - CC-MAIN-2025-05 - CC-MAIN-2022-05 Other crawls are continuously being added. ## Languages The following languages are included. This is a limited set due to computational and storage limitations. - Afrikaans: afr - German: deu - English: eng - French: fra - Frysian: fry - Italian: ita - Dutch: nld - Spanish: spa ## Quantity Detailed number of tokens (Llama 3.3 tokenizer) and number of documents are given in the [counts.json](https://huggingface.co/datasets/BramVanroy/CommonCrawl-CreativeCommons/blob/main/counts.json) file. | Language | Number of Documents | Number of Tokens | | --------- | ------------------- | ------------------- | | afr | 312,262 | 358,873,448 | | deu | 9,530,746 | 11,362,859,534 | | eng | 92,635,372 | 87,537,859,958 | | fra | 9,234,900 | 12,366,480,025 | | fry | 230,910 | 197,430,774 | | ita | 10,734,597 | 11,913,669,333 | | nld | 2,827,636 | 2,757,074,705 | | spa | 22,226,944 | 22,515,709,432 | | **Total** | **147,733,367** | **149,009,957,209** | ## Fields In some cases, multiple licenses are found on a single page. All licenses are collected in `potential_licenses`. From these, the "best guess" is selected based on three criteria: 1. location_preference_order: meta_tag, json-ld, link_tag, a_tag 2. head_preference_order: True, False 3. footer_preference_order: True, False Based on these criteria, the "best guessed" license is picked as the one in the `license_*` columns. Potential disagreement between multiple licenses is given in `license_disagreement`. - text: the extracted text (unmodified) - id: WARC-Record-ID - dump: Common Crawl crawl - url: original url for document - date: crawl date - file_path: file path on the S3 bucket - license_abbr: the license type. Possible values: "cc-unknown" (recommended to filter this one out), "by", "by-sa", "by-nd", "by-nc", "by-nc-sa", "by-nc-nd", "zero", "certification", "mark". If multiple licenses were found, they are all listed in `potential_licenses`. - license_version: the license version, e.g. "4.0" - license_location: the location where the license was found. Possible values: "meta_tag", "json-ld", "link_tag", "a_tag" - license_in_head: whether the license was found inside a `head` HTML element - license_in_footer: whether the license was found inside a `footer` HTML element, or an HTML element that had `footer` in the ID or class name - potential_licenses: - abbr: list of all found license abbreviations - version: list of all found license versions - location: list of all found license locations - in_head: list of whether licenses were found in the head - in_footer: list of whether licenses were found in a footer - license_parse_error: whether there was a problem when trying to extract the license, e.g. an unparseable HTML document - license_disagreement: whether the `potential_licenses["abbr"]` disagree, i.e., different types of licenses were found. License *versions* are not included in the comparison! - language: the language, as detected by glotlid - language_score: the language identification confidence score - found_in_fw: whether this sample was found in FineWeb(-2). For non-English, crawls that are more recent than FW2 (everything after 2024-18) are marked as None. For English, crawls that are more recent than FW v1.3 (after 2024-51) are marked as None. ## Recommendations and Caveats - Raw CommonCrawl data is processed in an attempt to extract licensing information. No quality filtering is done!! It is **highly** recommended to filter this data further on quality, fluency, toxicity, etc. - Similarly, the data has **not been deduplicated**. - The licenses include all possible Creative Commons licenses, including non-commercial ones. Take care about what kind of data you wish to use, and filter out non-commercial licenses when needed. - The column `license_disagreement` indicates whether multiple licenses were found that do not have the same abbreviation, e.g. `cc-by` and `cc-by-nc`. It is recommended to filter these out. - The column `license_parse_error` indicates whether an error occurred when parsing the license. You probably want to filter out documents where this was the case, though this should be extremely rare. - Unsurprisingly, the data contains a lot of Wikipedia/Wikimedia content. Depending on what you need, you may wish to filter those out. For Wikipedia specifically, you may opt to use the more thoroughly parsed (but potentially more outdated) [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) set. - In exceptional cases, a link to creativecommons.org is found but the exact license could not be found. These are under `license_abbr="cc-unknown"` which you may wish to filter out. Recommendation: ```python from datasets import load_dataset ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", "CC-MAIN-2019-30", split="train") ds = ds.filter( lambda x: ( (not x["license_disagreement"]) and # Only use pages with a consistent license x["found_in_fw"] and # Only use pages that are in FineWeb(-2) "nc" not in x["license_abbr"] and # Exclude non-commercial licenses x["license_abbr"] != "cc-unknown" and # Exclude unknown licenses "wiki" not in x["url"] # Exclude Wiki-like pages (best to get those from a more reliable parser) ), num_proc=16 ) ``` ## Personal and Sensitive Information C5 is a heavily filtered version of the Common Crawl dataset. CommonCrawl respects robots.txt and will not include websites if their robots.txt says so. 
Despite these best efforts on such large volumes of text, you may still find that your personal information is present in the dataset. In that case, you can submit a [removal request](https://docs.google.com/forms/d/e/1FAIpQLSddAIuUui5xnAzBqft6MnzPYihr-AaS-Nj8x01Y6AM8NQ0YLQ/viewform?usp=sharing). ## Citation In the current absence of a publication, please cite [the dataset](https://huggingface.co/datasets/BramVanroy/CommonCrawl-CreativeCommons) as follows. Including a footnote URL to this page is also appreciated! ```bibtex @misc{vanroy2025C5, author = { Bram Vanroy }, title = { CommonCrawl CreativeCommons Corpus (C5) }, year = 2025, url = { https://huggingface.co/datasets/BramVanroy/CommonCrawl-CreativeCommons }, doi = { 10.57967/hf/5340 }, publisher = { Hugging Face } } ``` If you use or modify [the software](https://github.com/BramVanroy/CommonCrawl-CreativeCommons), please cite: ```bibtex @software{Vanroy_CommonCrawl-CreativeCommons_2025, author = {Vanroy, Bram}, license = {GPL-3.0}, month = feb, title = {{CommonCrawl-CreativeCommons}}, url = {https://github.com/BramVanroy/CommonCrawl-CreativeCommons}, version = {1.3.0}, year = {2025} } ``` ## Acknowledgments - The [Common Crawl](https://commoncrawl.org/) non-profit organization. - [TNO](https://www.tno.nl/nl/), who funded the work hours needed to develop this code. They intend to use (parts of) [the generated material](https://huggingface.co/datasets/BramVanroy/CommonCrawl-CreativeCommons) for the [GPT-NL project](https://gpt-nl.nl/). - [Flemish Supercomputer Center](https://www.vscentrum.be/) for part of the compute under grant 2024-107 - Guilherme Penedo ([@guipenedo](https://huggingface.co/guipenedo)) and the rest of the [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) and [datatrove](https://github.com/huggingface/datatrove) team for the help and insights - ML6 and specifically Robin Van Craenenbroek for their [Fondant Creative Commons](https://github.com/ml6team/fondant-usecase-filter-creative-commons/tree/add-fondant-usecase-cc-image-extraction) filter for image datasets. While my approach is different, their code did serve as inspiration.
LuftmenschPose/sub-news-sapo
LuftmenschPose
2025-05-04T08:58:16Z
0
0
[ "license:apache-2.0", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-04T08:53:54Z
null
--- license: apache-2.0 ---
jlpang888/ultrafeedback_identical_pairs_7387_revised
jlpang888
2025-05-04T08:52:42Z
0
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-04T08:52:38Z
null
--- dataset_info: features: - name: prompt dtype: string - name: prompt_id dtype: string - name: chosen list: - name: content dtype: string - name: role dtype: string - name: rejected list: - name: content dtype: string - name: role dtype: string - name: messages list: - name: content dtype: string - name: role dtype: string - name: score_chosen dtype: float64 - name: score_rejected dtype: float64 splits: - name: train num_bytes: 42241023 num_examples: 7387 - name: test num_bytes: 13161585 num_examples: 2000 download_size: 30774600 dataset_size: 55402608 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
Yuyeong/rw_cora_mdlr_1_mask_public
Yuyeong
2025-05-04T08:29:16Z
0
0
[ "size_categories:100K<n<1M", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-04T08:28:51Z
null
--- dataset_info: features: - name: text dtype: string - name: label dtype: class_label: names: '0': '0' '1': '1' '2': '2' '3': '3' '4': '4' '5': '5' '6': '6' - name: group_idx dtype: int64 - name: node_idx dtype: int64 splits: - name: train num_bytes: 20069274.933530282 num_examples: 14000 - name: validation num_bytes: 71675981.90546529 num_examples: 50000 - name: test num_bytes: 143351963.81093058 num_examples: 100000 download_size: 101290106 dataset_size: 235097220.64992616 --- # Dataset Card for "rw_cora_mdlr_1_mask_public" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
seungheondoh/socialfx-cls-eval
seungheondoh
2025-05-04T08:09:00Z
0
0
[ "region:us" ]
[]
2025-05-04T08:08:56Z
null
--- dataset_info: features: - name: input dtype: string - name: output sequence: string - name: binary sequence: int64 - name: labels sequence: string splits: - name: eq num_bytes: 66726 num_examples: 372 - name: reverb num_bytes: 1429379 num_examples: 3833 download_size: 50308 dataset_size: 1496105 configs: - config_name: default data_files: - split: eq path: data/eq-* - split: reverb path: data/reverb-* ---
alexchilton/nanobody-contact-maps
alexchilton
2025-05-04T08:00:01Z
0
0
[ "task_categories:text-generation", "license:mit", "region:us", "protein", "contact-map", "structure", "nanobody" ]
[ "text-generation", "image-generation" ]
2025-05-04T07:59:34Z
null
--- license: mit task_categories: - text-generation - image-generation tags: - protein - contact-map - structure - nanobody --- # Protein Contact Map Dataset ## Dataset Description This dataset contains protein structures with contact maps and related information from nanobody sequences. ### Dataset Summary - **Number of proteins:** 2992 - **Source:** Nanobody protein structures (nanos_networkx_small) - **Created by:** alexchilton - **Date:** 2025-05-04 ### Dataset Structure Each protein entry contains: - `amino_acid_sequence`: List of amino acid names - `length`: Number of residues - `c_alpha_coordinates`: List of [x,y,z] coordinates for C-alpha atoms - `distance_matrix`: Pairwise distance matrix between C-alpha atoms - `contact_maps`: List of binary contact maps with different distance thresholds - `contact_map_configs`: Configuration for each contact map (lower/upper bounds) ### Usage ```python from datasets import load_dataset dataset = load_dataset("alexchilton/nanobody-contact-maps") # Access a protein protein = dataset['train'][0] print(f"Length: {protein['length']}") print(f"First 10 residues: {protein['amino_acid_sequence'][:10]}") ``` ### Citation If you use this dataset, please cite: ``` @dataset{protein_contact_maps, title={Nanobody Protein Contact Map Dataset}, author={Alex Chilton}, year={2025}, url={https://huggingface.co/datasets/alexchilton/nanobody-contact-maps} } ```
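To make the relationship between `distance_matrix`, `contact_map_configs`, and `contact_maps` concrete, here is a hedged sketch of how a binary contact map can be derived from the distance matrix given a (lower, upper) bound pair; the dataset's actual generation code may differ:

```python
import numpy as np

def contact_map(distance_matrix, lower: float, upper: float) -> np.ndarray:
    """Binary contact map: 1 where lower <= distance < upper, else 0 (assumed convention)."""
    d = np.asarray(distance_matrix, dtype=float)
    return ((d >= lower) & (d < upper)).astype(np.int8)
```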
Kgshop/Aikas
Kgshop
2025-05-04T07:33:30Z
0
0
[ "license:apache-2.0", "region:us" ]
[]
2025-05-04T06:37:57Z
null
--- license: apache-2.0 ---
rlawltjd/korean-nl2bash
rlawltjd
2025-05-04T07:04:23Z
0
0
[ "region:us" ]
[]
2025-05-04T07:04:15Z
null
--- dataset_info: features: - name: instruction dtype: string - name: output dtype: string splits: - name: train num_bytes: 1170873 num_examples: 8089 download_size: 448025 dataset_size: 1170873 configs: - config_name: default data_files: - split: train path: data/train-* ---
arielcerdap/tts-disfluencies-DA
arielcerdap
2025-05-04T06:55:17Z
0
0
[ "region:us" ]
[]
2025-05-04T06:54:53Z
null
--- dataset_info: features: - name: audio dtype: audio: sampling_rate: 44100 - name: text dtype: string - name: speaker_tag_used dtype: string - name: temperature_used dtype: float32 splits: - name: train num_bytes: 515151319.0 num_examples: 500 download_size: 499156183 dataset_size: 515151319.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
f1rdavs/tajik_lemmas
f1rdavs
2025-05-04T06:50:14Z
0
0
[ "license:apache-2.0", "size_categories:10K<n<100K", "format:csv", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-04T06:46:55Z
null
--- license: apache-2.0 ---
Hkang/summarize_sft-test_lm-pythia1b-oai-summary-PPO-0KL-newrm_12K_seed-42_numex-250
Hkang
2025-05-04T06:48:34Z
0
0
[ "size_categories:n<1K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-04T06:48:33Z
null
--- dataset_info: features: - name: id dtype: string - name: subreddit dtype: string - name: title dtype: string - name: post dtype: string - name: summary dtype: string - name: query_input_ids sequence: int64 - name: query_attention_mask sequence: int64 - name: query dtype: string - name: reference_response dtype: string - name: reference_response_input_ids sequence: int64 - name: reference_response_attention_mask sequence: int64 - name: reference_response_token_len dtype: int64 - name: query_reference_response dtype: string - name: query_reference_response_input_ids sequence: int64 - name: query_reference_response_attention_mask sequence: int64 - name: query_reference_response_token_response_label sequence: int64 - name: query_reference_response_token_len dtype: int64 - name: model_response dtype: string splits: - name: test num_bytes: 6868072 num_examples: 250 download_size: 1158476 dataset_size: 6868072 configs: - config_name: default data_files: - split: test path: data/test-* ---
upvantage/deberta-1m-v2humanized
upvantage
2025-05-04T06:39:20Z
0
0
[ "size_categories:1M<n<10M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-04T06:30:44Z
null
--- dataset_info: features: - name: text dtype: string - name: label dtype: class_label: names: '0': human '1': ai - name: type dtype: string splits: - name: train num_bytes: 1777430766 num_examples: 910928 - name: validation num_bytes: 197496555 num_examples: 101214 download_size: 1192474307 dataset_size: 1974927321 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* ---
gabrielbo/mmlu-pro-baseline-scored
gabrielbo
2025-05-04T06:22:38Z
0
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "modality:timeseries", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-03T23:40:15Z
null
--- dataset_info: features: - name: question_index dtype: int64 - name: question dtype: string - name: options dtype: string - name: category dtype: string - name: correct_answer dtype: string - name: samples sequence: string - name: GRMLlama32_scores sequence: float32 - name: GRMLlama32_scores_normalized sequence: float32 - name: OffsetBias_scores sequence: float32 - name: OffsetBias_scores_normalized sequence: float32 - name: GRM_scores sequence: float32 - name: GRM_scores_normalized sequence: float32 - name: Skyworks_scores sequence: float32 - name: Skyworks_scores_normalized sequence: float32 - name: URM_scores sequence: float32 - name: URM_scores_normalized sequence: float32 - name: QRM_scores sequence: float32 - name: QRM_scores_normalized sequence: float32 - name: GPM_scores sequence: float32 - name: GPM_scores_normalized sequence: float32 - name: GRMGemma_scores sequence: float32 - name: GRMGemma_scores_normalized sequence: float32 - name: ArmorRM_scores sequence: float32 - name: ArmorRM_scores_normalized sequence: float32 - name: InternLM2Reward7B_scores sequence: float32 - name: InternLM2Reward7B_scores_normalized sequence: float32 - name: DecisionTreeReward8B_scores sequence: float32 - name: DecisionTreeReward8B_scores_normalized sequence: float32 splits: - name: train num_bytes: 13666177 num_examples: 87 download_size: 4357215 dataset_size: 13666177 configs: - config_name: default data_files: - split: train path: data/train-* ---
flyingbugs/OpenR1-Math-220k-pruned-keep-0.75-end-start-0.0
flyingbugs
2025-05-04T05:15:27Z
0
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-04T05:14:21Z
null
--- dataset_info: features: - name: problem dtype: string - name: solution dtype: string - name: answer dtype: string - name: problem_type dtype: string - name: question_type dtype: string - name: source dtype: string - name: uuid dtype: string - name: is_reasoning_complete sequence: bool - name: generations sequence: string - name: correctness_math_verify sequence: bool - name: correctness_llama sequence: bool - name: finish_reasons sequence: string - name: correctness_count dtype: int64 - name: messages list: - name: content dtype: string - name: role dtype: string splits: - name: train num_bytes: 4693668410 num_examples: 93733 download_size: 2033374084 dataset_size: 4693668410 configs: - config_name: default data_files: - split: train path: data/train-* ---
HungVu2003/opt-350m_beta_0.5_alpha_0.6_num-company_3_dataset_0_for_gen_16
HungVu2003
2025-05-04T03:38:07Z
0
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-04T03:38:06Z
null
--- dataset_info: features: - name: question dtype: string splits: - name: train num_bytes: 7468487 num_examples: 12500 download_size: 1913775 dataset_size: 7468487 configs: - config_name: default data_files: - split: train path: data/train-* ---
ma921/oasst1-filtered
ma921
2025-05-04T03:04:05Z
0
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-04T03:04:03Z
null
--- dataset_info: features: - name: prompt dtype: string - name: chosen dtype: string - name: rejected dtype: string splits: - name: train num_bytes: 36308251.51736613 num_examples: 16419 - name: test num_bytes: 1922678.4789915967 num_examples: 872 download_size: 18327886 dataset_size: 38230929.99635773 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
ma921/golden-hh-filtered
ma921
2025-05-04T02:34:45Z
0
0
[ "region:us" ]
[]
2025-05-04T02:34:39Z
null
--- dataset_info: features: - name: prompt dtype: string - name: chosen dtype: string - name: rejected dtype: string splits: - name: train num_bytes: 5456491.693568413 num_examples: 12066 - name: test num_bytes: 293178.0 num_examples: 654 download_size: 3407868 dataset_size: 5749669.693568413 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
mlfoundations-dev/mix_avg_domain
mlfoundations-dev
2025-05-04T01:58:57Z
0
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-04T01:54:40Z
null
--- dataset_info: features: - name: instruction_seed dtype: string - name: _source dtype: string - name: gpt41_mini_response dtype: string - name: __original_row_idx dtype: int64 - name: length dtype: int64 - name: domain dtype: string - name: r1_response dtype: string - name: r1_reasoning_content dtype: string - name: extract_solution dtype: string - name: url dtype: string - name: filename dtype: string - name: success dtype: bool - name: page_count dtype: int64 - name: page_number dtype: int64 - name: question_choices_solutions dtype: string - name: extracted_question dtype: string - name: extracted_answer_choices sequence: string - name: matched_solution dtype: string - name: qa_validation_outputs dtype: bool - name: classifier_reasoning dtype: string - name: is_organic_chemistry dtype: bool - name: ms_id dtype: int64 - name: reasoning dtype: string - name: deepseek_solution dtype: string - name: final_reasoning_trace dtype: string - name: conversations list: - name: from dtype: string - name: value dtype: string - name: id dtype: string - name: output dtype: string - name: source dtype: string - name: license dtype: string - name: dataset dtype: string - name: split dtype: string - name: difficulty dtype: int64 - name: solution dtype: string - name: index dtype: string - name: difficulty_reasoning dtype: string - name: messages list: - name: content dtype: string - name: role dtype: string - name: response_seed dtype: string splits: - name: train num_bytes: 12328252550.0 num_examples: 94797 download_size: 5254951315 dataset_size: 12328252550.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
ParkSY/data_nerf_oorg_style_anything_depthmap_normalmap
ParkSY
2025-05-04T00:33:09Z
0
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-04T00:33:04Z
null
--- dataset_info: features: - name: input_image dtype: string - name: edit_prompt dtype: string - name: edited_image dtype: string - name: label dtype: int64 - name: depthmap dtype: string - name: normal_map dtype: string splits: - name: train num_bytes: 695784 num_examples: 1638 download_size: 70543 dataset_size: 695784 configs: - config_name: default data_files: - split: train path: data/train-* ---
HungVu2003/opt-350m_beta_0.0_alpha_0.2_num-company_2_dataset_1_for_gen_3_v2
HungVu2003
2025-05-04T00:23:27Z
0
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-04T00:23:16Z
null
--- dataset_info: features: - name: question dtype: string splits: - name: train num_bytes: 2007819 num_examples: 13750 download_size: 1031137 dataset_size: 2007819 configs: - config_name: default data_files: - split: train path: data/train-* ---
HungVu2003/opt-350m_beta_1.0_alpha_0.2_num-company_2_dataset_1_for_gen_3_v2
HungVu2003
2025-05-04T00:13:39Z
0
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-04T00:13:37Z
null
--- dataset_info: features: - name: question dtype: string splits: - name: train num_bytes: 6422583 num_examples: 13750 download_size: 3279322 dataset_size: 6422583 configs: - config_name: default data_files: - split: train path: data/train-* ---
edwindn/voice_cloning_finetune_0.1
edwindn
2025-05-04T00:06:08Z
0
0
[ "size_categories:n<1K", "format:parquet", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-04T00:06:03Z
null
--- dataset_info: features: - name: input_ids sequence: int32 - name: speaker_embedding sequence: sequence: sequence: float32 splits: - name: train num_bytes: 6258356 num_examples: 338 download_size: 3471679 dataset_size: 6258356 configs: - config_name: default data_files: - split: train path: data/train-* ---
HungVu2003/opt-350m_beta_1.0_alpha_0.6_num-company_3_dataset_2_for_gen_16
HungVu2003
2025-05-03T23:58:56Z
0
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-03T23:58:55Z
null
--- dataset_info: features: - name: question dtype: string splits: - name: train num_bytes: 5074613 num_examples: 12500 download_size: 1290108 dataset_size: 5074613 configs: - config_name: default data_files: - split: train path: data/train-* ---
Triangle104/jondurbin_gutenberg-dpo-v0.1
Triangle104
2025-05-03T22:42:58Z
0
0
[ "language:en", "license:cc-by-4.0", "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "dpo" ]
[]
2025-05-03T22:42:58Z
null
--- license: cc-by-4.0 language: - en tags: - dpo pretty_name: Gutenberg DPO size_categories: - n<1K --- # Gutenberg DPO ![gutenberg](gutenberg.png) ## Overview This is a dataset meant to enhance the novel-writing capabilities of LLMs by using public domain books from [Project Gutenberg](https://gutenberg.org/). ## Process First, each book is parsed, split into chapters, and cleaned up from the original format (removing superfluous newlines, illustration tags, etc.). Once we have chapters, an LLM is prompted with each chapter to create a synthetic prompt that would result in that chapter being written. Each chapter has a summary created as well, so that the prompts for each chapter after the first also include a summary of the previous chapter to provide additional context. We then use the synthetic prompt with the previous chapter summary to write the chapter with an LLM (llama-2-13b-chat, bagel-7b-v0.1, dolphin-2.2-34b). The human-written text, that is, the original chapter, is used as the "chosen" value, and the LLM-written chapter is used as the "rejected" value (a schematic sketch of this pairing is given below the book list). ## Books used These books were chosen mainly because they appeared in the popular section on Project Gutenberg, and they function correctly with the chapterize library. - Huckleberry Finn - Treasure Island - Anna Karenina - Uncle Tom’s Cabin - Wuthering Heights - Madame Bovary - The Turn of the Screw - The War of the Worlds - A Study in Scarlet - Middlemarch - Pride and Prejudice - The Brothers Karamazov - Through the Looking Glass - Moby Dick - Frankenstein - A Tale of Two Cities
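A schematic sketch of the pairing described above; `generate` stands in for whichever LLM call produced the synthetic prompt and the rejected chapter, so the function names and prompt wording here are illustrative, not the author's actual pipeline:

```python
def build_dpo_row(chapter_text: str, previous_summary: str, generate) -> dict:
    # Ask an LLM for a prompt that would plausibly produce this chapter.
    prompt = generate(f"Write a prompt that would result in this chapter:\n{chapter_text}")
    if previous_summary:
        prompt += f"\n\nSummary of the previous chapter: {previous_summary}"
    rejected = generate(prompt)  # chapter written by the LLM
    return {"prompt": prompt, "chosen": chapter_text, "rejected": rejected}
```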
AJ97/dd
AJ97
2025-05-03T22:31:37Z
0
0
[ "license:apache-2.0", "region:us" ]
[]
2025-05-03T22:31:37Z
null
--- license: apache-2.0 ---
mlfoundations-dev/e1_science_ms_qwq
mlfoundations-dev
2025-05-03T21:54:45Z
0
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-03T21:54:23Z
null
--- dataset_info: features: - name: instruction_seed dtype: string - name: _source dtype: string - name: gpt41_mini_response dtype: string - name: __original_row_idx dtype: int64 - name: length dtype: int64 - name: domain dtype: string - name: r1_response dtype: string - name: r1_reasoning_content dtype: string - name: extract_solution dtype: string - name: url dtype: string - name: filename dtype: string - name: success dtype: bool - name: page_count dtype: int64 - name: page_number dtype: int64 - name: question_choices_solutions dtype: string - name: extracted_question dtype: string - name: extracted_answer_choices sequence: string - name: matched_solution dtype: string - name: qa_validation_outputs dtype: bool - name: classifier_reasoning dtype: string - name: is_organic_chemistry dtype: bool - name: ms_id dtype: int64 - name: final_reasoning_trace dtype: string - name: conversations list: - name: from dtype: string - name: value dtype: string splits: - name: train num_bytes: 1399362722 num_examples: 31600 download_size: 414000478 dataset_size: 1399362722 configs: - config_name: default data_files: - split: train path: data/train-* ---
kothasuhas/llp-gold-37m-1.5m_clip0.004_T2048.0_I2048
kothasuhas
2025-05-03T21:38:44Z
0
0
[ "size_categories:1M<n<10M", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-03T21:37:17Z
null
--- dataset_info: features: - name: text dtype: string - name: p_log_probs dtype: float32 - name: q_log_probs dtype: float32 - name: num_tokens dtype: float32 - name: log_weight dtype: float64 splits: - name: train num_bytes: 3605804917.0 num_examples: 1500000 download_size: 760552746 dataset_size: 3605804917.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
nikhilchandak/MATH_mc
nikhilchandak
2025-05-03T21:25:49Z
0
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-03T21:25:47Z
null
--- dataset_info: features: - name: A dtype: string - name: B dtype: string - name: C dtype: string - name: D dtype: string - name: Answer dtype: string - name: Question dtype: string - name: Level dtype: string - name: Type dtype: string - name: Question_ID dtype: int64 splits: - name: test num_bytes: 1384054 num_examples: 4914 download_size: 688608 dataset_size: 1384054 configs: - config_name: default data_files: - split: test path: data/test-* ---
dgambettaphd/D_llm2_gen9_WXS_doc1000_synt64_lr1e-04_acm_SYNLAST
dgambettaphd
2025-05-03T20:53:46Z
0
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-03T20:53:43Z
null
--- dataset_info: features: - name: id_doc dtype: int64 - name: text dtype: string - name: dataset dtype: string - name: gen dtype: int64 - name: synt dtype: int64 - name: MPP dtype: float64 splits: - name: train num_bytes: 14128528 num_examples: 25000 download_size: 8113071 dataset_size: 14128528 configs: - config_name: default data_files: - split: train path: data/train-* ---
anonymousEcaiHateLLM/lgb_data_2_label
anonymousEcaiHateLLM
2025-05-03T20:43:27Z
0
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-03T20:43:25Z
null
--- dataset_info: features: - name: text dtype: string - name: label_id dtype: int64 - name: language dtype: string - name: unsloth/Qwen2.5-14B-Instruct-bnb-4bit_label_1 dtype: float64 - name: unsloth/Qwen2.5-14B-Instruct-bnb-4bit_label_2 dtype: float64 - name: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit_label_1 dtype: float64 - name: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit_label_2 dtype: float64 - name: unsloth/gemma-2-9b-it-bnb-4bit_label_1 dtype: float64 - name: unsloth/gemma-2-9b-it-bnb-4bit_label_2 dtype: float64 - name: unsloth/mistral-7b-instruct-v0.3-bnb-4bit_label_1 dtype: float64 - name: unsloth/mistral-7b-instruct-v0.3-bnb-4bit_label_2 dtype: float64 - name: ds dtype: string splits: - name: lgb_data_2_label num_bytes: 13127110 num_examples: 63512 download_size: 6141595 dataset_size: 13127110 configs: - config_name: default data_files: - split: lgb_data_2_label path: data/lgb_data_2_label-* ---
HungVu2003/opt-350m_beta_1.0_alpha_0.0_num-company_2_dataset_1_for_gen_9_v2
HungVu2003
2025-05-03T20:15:56Z
0
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-03T20:15:55Z
null
--- dataset_info: features: - name: question dtype: string splits: - name: train num_bytes: 6615412 num_examples: 12500 download_size: 3365999 dataset_size: 6615412 configs: - config_name: default data_files: - split: train path: data/train-* ---
KBayoud/Darija-VLM-Dataset-VQA-V1.0
KBayoud
2025-05-03T20:15:26Z
0
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-03T19:34:49Z
null
--- dataset_info: features: - name: image dtype: image - name: question dtype: string - name: answer dtype: string - name: source dtype: string splits: - name: train num_bytes: 242695615.94 num_examples: 3780 download_size: 242417809 dataset_size: 242695615.94 configs: - config_name: default data_files: - split: train path: data/train-* ---
Eluza133/Z1e1u
Eluza133
2025-05-03T20:01:52Z
530
0
[ "license:apache-2.0", "region:us" ]
[]
2025-03-08T07:48:07Z
null
--- license: apache-2.0 ---
mteb/banking77
mteb
2025-05-03T20:01:44Z
6,306
3
[ "task_categories:text-classification", "annotations_creators:human-annotated", "multilinguality:monolingual", "language:eng", "license:mit", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2003.04807", "arxiv:2502.13595", "arxiv:2210.07316", "region:us", "mteb", "text" ]
[ "text-classification" ]
2022-05-17T12:14:06Z
null
--- annotations_creators: - human-annotated language: - eng license: mit multilinguality: monolingual task_categories: - text-classification task_ids: [] tags: - mteb - text configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* dataset_info: features: - name: text dtype: string - name: label dtype: int64 splits: - name: train num_bytes: 715028 num_examples: 10003 - name: test num_bytes: 204010 num_examples: 3080 download_size: 379134 dataset_size: 919038 --- <!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md --> <div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;"> <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">Banking77Classification</h1> <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div> <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div> </div> Dataset composed of online banking queries annotated with their corresponding intents. | | | |---------------|---------------------------------------------| | Task category | t2c | | Domains | Written | | Reference | https://arxiv.org/abs/2003.04807 | ## How to evaluate on this task You can evaluate an embedding model on this dataset using the following code: ```python import mteb task = mteb.get_tasks(["Banking77Classification"]) evaluator = mteb.MTEB(task) model = mteb.get_model(YOUR_MODEL) evaluator.run(model) ``` <!-- Datasets want link to arxiv in readme to autolink dataset with paper --> To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb). ## Citation If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex @inproceedings{casanueva-etal-2020-efficient, address = {Online}, author = {Casanueva, I{\~n}igo and Tem{\v{c}}inas, Tadas and Gerz, Daniela and Henderson, Matthew and Vuli{\'c}, Ivan}, booktitle = {Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI}, doi = {10.18653/v1/2020.nlp4convai-1.5}, editor = {Wen, Tsung-Hsien and Celikyilmaz, Asli and Yu, Zhou and Papangelis, Alexandros and Eric, Mihail and Kumar, Anuj and Casanueva, I{\~n}igo and Shah, Rushin}, month = jul, pages = {38--45}, publisher = {Association for Computational Linguistics}, title = {Efficient Intent Detection with Dual Sentence Encoders}, url = {https://aclanthology.org/2020.nlp4convai-1.5}, year = {2020}, } @article{enevoldsen2025mmtebmassivemultilingualtext, title={MMTEB: Massive Multilingual Text Embedding Benchmark}, author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff}, publisher = {arXiv}, journal={arXiv preprint arXiv:2502.13595}, year={2025}, url={https://arxiv.org/abs/2502.13595}, doi = {10.48550/arXiv.2502.13595}, } @article{muennighoff2022mteb, author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils}, title = {MTEB: Massive Text Embedding Benchmark}, publisher = {arXiv}, journal={arXiv preprint arXiv:2210.07316}, year = {2022}, url = {https://arxiv.org/abs/2210.07316}, doi = {10.48550/ARXIV.2210.07316}, } ``` # Dataset Statistics <details> <summary> Dataset Statistics</summary> The following JSON contains the descriptive statistics from the task.
These can also be obtained using: ```python import mteb task = mteb.get_task("Banking77Classification") desc_stats = task.metadata.descriptive_stats ``` ```json { "test": { "num_samples": 3080, "number_of_characters": 167036, "number_texts_intersect_with_train": 0, "min_text_length": 13, "average_text_length": 54.23246753246753, "max_text_length": 368, "unique_text": 3080, "unique_labels": 77, "labels": { "11": { "count": 40 }, "13": { "count": 40 }, "32": { "count": 40 }, "17": { "count": 40 }, "34": { "count": 40 }, "46": { "count": 40 }, "36": { "count": 40 }, "12": { "count": 40 }, "4": { "count": 40 }, "14": { "count": 40 }, "33": { "count": 40 }, "41": { "count": 40 }, "1": { "count": 40 }, "49": { "count": 40 }, "23": { "count": 40 }, "56": { "count": 40 }, "47": { "count": 40 }, "8": { "count": 40 }, "60": { "count": 40 }, "75": { "count": 40 }, "15": { "count": 40 }, "66": { "count": 40 }, "54": { "count": 40 }, "40": { "count": 40 }, "10": { "count": 40 }, "61": { "count": 40 }, "6": { "count": 40 }, "16": { "count": 40 }, "30": { "count": 40 }, "74": { "count": 40 }, "68": { "count": 40 }, "38": { "count": 40 }, "73": { "count": 40 }, "62": { "count": 40 }, "29": { "count": 40 }, "22": { "count": 40 }, "3": { "count": 40 }, "28": { "count": 40 }, "44": { "count": 40 }, "26": { "count": 40 }, "45": { "count": 40 }, "42": { "count": 40 }, "52": { "count": 40 }, "27": { "count": 40 }, "51": { "count": 40 }, "25": { "count": 40 }, "48": { "count": 40 }, "55": { "count": 40 }, "18": { "count": 40 }, "63": { "count": 40 }, "70": { "count": 40 }, "67": { "count": 40 }, "53": { "count": 40 }, "21": { "count": 40 }, "7": { "count": 40 }, "64": { "count": 40 }, "50": { "count": 40 }, "35": { "count": 40 }, "65": { "count": 40 }, "71": { "count": 40 }, "39": { "count": 40 }, "58": { "count": 40 }, "43": { "count": 40 }, "72": { "count": 40 }, "76": { "count": 40 }, "37": { "count": 40 }, "59": { "count": 40 }, "5": { "count": 40 }, "20": { "count": 40 }, "31": { "count": 40 }, "57": { "count": 40 }, "0": { "count": 40 }, "19": { "count": 40 }, "9": { "count": 40 }, "2": { "count": 40 }, "69": { "count": 40 }, "24": { "count": 40 } } }, "train": { "num_samples": 10003, "number_of_characters": 594916, "number_texts_intersect_with_train": null, "min_text_length": 13, "average_text_length": 59.47375787263821, "max_text_length": 433, "unique_text": 10003, "unique_labels": 77, "labels": { "11": { "count": 153 }, "13": { "count": 139 }, "32": { "count": 112 }, "17": { "count": 167 }, "34": { "count": 166 }, "46": { "count": 143 }, "36": { "count": 126 }, "12": { "count": 112 }, "4": { "count": 127 }, "14": { "count": 112 }, "33": { "count": 118 }, "41": { "count": 82 }, "1": { "count": 110 }, "49": { "count": 115 }, "23": { "count": 35 }, "56": { "count": 111 }, "47": { "count": 149 }, "8": { "count": 157 }, "60": { "count": 97 }, "75": { "count": 180 }, "15": { "count": 187 }, "66": { "count": 171 }, "54": { "count": 129 }, "40": { "count": 98 }, "10": { "count": 59 }, "61": { "count": 146 }, "6": { "count": 181 }, "16": { "count": 168 }, "30": { "count": 121 }, "74": { "count": 121 }, "68": { "count": 102 }, "38": { "count": 106 }, "73": { "count": 135 }, "62": { "count": 103 }, "29": { "count": 121 }, "22": { "count": 86 }, "3": { "count": 87 }, "28": { "count": 182 }, "44": { "count": 105 }, "26": { "count": 173 }, "45": { "count": 159 }, "42": { "count": 121 }, "52": { "count": 169 }, "27": { "count": 133 }, "51": { "count": 162 }, "25": { "count": 153 }, "48": { "count": 148 }, "55": { 
"count": 108 }, "18": { "count": 61 }, "63": { "count": 175 }, "70": { "count": 113 }, "67": { "count": 128 }, "53": { "count": 161 }, "21": { "count": 122 }, "7": { "count": 156 }, "64": { "count": 172 }, "50": { "count": 95 }, "35": { "count": 137 }, "65": { "count": 113 }, "71": { "count": 126 }, "39": { "count": 129 }, "58": { "count": 114 }, "43": { "count": 120 }, "72": { "count": 41 }, "76": { "count": 163 }, "37": { "count": 97 }, "59": { "count": 145 }, "5": { "count": 171 }, "20": { "count": 160 }, "31": { "count": 121 }, "57": { "count": 114 }, "0": { "count": 159 }, "19": { "count": 177 }, "9": { "count": 129 }, "2": { "count": 126 }, "69": { "count": 104 }, "24": { "count": 129 } } } } ``` </details> --- *This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*
Svngoku/CheickAntaDiopOriginOfCivilization
Svngoku
2025-05-03T19:57:11Z
0
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-03T19:57:09Z
null
--- dataset_info: features: - name: chunk_id dtype: string - name: text dtype: string - name: metadata struct: - name: Header 1 dtype: string - name: Header 2 dtype: string - name: image_references sequence: string - name: images_base64 sequence: string - name: start_index dtype: int64 - name: source_filename dtype: string splits: - name: train num_bytes: 12661743 num_examples: 974 download_size: 9905404 dataset_size: 12661743 configs: - config_name: default data_files: - split: train path: data/train-* ---
cchoi1/kodcode-complete_1000_qwen7b_sol_iter0_att10_sol5_debug
cchoi1
2025-05-03T19:55:36Z
18
0
[ "size_categories:n<1K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-04-28T02:31:14Z
null
--- dataset_info: features: - name: mutation_id dtype: int64 - name: task_id dtype: string - name: mutator_prompt dtype: string - name: solver_prompt dtype: string - name: response dtype: string - name: mutation_explanation dtype: string - name: mutation_info dtype: string - name: mutator_score dtype: float64 - name: solution_scores dtype: string - name: solutions dtype: string - name: solutions_explanation dtype: string - name: solutions_info dtype: string splits: - name: train num_bytes: 19396 num_examples: 3 download_size: 39597 dataset_size: 19396 configs: - config_name: default data_files: - split: train path: data/train-* ---
Bretagne/wikiann_br
Bretagne
2025-05-03T19:40:12Z
21
0
[ "task_categories:token-classification", "language:br", "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[ "token-classification" ]
2024-10-16T21:21:02Z
null
--- dataset_info: features: - name: tokens sequence: string - name: ner_tags sequence: int64 splits: - name: train num_bytes: 127019 num_examples: 915 - name: validation num_bytes: 121393 num_examples: 946 - name: test num_bytes: 130972 num_examples: 952 download_size: 120493 dataset_size: 379384 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* language: br task_categories: - token-classification --- ### Description A cleaned version of [WikiAnn](https://huggingface.co/datasets/tner/wikiann): the original version contained leaks and duplications. Instead of 1000 examples per split, the new distribution is therefore as follows: ``` DatasetDict({ train: Dataset({ features: ['tokens', 'ner_tags'], num_rows: 915 }) validation: Dataset({ features: ['tokens', 'ner_tags'], num_rows: 946 }) test: Dataset({ features: ['tokens', 'ner_tags'], num_rows: 952 }) }) ``` ### Label ID The label2id dictionary is available [here](https://huggingface.co/datasets/tner/wikiann/raw/main/dataset/label.json). ```python { "B-LOC": 0, "B-ORG": 1, "B-PER": 2, "I-LOC": 3, "I-ORG": 4, "I-PER": 5, "O": 6 } ```
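A minimal loading sketch with the standard `datasets` API:

```python
from datasets import load_dataset

ds = load_dataset("Bretagne/wikiann_br")
print(ds)              # train/validation/test splits as shown above
print(ds["train"][0])  # {'tokens': [...], 'ner_tags': [...]}
```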
Samarth0710/neurips-2024-peer-reviews
Samarth0710
2025-05-03T18:22:43Z
0
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-03T18:14:28Z
null
--- dataset_info: features: - name: paper_id dtype: string - name: title dtype: string - name: abstract dtype: string - name: pdf_url dtype: string - name: reviews list: - name: confidence dtype: int64 - name: rating dtype: int64 - name: review_id dtype: string - name: review_text dtype: string splits: - name: train num_bytes: 50663353 num_examples: 4236 download_size: 26840387 dataset_size: 50663353 configs: - config_name: default data_files: - split: train path: data/train-* ---
HungVu2003/opt-350m_beta_0.0_alpha_0.0_num-company_2_dataset_1_for_gen_16_v2
HungVu2003
2025-05-03T18:10:38Z
0
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-03T18:10:36Z
null
--- dataset_info: features: - name: question dtype: string splits: - name: train num_bytes: 819598 num_examples: 12500 download_size: 566940 dataset_size: 819598 configs: - config_name: default data_files: - split: train path: data/train-* ---
HungVu2003/opt-350m_beta_0.0_alpha_0.0_num-company_2_dataset_1_for_gen_9_v2
HungVu2003
2025-05-03T17:33:25Z
0
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-03T17:33:23Z
null
--- dataset_info: features: - name: question dtype: string splits: - name: train num_bytes: 820906 num_examples: 12500 download_size: 567597 dataset_size: 820906 configs: - config_name: default data_files: - split: train path: data/train-* ---
HungVu2003/opt-350m_beta_0.0_alpha_0.0_num-company_2_dataset_0_for_gen_8_v2
HungVu2003
2025-05-03T17:28:03Z
0
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-03T17:28:01Z
null
--- dataset_info: features: - name: question dtype: string splits: - name: train num_bytes: 815984 num_examples: 12500 download_size: 564583 dataset_size: 815984 configs: - config_name: default data_files: - split: train path: data/train-* ---
dopaul/simple_pawn_move
dopaul
2025-05-03T17:05:46Z
0
0
[ "task_categories:robotics", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "phosphobot", "so100", "phospho-dk" ]
[ "robotics" ]
2025-05-03T16:58:26Z
null
--- tags: - phosphobot - so100 - phospho-dk task_categories: - robotics --- # simple_pawn_move **This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).** This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
labofsahil/aws-pricing-dataset
labofsahil
2025-05-03T17:00:00Z
231
0
[ "language:en", "license:mit", "size_categories:1M<n<10M", "region:us", "finance", "aws", "pricing" ]
[]
2024-10-22T17:54:07Z
null
--- license: mit language: - en tags: - finance - aws - pricing pretty_name: AWS Pricing Dataset size_categories: - 1M<n<10M configs: - config_name: EC2 data_files: - split: EC2 path: AmazonEC2.csv --- The following data is pulled from the official AWS pricing API and contains pricing data across AWS services. Source: https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/using-price-list-query-api.html Update frequency: automatically updated weekly
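A minimal usage sketch, assuming the `EC2` config declared in the frontmatter above:

```python
from datasets import load_dataset

# The "EC2" config maps to AmazonEC2.csv (see the configs listed above).
ec2 = load_dataset("labofsahil/aws-pricing-dataset", "EC2", split="EC2")
print(ec2[0])
```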
gunnybd01/Consumer_smr
gunnybd01
2025-05-03T16:58:57Z
0
0
[ "region:us" ]
[]
2025-05-02T15:21:02Z
null
--- dataset_info: features: - name: Keys dtype: string - name: reports dtype: string - name: labels dtype: string splits: - name: train num_bytes: 143855291 num_examples: 60000 download_size: 8560814 dataset_size: 143855291 configs: - config_name: default data_files: - split: train path: data/train-* ---
xbilek25/hall_train_36000
xbilek25
2025-05-03T16:41:35Z
0
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:audio", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-03T16:38:01Z
null
--- dataset_info: features: - name: client_id dtype: string - name: path dtype: string - name: audio dtype: audio: sampling_rate: 16000 - name: sentence dtype: string - name: up_votes dtype: int64 - name: down_votes dtype: int64 - name: age dtype: string - name: gender dtype: string - name: accent dtype: string - name: locale dtype: string - name: segment dtype: string - name: variant dtype: string splits: - name: train num_bytes: 7175502581.0 num_examples: 36000 download_size: 6026324637 dataset_size: 7175502581.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
amekerishvili/ATCO2_full_files
amekerishvili
2025-05-03T16:14:01Z
0
0
[ "size_categories:n<1K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-03T12:01:28Z
null
--- dataset_info: features: - name: ID dtype: string - name: audio_file dtype: string - name: start_time dtype: float64 - name: end_time dtype: float64 - name: airport dtype: string - name: channel dtype: string - name: frequency dtype: string - name: time dtype: string - name: waypoints dtype: string - name: callsigns dtype: string - name: ground_truth_raw dtype: string - name: ground_truth dtype: string - name: non_Eng_ground_truth dtype: string - name: tags dtype: string - name: values_tags dtype: string - name: commands_tags dtype: string - name: callsigns_tags dtype: string - name: unnamed_tags dtype: string splits: - name: train num_bytes: 1558206 num_examples: 612 - name: validation num_bytes: 362174 num_examples: 136 - name: test num_bytes: 356108 num_examples: 129 download_size: 397317 dataset_size: 2276488 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* ---
Neelectric/OpenR1-Math-220k_CN-K12_OLMo-2_4096toks
Neelectric
2025-05-03T16:01:00Z
0
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-03T16:00:07Z
null
--- dataset_info: features: - name: problem dtype: string - name: solution dtype: string - name: answer dtype: string - name: problem_type dtype: string - name: question_type dtype: string - name: source dtype: string - name: uuid dtype: string - name: is_reasoning_complete sequence: bool - name: generations sequence: string - name: correctness_math_verify sequence: bool - name: correctness_llama sequence: bool - name: finish_reasons sequence: string - name: correctness_count dtype: int64 - name: messages list: - name: content dtype: string - name: role dtype: string - name: tokenized sequence: sequence: int64 splits: - name: train num_bytes: 3828714350.224059 num_examples: 69132 download_size: 826750631 dataset_size: 3828714350.224059 configs: - config_name: default data_files: - split: train path: data/train-* ---
TIMBER-Lab/Qwen2.5-7B-Instruct-Turbo_labeled_numina_difficulty_162K_10_selected
TIMBER-Lab
2025-05-03T15:55:01Z
0
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-03T07:39:42Z
null
--- dataset_info: features: - name: ids dtype: int64 - name: queries dtype: string - name: samples sequence: string - name: references dtype: string splits: - name: train num_bytes: 183380515 num_examples: 7061 download_size: 62088397 dataset_size: 183380515 configs: - config_name: default data_files: - split: train path: data/train-* ---
FrancophonIA/Glossaire_pilotes_et_personne_services_circulation_aerienne
FrancophonIA
2025-05-03T15:41:36Z
0
0
[ "task_categories:translation", "language:fra", "language:eng", "region:us" ]
[ "translation" ]
2025-05-03T15:40:52Z
null
--- language: - fra - eng viewer: false task_categories: - translation --- > [!NOTE] > Dataset origin: https://publications.gc.ca/site/eng/9.693563/publication.html
FrancophonIA/Glossaire_procedure_parlementaire
FrancophonIA
2025-05-03T15:37:24Z
0
0
[ "task_categories:translation", "language:fra", "language:eng", "region:us" ]
[ "translation" ]
2025-05-03T15:36:48Z
null
--- language: - fra - eng viewer: false task_categories: - translation --- > [!NOTE] > Dataset origin: https://publications.gc.ca/site/eng/9.693563/publication.html
yalhessi/lemexp-task1-v2
yalhessi
2025-05-03T15:17:12Z
105
0
[ "size_categories:1M<n<10M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-04-28T22:30:08Z
null
--- dataset_info: - config_name: lemma_object_afp features: - name: theory_file dtype: string - name: lemma_name dtype: string - name: lemma_command dtype: string - name: lemma_object dtype: string - name: template dtype: string - name: symbols sequence: string - name: types sequence: string - name: defs sequence: string - name: output_key dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 524575870 num_examples: 189443 - name: valid num_bytes: 1377280 num_examples: 500 - name: test num_bytes: 34512980 num_examples: 16362 download_size: 175981715 dataset_size: 560466130 - config_name: lemma_object_full features: - name: theory_file dtype: string - name: lemma_name dtype: string - name: lemma_command dtype: string - name: lemma_object dtype: string - name: template dtype: string - name: symbols sequence: string - name: types sequence: string - name: defs sequence: string - name: output_key dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 662453650 num_examples: 247566 - name: valid num_bytes: 1363645 num_examples: 500 - name: test num_bytes: 50001751 num_examples: 21102 download_size: 222852119 dataset_size: 713819046 - config_name: lemma_object_small features: - name: theory_file dtype: string - name: lemma_name dtype: string - name: lemma_command dtype: string - name: lemma_object dtype: string - name: template dtype: string - name: symbols sequence: string - name: types sequence: string - name: defs sequence: string - name: output_key dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 159914495 num_examples: 57623 - name: valid num_bytes: 1392149 num_examples: 500 - name: test num_bytes: 20273201 num_examples: 4740 download_size: 49435654 dataset_size: 181579845 - config_name: lemma_object_small_2025-04-28 features: - name: theory_file dtype: string - name: lemma_name dtype: string - name: lemma_command dtype: string - name: lemma_object dtype: string - name: template dtype: string - name: symbols sequence: string - name: types sequence: string - name: defs sequence: string - name: output_key dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 136568417 num_examples: 57623 - name: valid num_bytes: 1295729 num_examples: 500 - name: test num_bytes: 15488772 num_examples: 4740 download_size: 40381187 dataset_size: 153352918 - config_name: lemma_object_small_nodefs features: - name: theory_file dtype: string - name: lemma_name dtype: string - name: lemma_command dtype: string - name: lemma_object dtype: string - name: template dtype: string - name: symbols sequence: string - name: types sequence: string - name: defs sequence: string - name: output_key dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 112670558 num_examples: 57623 - name: valid num_bytes: 974837 num_examples: 500 - name: test num_bytes: 13066314 num_examples: 4740 download_size: 30896182 dataset_size: 126711709 - config_name: lemma_object_small_nodefs_old_defs features: - name: theory_file dtype: string - name: lemma_name dtype: string - name: lemma_command dtype: string - name: lemma_object dtype: string - name: template dtype: string - name: symbols sequence: string - name: types sequence: string - name: defs sequence: string - name: output_key dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 102649572 num_examples: 57623 - name: valid num_bytes: 940414 num_examples: 500 - name: test num_bytes: 10838190 num_examples: 4740 download_size: 28305923 dataset_size: 114428176 - config_name: lemma_object_small_notypes features: - name: theory_file dtype: string - name: lemma_name dtype: string - name: lemma_command dtype: string - name: lemma_object dtype: string - name: template dtype: string - name: symbols sequence: string - name: types sequence: string - name: defs sequence: string - name: output_key dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 148589288 num_examples: 57623 - name: valid num_bytes: 1296144 num_examples: 500 - name: test num_bytes: 19256295 num_examples: 4740 download_size: 46926902 dataset_size: 169141727 - config_name: lemma_object_small_notypes_2025-04-28 features: - name: theory_file dtype: string - name: lemma_name dtype: string - name: lemma_command dtype: string - name: lemma_object dtype: string - name: template dtype: string - name: symbols sequence: string - name: types sequence: string - name: defs sequence: string - name: output_key dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 125243210 num_examples: 57623 - name: valid num_bytes: 1199724 num_examples: 500 - name: test num_bytes: 14471866 num_examples: 4740 download_size: 37823877 dataset_size: 140914800 - config_name: lemma_object_small_notypes_old_defs features: - name: theory_file dtype: string - name: lemma_name dtype: string - name: lemma_command dtype: string - name: lemma_object dtype: string - name: template dtype: string - name: symbols sequence: string - name: types sequence: string - name: defs sequence: string - name: output_key dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 128619566 num_examples: 57623 - name: valid num_bytes: 1227828 num_examples: 500 - name: test num_bytes: 14805563 num_examples: 4740 download_size: 39200586 dataset_size: 144652957 - config_name: lemma_object_small_old_defs features: - name: theory_file dtype: string - name: lemma_name dtype: string - name: lemma_command dtype: string - name: lemma_object dtype: string - name: template dtype: string - name: symbols sequence: string - name: types sequence: string - name: defs sequence: string - name: output_key dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 139944773 num_examples: 57623 - name: valid num_bytes: 1323833 num_examples: 500 - name: test num_bytes: 15822469 num_examples: 4740 download_size: 41752856 dataset_size: 157091075 - config_name: template_afp features: - name: theory_file dtype: string - name: lemma_name dtype: string - name: lemma_command dtype: string - name: lemma_object dtype: string - name: template dtype: string - name: symbols sequence: string - name: types sequence: string - name: defs sequence: string - name: output_key dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 518532175 num_examples: 189443 - name: valid num_bytes: 1362151 num_examples: 500 - name: test num_bytes: 34538912 num_examples: 16362 download_size: 170830966 dataset_size: 554433238 - config_name: template_full features: - name: theory_file dtype: string - name: lemma_name dtype: string - name: lemma_command dtype: string - name: lemma_object dtype: string - name: template dtype: string - name: symbols sequence: string - name: types sequence: string - name: defs sequence: string - name: output_key dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 652351629 num_examples: 247566 - name: valid num_bytes: 1342026 num_examples: 500 - name: test num_bytes: 49612749 num_examples: 21102 download_size: 215286609 dataset_size: 703306404 - config_name: template_small features: - name: theory_file dtype: string - name: lemma_name dtype: string - name: lemma_command dtype: string - name: lemma_object dtype: string - name: template dtype: string - name: symbols sequence: string - name: types sequence: string - name: defs sequence: string - name: output_key dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 158976157 num_examples: 57623 - name: valid num_bytes: 1384393 num_examples: 500 - name: test num_bytes: 20164971 num_examples: 4740 download_size: 48429833 dataset_size: 180525521 - config_name: template_small_2025-04-28 features: - name: theory_file dtype: string - name: lemma_name dtype: string - name: lemma_command dtype: string - name: lemma_object dtype: string - name: template dtype: string - name: symbols sequence: string - name: types sequence: string - name: defs sequence: string - name: output_key dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 132536969 num_examples: 57623 - name: valid num_bytes: 1262361 num_examples: 500 - name: test num_bytes: 15073838 num_examples: 4740 download_size: 38078669 dataset_size: 148873168 - config_name: template_small_nodefs features: - name: theory_file dtype: string - name: lemma_name dtype: string - name: lemma_command dtype: string - name: lemma_object dtype: string - name: template dtype: string - name: symbols sequence: string - name: types sequence: string - name: defs sequence: string - name: output_key dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 111732220 num_examples: 57623 - name: valid num_bytes: 967081 num_examples: 500 - name: test num_bytes: 12958084 num_examples: 4740 download_size: 29890359 dataset_size: 125657385 - config_name: template_small_nodefs_old_defs features: - name: theory_file dtype: string - name: lemma_name dtype: string - name: lemma_command dtype: string - name: lemma_object dtype: string - name: template dtype: string - name: symbols sequence: string - name: types sequence: string - name: defs sequence: string - name: output_key dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 101711234 num_examples: 57623 - name: valid num_bytes: 932658 num_examples: 500 - name: test num_bytes: 10729960 num_examples: 4740 download_size: 27300098 dataset_size: 113373852 - config_name: template_small_notypes features: - name: theory_file dtype: string - name: lemma_name dtype: string - name: lemma_command dtype: string - name: lemma_object dtype: string - name: template dtype: string - name: symbols sequence: string - name: types sequence: string - name: defs sequence: string - name: output_key dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 147650950 num_examples: 57623 - name: valid num_bytes: 1288388 num_examples: 500 - name: test num_bytes: 19148065 num_examples: 4740 download_size: 45921081 dataset_size: 168087403 - config_name: template_small_notypes_2025-04-28 features: - name: theory_file dtype: string - name: lemma_name dtype: string - name: lemma_command dtype: string - name: lemma_object dtype: string - name: template dtype: string - name: symbols sequence: string - name: types sequence: string - name: defs sequence: string - name: output_key dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 121211762 num_examples: 57623 - name: valid num_bytes: 1166356 num_examples: 500 - name: test num_bytes: 14056932 num_examples: 4740 download_size: 35521368 dataset_size: 136435050 - config_name: template_small_notypes_old_defs features: - name: theory_file dtype: string - name: lemma_name dtype: string - name: lemma_command dtype: string - name: lemma_object dtype: string - name: template dtype: string - name: symbols sequence: string - name: types sequence: string - name: defs sequence: string - name: output_key dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 127681228 num_examples: 57623 - name: valid num_bytes: 1220072 num_examples: 500 - name: test num_bytes: 14697333 num_examples: 4740 download_size: 38194762 dataset_size: 143598633 - config_name: template_small_old_defs features: - name: theory_file dtype: string - name: lemma_name dtype: string - name: lemma_command dtype: string - name: lemma_object dtype: string - name: template dtype: string - name: symbols sequence: string - name: types sequence: string - name: defs sequence: string - name: output_key dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 139006435 num_examples: 57623 - name: valid num_bytes: 1316077 num_examples: 500 - name: test num_bytes: 15714239 num_examples: 4740 download_size: 40747035 dataset_size: 156036751 configs: - config_name: lemma_object_afp data_files: - split: train path: lemma_object_afp/train-* - split: valid path: lemma_object_afp/valid-* - split: test path: lemma_object_afp/test-* - config_name: lemma_object_full data_files: - split: train path: lemma_object_full/train-* - split: valid path: lemma_object_full/valid-* - split: test path: lemma_object_full/test-* - config_name: lemma_object_small data_files: - split: train path: lemma_object_small/train-* - split: valid path: lemma_object_small/valid-* - split: test path: lemma_object_small/test-* - config_name: lemma_object_small_2025-04-28 data_files: - split: train path: lemma_object_small_2025-04-28/train-* - split: valid path: lemma_object_small_2025-04-28/valid-* - split: test path: lemma_object_small_2025-04-28/test-* - config_name: lemma_object_small_nodefs data_files: - split: train path: lemma_object_small_nodefs/train-* - split: valid path: lemma_object_small_nodefs/valid-* - split: test path: lemma_object_small_nodefs/test-* - config_name: lemma_object_small_nodefs_old_defs data_files: - split: train path: lemma_object_small_nodefs_old_defs/train-* - split: valid path: lemma_object_small_nodefs_old_defs/valid-* - split: test path: lemma_object_small_nodefs_old_defs/test-* - config_name: lemma_object_small_notypes data_files: - split: train path: lemma_object_small_notypes/train-* - split: valid path: lemma_object_small_notypes/valid-* - split: test path: lemma_object_small_notypes/test-* - config_name: lemma_object_small_notypes_2025-04-28 data_files: - split: train path: lemma_object_small_notypes_2025-04-28/train-* - split: valid path: lemma_object_small_notypes_2025-04-28/valid-* - split: test path: lemma_object_small_notypes_2025-04-28/test-* - config_name: lemma_object_small_notypes_old_defs data_files: - split: train path: lemma_object_small_notypes_old_defs/train-* - split: valid path: lemma_object_small_notypes_old_defs/valid-* - split: test path: lemma_object_small_notypes_old_defs/test-* - config_name: lemma_object_small_old_defs data_files: - split: train path: lemma_object_small_old_defs/train-* - split: valid path: lemma_object_small_old_defs/valid-* - split: test path: lemma_object_small_old_defs/test-* - config_name: template_afp data_files: - split: train path: template_afp/train-* - split: valid path: template_afp/valid-* - split: test path: template_afp/test-* - config_name: template_full data_files: - split: train path: template_full/train-* - split: valid path: template_full/valid-* - split: test path: template_full/test-* - config_name: template_small data_files: - split: train path: template_small/train-* - split: valid path: template_small/valid-* - split: test path: template_small/test-* - config_name: template_small_2025-04-28 data_files: - split: train path: template_small_2025-04-28/train-* - split: valid path: template_small_2025-04-28/valid-* - split: test path: template_small_2025-04-28/test-* - config_name: template_small_nodefs data_files: - split: train path: template_small_nodefs/train-* - split: valid path: template_small_nodefs/valid-* - split: test path: template_small_nodefs/test-* - config_name: template_small_nodefs_old_defs data_files: - split: train path: template_small_nodefs_old_defs/train-* - split: valid path: template_small_nodefs_old_defs/valid-* - split: test path: template_small_nodefs_old_defs/test-* - config_name: template_small_notypes data_files: - split: train path: template_small_notypes/train-* - split: valid path: template_small_notypes/valid-* - split: test path: template_small_notypes/test-* - config_name: template_small_notypes_2025-04-28 data_files: - split: train path: template_small_notypes_2025-04-28/train-* - split: valid path: template_small_notypes_2025-04-28/valid-* - split: test path: template_small_notypes_2025-04-28/test-* - config_name: template_small_notypes_old_defs data_files: - split: train path: template_small_notypes_old_defs/train-* - split: valid path: template_small_notypes_old_defs/valid-* - split: test path: template_small_notypes_old_defs/test-* - config_name: template_small_old_defs data_files: - split: train path: template_small_old_defs/train-* - split: valid path: template_small_old_defs/valid-* - split: test path: template_small_old_defs/test-* ---
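The card above appears to describe an Isabelle lemma/template corpus (the `*_afp` configs point at the Archive of Formal Proofs), with twenty configurations sharing one schema. As a minimal sketch of pulling one configuration with the `datasets` library — note the repository id is not shown in this excerpt, so the id below is a placeholder, while the config, split, and column names are read off the metadata above:

```python
from datasets import load_dataset

# "owner/dataset" is a hypothetical placeholder: the actual repository id
# is not visible in this excerpt. Config, split, and column names come
# from the dataset_info block above.
ds = load_dataset("owner/dataset", "lemma_object_small", split="valid")

# Each row pairs an input prompt with a target lemma statement.
row = ds[0]
print(row["theory_file"], row["lemma_name"])
print(row["input"][:200])
```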
FrancophonIA/Le-vocabulaire-s-acclimate
FrancophonIA
2025-05-03T15:12:40Z
6
0
[ "task_categories:translation", "language:fra", "language:eng", "region:us" ]
[ "translation" ]
2025-04-28T20:17:05Z
null
--- language: - fra - eng viewer: false task_categories: - translation ---
> [!NOTE]
> Dataset origin: https://www.culture.gouv.fr/fr/thematiques/langue-francaise-et-langues-de-france/agir-pour-les-langues/moderniser-et-enrichir-la-langue-francaise/nos-publications/Le-vocabulaire-s-acclimate
FrancophonIA/Vocabulaire-de-la-biologie-2017
FrancophonIA
2025-05-03T15:11:38Z
6
0
[ "task_categories:translation", "language:fra", "language:eng", "region:us" ]
[ "translation" ]
2025-04-28T20:15:12Z
null
--- language: - fra - eng viewer: false task_categories: - translation ---
> [!NOTE]
> Dataset origin: https://www.culture.gouv.fr/fr/thematiques/langue-francaise-et-langues-de-france/agir-pour-les-langues/moderniser-et-enrichir-la-langue-francaise/nos-publications/Vocabulaire-de-la-biologie-2017

## Description

The Délégation générale à la langue française et aux langues de France is publishing a Vocabulaire de la biologie for the first time: 611 terms and definitions covering new notions, many of which previously had no French designation.
Maxscha/json-instruct-generation-large
Maxscha
2025-05-03T15:03:37Z
0
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-03T15:03:31Z
null
--- dataset_info: features: - name: schema dtype: string - name: input dtype: string - name: output dtype: string - name: task dtype: string splits: - name: train num_bytes: 99642713 num_examples: 50000 download_size: 31084332 dataset_size: 99642713 configs: - config_name: default data_files: - split: train path: data/train-* ---
FrancophonIA/Vous-pouvez-le-dire-en-francais-Si-tu-veux-la-Paix
FrancophonIA
2025-05-03T15:03:16Z
6
0
[ "task_categories:translation", "language:fra", "language:eng", "region:us" ]
[ "translation" ]
2025-04-28T21:38:49Z
null
--- language: - fra - eng viewer: false task_categories: - translation ---
> [!NOTE]
> Dataset origin: https://www.culture.gouv.fr/fr/thematiques/langue-francaise-et-langues-de-france/agir-pour-les-langues/moderniser-et-enrichir-la-langue-francaise/nos-publications/Vous-pouvez-le-dire-en-francais-Si-tu-veux-la-Paix
FrancophonIA/Vocabulaire-de-l-education-et-de-la-recherche-2013
FrancophonIA
2025-05-03T14:58:25Z
2
0
[ "task_categories:translation", "language:fra", "language:eng", "region:us" ]
[ "translation" ]
2025-04-29T20:38:01Z
null
--- language: - fra - eng viewer: false task_categories: - translation ---
> [!NOTE]
> Dataset origin: https://www.culture.gouv.fr/fr/thematiques/langue-francaise-et-langues-de-france/agir-pour-les-langues/moderniser-et-enrichir-la-langue-francaise/nos-publications/Vocabulaire-de-l-education-et-de-la-recherche-2013
FrancophonIA/Vocabulaire-des-sports-2011
FrancophonIA
2025-05-03T14:47:04Z
3
0
[ "task_categories:translation", "language:fra", "language:eng", "region:us" ]
[ "translation" ]
2025-04-29T20:47:59Z
null
--- language: - fra - eng viewer: false task_categories: - translation ---
> [!NOTE]
> Dataset origin: https://www.culture.gouv.fr/fr/thematiques/langue-francaise-et-langues-de-france/agir-pour-les-langues/moderniser-et-enrichir-la-langue-francaise/nos-publications/Vocabulaire-des-sports-2011
jaeyong2/Math-Qwen3-06B-Ko
jaeyong2
2025-05-03T14:32:35Z
0
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-02T16:11:26Z
null
--- dataset_info: features: - name: content dtype: string - name: response sequence: string splits: - name: train num_bytes: 384553697 num_examples: 2000 download_size: 124040666 dataset_size: 384553697 configs: - config_name: default data_files: - split: train path: data/train-* ---
Kamyar-zeinalipour/llama1b_kg
Kamyar-zeinalipour
2025-05-03T14:23:34Z
17
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-04-07T11:29:56Z
null
--- dataset_info: features: - name: cycle dtype: int64 - name: temperature dtype: float64 - name: top_p dtype: float64 - name: raw_generated_text dtype: string - name: extracted_output dtype: string - name: applied_template_text dtype: string - name: rouge_scores_text dtype: string - name: rouge_scores_triple dtype: string - name: rouge_l_fmeasure_text dtype: string - name: rouge_l_fmeasure_triple dtype: float64 - name: emb_similarity_text dtype: string - name: emb_similarity_triple dtype: float64 - name: combined_similarity_triple dtype: float64 - name: combined_similarity_text dtype: string - name: combined_similarity_triple_diff dtype: float64 - name: input_text dtype: string - name: initial_text dtype: string - name: source_file dtype: string - name: user_content dtype: string - name: assistant_output dtype: string - name: messages list: - name: content dtype: string - name: role dtype: string - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 13131003 num_examples: 1950 - name: test num_bytes: 337153 num_examples: 50 download_size: 5134114 dataset_size: 13468156 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
kothasuhas/llp-gold-37m-1.5m_T32768.0_I32768
kothasuhas
2025-05-03T14:18:00Z
0
0
[ "size_categories:1M<n<10M", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-03T14:17:12Z
null
--- dataset_info: features: - name: text dtype: string - name: p_log_probs dtype: float32 - name: q_log_probs dtype: float32 - name: num_tokens dtype: float32 - name: log_weight dtype: float64 splits: - name: train num_bytes: 3605804917.0 num_examples: 1500000 download_size: 1629512 dataset_size: 3605804917.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
SayantanJoker/Shrutilipi_Hindi_resampled_44100_merged_5
SayantanJoker
2025-05-03T14:08:28Z
38
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:audio", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-01T17:01:51Z
null
--- dataset_info: features: - name: audio dtype: audio - name: transcription dtype: string - name: file_name dtype: string splits: - name: train num_bytes: 29654690917.29186 num_examples: 49839 download_size: 29581345443 dataset_size: 29654690917.29186 configs: - config_name: default data_files: - split: train path: data/train-* ---
anthonyav/so100-lego-v2
anthonyav
2025-05-03T14:08:02Z
126
0
[ "task_categories:robotics", "size_categories:n<1K", "modality:video", "library:datasets", "library:mlcroissant", "region:us", "phosphobot", "so100", "phospho-dk" ]
[ "robotics" ]
2025-04-27T10:25:55Z
null
--- tags: - phosphobot - so100 - phospho-dk task_categories: - robotics ---
# so100-lego-v2

**This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).**

This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
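Since the card states the episodes are directly usable with LeRobot, a minimal loading sketch may help; the import path and attribute names are assumptions based on recent LeRobot releases and may differ between versions:

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# Pulls the recorded episodes (tabular data plus videos) from the Hub by
# repo id. Import path and attributes assumed from recent LeRobot versions.
dataset = LeRobotDataset("anthonyav/so100-lego-v2")
print(f"episodes: {dataset.num_episodes}, frames: {dataset.num_frames}")
```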
gaia-benchmark/results_public
gaia-benchmark
2025-05-03T14:03:54Z
2,615
14
[ "size_categories:n<1K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2023-10-31T16:03:44Z
null
--- dataset_info: config_name: '2023' features: - name: model dtype: string - name: model_family dtype: string - name: system_prompt dtype: string - name: url dtype: string - name: organisation dtype: string - name: score dtype: float64 - name: score_level1 dtype: float64 - name: score_level2 dtype: float64 - name: score_level3 dtype: float64 - name: date dtype: string splits: - name: validation num_bytes: 24245 num_examples: 75 - name: test num_bytes: 21842 num_examples: 102 download_size: 30216 dataset_size: 46087 configs: - config_name: '2023' data_files: - split: validation path: 2023/validation-* - split: test path: 2023/test-* ---
# Dataset Card for "resultspublic"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
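Given the single `'2023'` configuration and the validation/test splits declared above, a small sketch of reading the leaderboard entries (column names taken from the features list; scores are assumed to be populated for every row):

```python
from datasets import load_dataset

# Config name and splits come from the dataset_info block above.
results = load_dataset("gaia-benchmark/results_public", "2023", split="test")

# Each row is one leaderboard submission with overall and per-level scores.
best = max(results, key=lambda row: row["score"])
print(best["model"], best["organisation"], best["score"])
```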
alfonsusrr/DISC-Law-SFT-Alpaca
alfonsusrr
2025-05-03T13:29:36Z
79
0
[ "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "arxiv:2309.11325", "region:us" ]
[]
2025-04-09T16:14:03Z
null
--- dataset_info: features: - name: id dtype: string - name: instruction dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 513113825 num_examples: 257201 - name: test num_bytes: 56839924 num_examples: 28580 download_size: 285914010 dataset_size: 569953749 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
# Processed DISC-Law-SFT Dataset (Alpaca Format)

This repository provides a processed version of the [DISC-Law-SFT dataset](https://huggingface.co/datasets/ShengbinYue/DISC-Law-SFT) for easier use in instruction tuning and aligned language model training. The dataset has been converted into the **Alpaca format**, which is commonly used for supervised fine-tuning of language models on instruction-following tasks.

## Dataset Description

The original DISC-Law-SFT dataset was proposed for developing intelligent legal service systems with large language models. This processed version reorganizes the data into the Alpaca format:

```json
{
  "instruction": "Instruction/question to the model",
  "input": "Optional context or additional input",
  "output": "Expected model response"
}
```

The conversion makes it easier to fine-tune models like LLaMA, Mistral, or other instruction-following LLMs.

## Source Files

The processed dataset is derived from the following files in the original DISC-Law-SFT dataset:

- DISC-Law-SFT-Pair.jsonl
- DISC-Law-SFT-Pair-QA-released.jsonl
- DISC-Law-SFT-Triplet-released.jsonl
- DISC-Law-SFT-Triplet-QA-released.jsonl

These files contain pairs and triplets of legal questions and answers, manually annotated or curated for fine-tuning.

## Citation

If you use this dataset or any derivative of DISC-Law-SFT, please cite the original authors:

```bibtex
@misc{yue2023disclawllm,
  title={DISC-LawLLM: Fine-tuning Large Language Models for Intelligent Legal Services},
  author={Shengbin Yue and Wei Chen and Siyuan Wang and Bingxuan Li and Chenchen Shen and Shujun Liu and Yuxuan Zhou and Yao Xiao and Song Yun and Xuanjing Huang and Zhongyu Wei},
  year={2023},
  eprint={2309.11325},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

@inproceedings{yue2024lawllm,
  title={LawLLM: Intelligent Legal System with Legal Reasoning and Verifiable Retrieval},
  author={Yue, Shengbin and Liu, Shujun and Zhou, Yuxuan and Shen, Chenchen and Wang, Siyuan and Xiao, Yao and Li, Bingxuan and Song, Yun and Shen, Xiaoyu and Chen, Wei and others},
  booktitle={International Conference on Database Systems for Advanced Applications},
  pages={304--321},
  year={2024},
  organization={Springer}
}
```

## License

Refer to the original dataset license for usage restrictions.
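Since the card walks through the Alpaca triple, a short sketch of turning a row into a training prompt may be useful; the repo id, splits, and columns come from the card above, while the prompt template itself is a common convention rather than something the card prescribes:

```python
from datasets import load_dataset

# Repo id, split names, and column names follow the card above.
law_sft = load_dataset("alfonsusrr/DISC-Law-SFT-Alpaca", split="train")

def to_prompt(example: dict) -> str:
    # A generic Alpaca-style template; any instruction-tuning framework
    # that accepts a formatting function can consume this.
    if example["input"]:
        return (f"### Instruction:\n{example['instruction']}\n\n"
                f"### Input:\n{example['input']}\n\n"
                f"### Response:\n{example['output']}")
    return (f"### Instruction:\n{example['instruction']}\n\n"
            f"### Response:\n{example['output']}")

print(to_prompt(law_sft[0])[:300])
```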