Dataset Viewer

datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card |
---|---|---|---|---|---|---|---|---|---|
john-1111/x_dataset_0603159 | john-1111 | 2025-05-05T01:25:33Z | 278 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-25T07:17:19Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** john-1111/x_dataset_0603159
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5G1Mrxdg6y9yfDmFfYJUnfdedoRQuF52WcURHjbJvVFWr1Jj
### Miner Data Compliance Agreement
In uploading this dataset, I am agreeing to the [Macrocosmos Miner Data Compliance Policy](https://github.com/macrocosm-os/data-universe/blob/add-miner-policy/docs/miner_policy.md).
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: mostly English, though other languages appear because the data is collected by a decentralized network of miners.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
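As a minimal sketch of working with these records (assuming the standard Hugging Face `datasets` API, a `train` split, and ISO-formatted `datetime` strings), you could load the data and derive a time-based split:
```python
from datasets import load_dataset

# Load the dataset from the Hugging Face Hub (it is large; consider streaming)
dataset = load_dataset("john-1111/x_dataset_0603159", split="train")

# Each record exposes the fields described above
print(dataset[0])  # text, label, tweet_hashtags, datetime, username_encoded, url_encoded

# Hypothetical time-based split: ISO date strings compare lexicographically
cutoff = "2025-03-01"
train = dataset.filter(lambda row: row["datetime"] < cutoff)
test = dataset.filter(lambda row: row["datetime"] >= cutoff)
```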
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. Use of this dataset is also subject to X's Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{john-11112025datauniversex_dataset_0603159,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={john-1111},
year={2025},
url={https://huggingface.co/datasets/john-1111/x_dataset_0603159},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 4558340
- **Date Range:** 2025-01-02T00:00:00Z to 2025-04-25T00:00:00Z
- **Last Updated:** 2025-05-05T01:25:33Z
### Data Distribution
- Tweets with hashtags: 3.54%
- Tweets without hashtags: 96.46%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 1261593 | 88.67% |
| 2 | #granhermano | 10301 | 0.72% |
| 3 | #riyadh | 9374 | 0.66% |
| 4 | #箱根駅伝 | 8147 | 0.57% |
| 5 | #thameposeriesep9 | 7605 | 0.53% |
| 6 | #tiktok | 6765 | 0.48% |
| 7 | #ad | 5367 | 0.38% |
| 8 | #zelena | 4878 | 0.34% |
| 9 | #smackdown | 4844 | 0.34% |
| 10 | #कबीर_परमेश्वर_निर्वाण_दिवस | 4843 | 0.34% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-25T07:14:13Z | 414446 | 414446 |
| 2025-01-25T07:14:44Z | 453526 | 867972 |
| 2025-01-25T07:15:15Z | 453526 | 1321498 |
| 2025-01-25T07:15:45Z | 453526 | 1775024 |
| 2025-01-25T07:16:15Z | 453526 | 2228550 |
| 2025-01-25T07:16:47Z | 453526 | 2682076 |
| 2025-01-25T07:17:17Z | 453526 | 3135602 |
| 2025-01-25T07:17:48Z | 453526 | 3589128 |
| 2025-02-18T03:40:33Z | 471834 | 4060962 |
| 2025-05-05T01:25:33Z | 497378 | 4558340 |
|
test-gen/mbpp_Qwen2.5-Coder-7B-Instruct_t0.0_n1_generated_tests | test-gen | 2025-05-04T22:50:14Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T22:50:13Z | null | ---
dataset_info:
features:
- name: task_id
dtype: int32
- name: text
dtype: string
- name: code
dtype: string
- name: test_list
sequence: string
- name: test_setup_code
dtype: string
- name: challenge_test_list
sequence: string
- name: verification_info
struct:
- name: language
dtype: string
- name: test_cases
sequence: string
splits:
- name: test
num_bytes: 602744
num_examples: 500
download_size: 233989
dataset_size: 602744
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
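The card itself has no prose, but the schema above suggests how the generated tests could be exercised. A hedged sketch, assuming the entries in `verification_info.test_cases` are executable `assert` statements as in the original MBPP benchmark:
```python
from datasets import load_dataset

ds = load_dataset(
    "test-gen/mbpp_Qwen2.5-Coder-7B-Instruct_t0.0_n1_generated_tests", split="test"
)
row = ds[0]

namespace = {}
exec(row["code"], namespace)  # define the candidate solution
for case in row["verification_info"]["test_cases"]:
    exec(case, namespace)     # assert-style checks raise AssertionError on failure
```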
|
BasedLukas/so101_test_5 | BasedLukas | 2025-05-04T21:20:29Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot",
"so101",
"tutorial"
] | [
"robotics"
] | 2025-05-04T21:20:16Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so101
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101",
"total_episodes": 2,
"total_frames": 1793,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.webcam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
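The `data_path` and `video_path` entries above are Python format strings. A small illustrative sketch of resolving an episode's files from them (the chunking rule `episode_index // chunks_size` is an assumption based on the fields above):
```python
# Templates copied from meta/info.json
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
video_path = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"

chunks_size = 1000
episode_index = 1
episode_chunk = episode_index // chunks_size  # assumed chunk assignment

print(data_path.format(episode_chunk=episode_chunk, episode_index=episode_index))
# data/chunk-000/episode_000001.parquet
print(video_path.format(
    episode_chunk=episode_chunk,
    video_key="observation.images.webcam",
    episode_index=episode_index,
))
# videos/chunk-000/observation.images.webcam/episode_000001.mp4
```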
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
ronenEl/Mistral-PRM-Data-for-ST-History-10 | ronenEl | 2025-05-04T20:45:08Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T20:44:45Z | null | ---
dataset_info:
features:
- name: prev_steps
dtype: string
- name: current_step
dtype: string
- name: label
dtype: int64
- name: problem_id
dtype: int64
- name: solution_id
dtype: int64
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 815588193
num_examples: 909558
- name: validation
num_bytes: 95810881
num_examples: 110553
- name: test
num_bytes: 106356437
num_examples: 118839
download_size: 225288440
dataset_size: 1017755511
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
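There is no prose on this card, but the schema above describes step-level records: the steps so far, the current step, and an integer label (presumably a correctness signal for process reward modeling). Given the ~1 GB size, a hedged sketch using streaming:
```python
from datasets import load_dataset

# Stream instead of downloading the full ~1 GB dataset
ds = load_dataset(
    "ronenEl/Mistral-PRM-Data-for-ST-History-10", split="train", streaming=True
)
for row in ds.take(3):
    print(row["label"], row["current_step"][:80])
```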
|
HungVu2003/opt-350m_beta_0.5_alpha_0.8_num-company_3_dataset_0_for_gen_2 | HungVu2003 | 2025-05-04T20:13:31Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T20:13:30Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 8023196
num_examples: 12498
download_size: 3346167
dataset_size: 8023196
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mteb/RuToxicOKMLCUPClassification | mteb | 2025-05-04T16:31:06Z | 91 | 0 | [
"task_categories:text-classification",
"task_ids:sentiment-analysis",
"task_ids:sentiment-scoring",
"task_ids:sentiment-classification",
"task_ids:hate-speech-detection",
"annotations_creators:derived",
"multilinguality:monolingual",
"language:rus",
"license:unknown",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-classification"
] | 2025-04-19T13:26:57Z | null | ---
annotations_creators:
- derived
language:
- rus
license: unknown
multilinguality: monolingual
task_categories:
- text-classification
task_ids:
- sentiment-analysis
- sentiment-scoring
- sentiment-classification
- hate-speech-detection
dataset_info:
features:
- name: text
dtype: string
- name: toxic
dtype: int64
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 345040
num_examples: 2000
- name: test
num_bytes: 323745
num_examples: 2000
download_size: 347393
dataset_size: 668785
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">RuToxicOKMLCUPMultilabelClassification</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
Every day, users of the Odnoklassniki social network post a huge number of comments of varying tone and subject matter.
| | |
|---------------|---------------------------------------------|
| Task category | t2t |
| Domains | None |
| Reference | https://cups.online/ru/contests/okmlcup2020 |
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["RuToxicOKMLCUPMultilabelClassification"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("RuToxicOKMLCUPMultilabelClassification")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"test": {
"num_samples": 2000,
"number_of_characters": 152400,
"number_texts_intersect_with_train": 0,
"min_text_length": 6,
"average_text_length": 76.2,
"max_text_length": 790,
"unique_texts": 2000,
"min_labels_per_text": 1,
"average_label_per_text": 1.0885,
"max_labels_per_text": 3,
"unique_labels": 4,
"labels": {
"1": {
"count": 1000
},
"0": {
"count": 810
},
"3": {
"count": 275
},
"2": {
"count": 92
}
}
},
"train": {
"num_samples": 2000,
"number_of_characters": 163893,
"number_texts_intersect_with_train": null,
"min_text_length": 5,
"average_text_length": 81.9465,
"max_text_length": 965,
"unique_texts": 2000,
"min_labels_per_text": 1,
"average_label_per_text": 1.093,
"max_labels_per_text": 3,
"unique_labels": 4,
"labels": {
"1": {
"count": 1000
},
"0": {
"count": 824
},
"3": {
"count": 260
},
"2": {
"count": 102
}
}
}
}
```
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
mteb/WikipediaRerankingMultilingual | mteb | 2025-05-04T16:12:04Z | 617 | 0 | [
"task_categories:text-ranking",
"annotations_creators:LM-generated and reviewed",
"multilinguality:multilingual",
"language:ben",
"language:bul",
"language:ces",
"language:dan",
"language:deu",
"language:eng",
"language:fas",
"language:fin",
"language:hin",
"language:ita",
"language:nld",
"language:nor",
"language:por",
"language:ron",
"language:srp",
"language:swe",
"license:cc-by-sa-3.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-ranking"
] | 2025-02-19T09:28:42Z | null | ---
annotations_creators:
- LM-generated and reviewed
language:
- ben
- bul
- ces
- dan
- deu
- eng
- fas
- fin
- hin
- ita
- nld
- nor
- por
- ron
- srp
- swe
license: cc-by-sa-3.0
multilinguality: multilingual
task_categories:
- text-ranking
task_ids: []
dataset_info:
- config_name: bg-corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: test
num_bytes: 9604308
num_examples: 13500
download_size: 4593991
dataset_size: 9604308
- config_name: bg-qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 778020
num_examples: 13500
download_size: 101652
dataset_size: 778020
- config_name: bg-queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 197625
num_examples: 1500
download_size: 96857
dataset_size: 197625
- config_name: bg-top_ranked
features:
- name: query-id
dtype: string
- name: corpus-ids
sequence: string
splits:
- name: test
num_bytes: 468900
num_examples: 1500
download_size: 97689
dataset_size: 468900
- config_name: bn-corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: test
num_bytes: 14497846
num_examples: 13500
download_size: 5486517
dataset_size: 14497846
- config_name: bn-qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 778020
num_examples: 13500
download_size: 101652
dataset_size: 778020
- config_name: bn-queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 222824
num_examples: 1500
download_size: 95032
dataset_size: 222824
- config_name: bn-top_ranked
features:
- name: query-id
dtype: string
- name: corpus-ids
sequence: string
splits:
- name: test
num_bytes: 468900
num_examples: 1500
download_size: 97689
dataset_size: 468900
- config_name: cs-corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: test
num_bytes: 6098076
num_examples: 13500
download_size: 3914545
dataset_size: 6098076
- config_name: cs-qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 778020
num_examples: 13500
download_size: 101652
dataset_size: 778020
- config_name: cs-queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 124465
num_examples: 1500
download_size: 82189
dataset_size: 124465
- config_name: cs-top_ranked
features:
- name: query-id
dtype: string
- name: corpus-ids
sequence: string
splits:
- name: test
num_bytes: 468900
num_examples: 1500
download_size: 97689
dataset_size: 468900
- config_name: da-corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: test
num_bytes: 5309400
num_examples: 13500
download_size: 3172960
dataset_size: 5309400
- config_name: da-qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 778020
num_examples: 13500
download_size: 101652
dataset_size: 778020
- config_name: da-queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 118643
num_examples: 1500
download_size: 73789
dataset_size: 118643
- config_name: da-top_ranked
features:
- name: query-id
dtype: string
- name: corpus-ids
sequence: string
splits:
- name: test
num_bytes: 468900
num_examples: 1500
download_size: 97689
dataset_size: 468900
- config_name: de-corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: test
num_bytes: 6019751
num_examples: 13500
download_size: 3594010
dataset_size: 6019751
- config_name: de-qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 778020
num_examples: 13500
download_size: 101652
dataset_size: 778020
- config_name: de-queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 138167
num_examples: 1500
download_size: 88032
dataset_size: 138167
- config_name: de-top_ranked
features:
- name: query-id
dtype: string
- name: corpus-ids
sequence: string
splits:
- name: test
num_bytes: 468900
num_examples: 1500
download_size: 97689
dataset_size: 468900
- config_name: en-corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: test
num_bytes: 6671388
num_examples: 13500
download_size: 3961948
dataset_size: 6671388
- config_name: en-qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 778020
num_examples: 13500
download_size: 101652
dataset_size: 778020
- config_name: en-queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 134536
num_examples: 1500
download_size: 83004
dataset_size: 134536
- config_name: en-top_ranked
features:
- name: query-id
dtype: string
- name: corpus-ids
sequence: string
splits:
- name: test
num_bytes: 468900
num_examples: 1500
download_size: 97689
dataset_size: 468900
- config_name: fa-corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: test
num_bytes: 8973566
num_examples: 13500
download_size: 4213163
dataset_size: 8973566
- config_name: fa-qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 778020
num_examples: 13500
download_size: 101652
dataset_size: 778020
- config_name: fa-queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 167018
num_examples: 1500
download_size: 85233
dataset_size: 167018
- config_name: fa-top_ranked
features:
- name: query-id
dtype: string
- name: corpus-ids
sequence: string
splits:
- name: test
num_bytes: 468900
num_examples: 1500
download_size: 97689
dataset_size: 468900
- config_name: fi-corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: test
num_bytes: 5866641
num_examples: 13500
download_size: 3485556
dataset_size: 5866641
- config_name: fi-qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 778020
num_examples: 13500
download_size: 101652
dataset_size: 778020
- config_name: fi-queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 117859
num_examples: 1500
download_size: 74406
dataset_size: 117859
- config_name: fi-top_ranked
features:
- name: query-id
dtype: string
- name: corpus-ids
sequence: string
splits:
- name: test
num_bytes: 468900
num_examples: 1500
download_size: 97689
dataset_size: 468900
- config_name: hi-corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: test
num_bytes: 14696552
num_examples: 13500
download_size: 5583513
dataset_size: 14696552
- config_name: hi-qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 778020
num_examples: 13500
download_size: 101652
dataset_size: 778020
- config_name: hi-queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 229970
num_examples: 1500
download_size: 98256
dataset_size: 229970
- config_name: hi-top_ranked
features:
- name: query-id
dtype: string
- name: corpus-ids
sequence: string
splits:
- name: test
num_bytes: 468900
num_examples: 1500
download_size: 97689
dataset_size: 468900
- config_name: it-corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: test
num_bytes: 5899305
num_examples: 13500
download_size: 3566485
dataset_size: 5899305
- config_name: it-qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 778020
num_examples: 13500
download_size: 101652
dataset_size: 778020
- config_name: it-queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 137965
num_examples: 1500
download_size: 84180
dataset_size: 137965
- config_name: it-top_ranked
features:
- name: query-id
dtype: string
- name: corpus-ids
sequence: string
splits:
- name: test
num_bytes: 468900
num_examples: 1500
download_size: 97689
dataset_size: 468900
- config_name: nl-corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: test
num_bytes: 5628451
num_examples: 13500
download_size: 3254369
dataset_size: 5628451
- config_name: nl-qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 778020
num_examples: 13500
download_size: 101652
dataset_size: 778020
- config_name: nl-queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 130098
num_examples: 1500
download_size: 79310
dataset_size: 130098
- config_name: nl-top_ranked
features:
- name: query-id
dtype: string
- name: corpus-ids
sequence: string
splits:
- name: test
num_bytes: 468900
num_examples: 1500
download_size: 97689
dataset_size: 468900
- config_name: no-corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: test
num_bytes: 5603404
num_examples: 13500
download_size: 3361788
dataset_size: 5603404
- config_name: no-qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 778020
num_examples: 13500
download_size: 101652
dataset_size: 778020
- config_name: no-queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 116210
num_examples: 1500
download_size: 72568
dataset_size: 116210
- config_name: no-top_ranked
features:
- name: query-id
dtype: string
- name: corpus-ids
sequence: string
splits:
- name: test
num_bytes: 468900
num_examples: 1500
download_size: 97689
dataset_size: 468900
- config_name: pt-corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: test
num_bytes: 6078548
num_examples: 13500
download_size: 3644877
dataset_size: 6078548
- config_name: pt-qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 778020
num_examples: 13500
download_size: 101652
dataset_size: 778020
- config_name: pt-queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 132902
num_examples: 1500
download_size: 82274
dataset_size: 132902
- config_name: pt-top_ranked
features:
- name: query-id
dtype: string
- name: corpus-ids
sequence: string
splits:
- name: test
num_bytes: 468900
num_examples: 1500
download_size: 97689
dataset_size: 468900
- config_name: ro-corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: test
num_bytes: 5487340
num_examples: 13500
download_size: 3314140
dataset_size: 5487340
- config_name: ro-qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 778020
num_examples: 13500
download_size: 101652
dataset_size: 778020
- config_name: ro-queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 128853
num_examples: 1500
download_size: 80958
dataset_size: 128853
- config_name: ro-top_ranked
features:
- name: query-id
dtype: string
- name: corpus-ids
sequence: string
splits:
- name: test
num_bytes: 468900
num_examples: 1500
download_size: 97689
dataset_size: 468900
- config_name: sr-corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: test
num_bytes: 9362172
num_examples: 13500
download_size: 4727113
dataset_size: 9362172
- config_name: sr-qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 778020
num_examples: 13500
download_size: 101652
dataset_size: 778020
- config_name: sr-queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 173594
num_examples: 1500
download_size: 95366
dataset_size: 173594
- config_name: sr-top_ranked
features:
- name: query-id
dtype: string
- name: corpus-ids
sequence: string
splits:
- name: test
num_bytes: 468900
num_examples: 1500
download_size: 97689
dataset_size: 468900
- config_name: sv-corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: test
num_bytes: 5727305
num_examples: 13500
download_size: 3383922
dataset_size: 5727305
- config_name: sv-qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 778020
num_examples: 13500
download_size: 101652
dataset_size: 778020
- config_name: sv-queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 121800
num_examples: 1500
download_size: 77079
dataset_size: 121800
- config_name: sv-top_ranked
features:
- name: query-id
dtype: string
- name: corpus-ids
sequence: string
splits:
- name: test
num_bytes: 468900
num_examples: 1500
download_size: 97689
dataset_size: 468900
configs:
- config_name: bg-corpus
data_files:
- split: test
path: bg-corpus/test-*
- config_name: bg-qrels
data_files:
- split: test
path: bg-qrels/test-*
- config_name: bg-queries
data_files:
- split: test
path: bg-queries/test-*
- config_name: bg-top_ranked
data_files:
- split: test
path: bg-top_ranked/test-*
- config_name: bn-corpus
data_files:
- split: test
path: bn-corpus/test-*
- config_name: bn-qrels
data_files:
- split: test
path: bn-qrels/test-*
- config_name: bn-queries
data_files:
- split: test
path: bn-queries/test-*
- config_name: bn-top_ranked
data_files:
- split: test
path: bn-top_ranked/test-*
- config_name: cs-corpus
data_files:
- split: test
path: cs-corpus/test-*
- config_name: cs-qrels
data_files:
- split: test
path: cs-qrels/test-*
- config_name: cs-queries
data_files:
- split: test
path: cs-queries/test-*
- config_name: cs-top_ranked
data_files:
- split: test
path: cs-top_ranked/test-*
- config_name: da-corpus
data_files:
- split: test
path: da-corpus/test-*
- config_name: da-qrels
data_files:
- split: test
path: da-qrels/test-*
- config_name: da-queries
data_files:
- split: test
path: da-queries/test-*
- config_name: da-top_ranked
data_files:
- split: test
path: da-top_ranked/test-*
- config_name: de-corpus
data_files:
- split: test
path: de-corpus/test-*
- config_name: de-qrels
data_files:
- split: test
path: de-qrels/test-*
- config_name: de-queries
data_files:
- split: test
path: de-queries/test-*
- config_name: de-top_ranked
data_files:
- split: test
path: de-top_ranked/test-*
- config_name: en-corpus
data_files:
- split: test
path: en-corpus/test-*
- config_name: en-qrels
data_files:
- split: test
path: en-qrels/test-*
- config_name: en-queries
data_files:
- split: test
path: en-queries/test-*
- config_name: en-top_ranked
data_files:
- split: test
path: en-top_ranked/test-*
- config_name: fa-corpus
data_files:
- split: test
path: fa-corpus/test-*
- config_name: fa-qrels
data_files:
- split: test
path: fa-qrels/test-*
- config_name: fa-queries
data_files:
- split: test
path: fa-queries/test-*
- config_name: fa-top_ranked
data_files:
- split: test
path: fa-top_ranked/test-*
- config_name: fi-corpus
data_files:
- split: test
path: fi-corpus/test-*
- config_name: fi-qrels
data_files:
- split: test
path: fi-qrels/test-*
- config_name: fi-queries
data_files:
- split: test
path: fi-queries/test-*
- config_name: fi-top_ranked
data_files:
- split: test
path: fi-top_ranked/test-*
- config_name: hi-corpus
data_files:
- split: test
path: hi-corpus/test-*
- config_name: hi-qrels
data_files:
- split: test
path: hi-qrels/test-*
- config_name: hi-queries
data_files:
- split: test
path: hi-queries/test-*
- config_name: hi-top_ranked
data_files:
- split: test
path: hi-top_ranked/test-*
- config_name: it-corpus
data_files:
- split: test
path: it-corpus/test-*
- config_name: it-qrels
data_files:
- split: test
path: it-qrels/test-*
- config_name: it-queries
data_files:
- split: test
path: it-queries/test-*
- config_name: it-top_ranked
data_files:
- split: test
path: it-top_ranked/test-*
- config_name: nl-corpus
data_files:
- split: test
path: nl-corpus/test-*
- config_name: nl-qrels
data_files:
- split: test
path: nl-qrels/test-*
- config_name: nl-queries
data_files:
- split: test
path: nl-queries/test-*
- config_name: nl-top_ranked
data_files:
- split: test
path: nl-top_ranked/test-*
- config_name: no-corpus
data_files:
- split: test
path: no-corpus/test-*
- config_name: no-qrels
data_files:
- split: test
path: no-qrels/test-*
- config_name: no-queries
data_files:
- split: test
path: no-queries/test-*
- config_name: no-top_ranked
data_files:
- split: test
path: no-top_ranked/test-*
- config_name: pt-corpus
data_files:
- split: test
path: pt-corpus/test-*
- config_name: pt-qrels
data_files:
- split: test
path: pt-qrels/test-*
- config_name: pt-queries
data_files:
- split: test
path: pt-queries/test-*
- config_name: pt-top_ranked
data_files:
- split: test
path: pt-top_ranked/test-*
- config_name: ro-corpus
data_files:
- split: test
path: ro-corpus/test-*
- config_name: ro-qrels
data_files:
- split: test
path: ro-qrels/test-*
- config_name: ro-queries
data_files:
- split: test
path: ro-queries/test-*
- config_name: ro-top_ranked
data_files:
- split: test
path: ro-top_ranked/test-*
- config_name: sr-corpus
data_files:
- split: test
path: sr-corpus/test-*
- config_name: sr-qrels
data_files:
- split: test
path: sr-qrels/test-*
- config_name: sr-queries
data_files:
- split: test
path: sr-queries/test-*
- config_name: sr-top_ranked
data_files:
- split: test
path: sr-top_ranked/test-*
- config_name: sv-corpus
data_files:
- split: test
path: sv-corpus/test-*
- config_name: sv-qrels
data_files:
- split: test
path: sv-qrels/test-*
- config_name: sv-queries
data_files:
- split: test
path: sv-queries/test-*
- config_name: sv-top_ranked
data_files:
- split: test
path: sv-top_ranked/test-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">WikipediaRerankingMultilingual</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
The dataset is derived from Cohere's wikipedia-2023-11 dataset and contains synthetically generated queries.
| | |
|---------------|---------------------------------------------|
| Task category | t2t |
| Domains | Encyclopaedic, Written |
| Reference | https://huggingface.co/datasets/ellamind/wikipedia-2023-11-reranking-multilingual |
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["WikipediaRerankingMultilingual"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@online{wikidump,
author = {Wikimedia Foundation},
title = {Wikimedia Downloads},
url = {https://dumps.wikimedia.org},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("WikipediaRerankingMultilingual")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"test": {
"num_samples": 240000,
"number_of_characters": 83866932,
"num_documents": 216000,
"min_document_length": 100,
"average_document_length": 381.70714351851854,
"max_document_length": 9461,
"unique_documents": 216000,
"num_queries": 24000,
"min_query_length": 7,
"average_query_length": 59.091208333333334,
"max_query_length": 180,
"unique_queries": 24000,
"none_queries": 0,
"num_relevant_docs": 216000,
"min_relevant_docs_per_query": 9,
"average_relevant_docs_per_query": 1.0,
"max_relevant_docs_per_query": 9,
"unique_relevant_docs": 216000,
"num_instructions": null,
"min_instruction_length": null,
"average_instruction_length": null,
"max_instruction_length": null,
"unique_instructions": null,
"num_top_ranked": 24000,
"min_top_ranked_per_query": 9,
"average_top_ranked_per_query": 9.0,
"max_top_ranked_per_query": 9
}
}
```
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
nhagar/c4_urls_en.noclean | nhagar | 2025-05-04T16:11:42Z | 356 | 0 | [
"task_categories:text-generation",
"language:en",
"license:odc-by",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation"
] | 2025-03-09T18:13:45Z | null | ---
dataset_info:
features:
- name: url
dtype: string
- name: domain
dtype: string
splits:
- name: train
num_bytes: 134877251
num_examples: 1396098
download_size: 100774629
dataset_size: 134877251
configs:
- config_name: default
data_files:
- split: train
path: batch*/train-*
license: odc-by
task_categories:
- text-generation
language:
- en
size_categories:
- 10B<n<100B
---
# Dataset Card for c4_urls_en.noclean
This dataset provides the URLs and top-level domains associated with training records in [allenai/c4](https://huggingface.co/datasets/allenai/c4) (English no clean variant). It is part of a [collection of datasets](https://huggingface.co/collections/nhagar/llm-urls-neurips-681698adac0862be6c65c72b) curated to make exploring LLM training datasets more straightforward and accessible.
## Dataset Details
### Dataset Description
This dataset was created by downloading the source data, extracting URLs and top-level domains, and retaining only those record identifiers. In doing so, it allows researchers and practitioners to explore the contents of these training datasets without having to manage terabytes of raw text. You can explore the pipeline used to construct this dataset on [GitHub](https://github.com/NHagar/cc-genealogy).
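For illustration, domain extraction with `tldextract` looks roughly like this (a sketch; the exact pipeline is in the GitHub repository linked above):
```python
import tldextract

url = "https://blog.example.co.uk/2023/post.html"
parts = tldextract.extract(url)
print(parts.registered_domain)  # example.co.uk
```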
- **Curated by:** [Nick Hagar](https://huggingface.co/nhagar) and [Jack Bandy](https://huggingface.co/jackbandy)
- **License:** Same as source dataset
### Dataset Sources
- **Repository:** [allenai/c4](https://huggingface.co/datasets/allenai/c4)
## Uses
This dataset is intended to allow researchers and practitioners to analyze the contents of large LLM training datasets without having to wade through terabytes of unwieldy text data.
### Direct Use
The main use case for these data is to explore the contents of LLM training datasets at scale. This might involve:
- Identifying the most-used websites
- Categorizing URLs to understand domain- or topic-level dataset composition
- Comparing URLs across datasets
- Digging into inclusion/exclusion patterns for a particular website
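For instance, a quick tally of the most common domains (a sketch using the `datasets` streaming API and a 100k-row sample):
```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("nhagar/c4_urls_en.noclean", split="train", streaming=True)
counts = Counter(row["domain"] for row in ds.take(100_000))  # sample for speed
print(counts.most_common(10))
```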
### Out-of-Scope Use
This dataset is not intended to replicate or replace the source data, nor is it intended to enable large-scale scraping of the URLs listed. For source text, refer to the original dataset.
## Dataset Structure
This dataset contains every record with a URL from the source dataset. It contains two columns:
- `url`: The raw URL associated with each record
- `domain`: The top-level domain for each URL, extracted with `tldextract`
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed] |
mteb/NQ-PL | mteb | 2025-05-04T16:11:20Z | 16 | 0 | [
"task_categories:text-retrieval",
"multilinguality:translated",
"source_datasets:mteb/nq",
"language:pol",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2305.19840",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-retrieval"
] | 2025-02-05T18:39:22Z | null | ---
language:
- pol
multilinguality: translated
source_datasets:
- mteb/nq
task_categories:
- text-retrieval
task_ids: []
dataset_info:
- config_name: corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: test
num_bytes: 1481700950
num_examples: 2681468
download_size: 897798855
dataset_size: 1481700950
- config_name: default
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 133323
num_examples: 4201
download_size: 51009
dataset_size: 133323
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 230227
num_examples: 3452
download_size: 157883
dataset_size: 230227
configs:
- config_name: corpus
data_files:
- split: test
path: corpus/test-*
- config_name: default
data_files:
- split: test
path: data/test-*
- config_name: queries
data_files:
- split: test
path: queries/test-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">NQ-PL</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
Natural Questions: A Benchmark for Question Answering Research
| | |
|---------------|---------------------------------------------|
| Task category | t2t |
| Domains | None |
| Reference | https://ai.google.com/research/NaturalQuestions/ |
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["NQ-PL"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@misc{wojtasik2024beirpl,
archiveprefix = {arXiv},
author = {Konrad Wojtasik and Vadim Shishkin and Kacper Wołowiec and Arkadiusz Janz and Maciej Piasecki},
eprint = {2305.19840},
primaryclass = {cs.IR},
title = {BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language},
year = {2024},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("NQ-PL")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"test": {
"num_samples": 2684920,
"number_of_characters": 1349328700,
"num_documents": 2681468,
"min_document_length": 5,
"average_document_length": 503.14302128535564,
"max_document_length": 17008,
"unique_documents": 2681468,
"num_queries": 3452,
"min_query_length": 18,
"average_query_length": 48.31662804171495,
"max_query_length": 111,
"unique_queries": 3452,
"none_queries": 0,
"num_relevant_docs": 4201,
"min_relevant_docs_per_query": 1,
"average_relevant_docs_per_query": 1.2169756662804172,
"max_relevant_docs_per_query": 4,
"unique_relevant_docs": 4201,
"num_instructions": null,
"min_instruction_length": null,
"average_instruction_length": null,
"max_instruction_length": null,
"unique_instructions": null,
"num_top_ranked": null,
"min_top_ranked_per_query": null,
"average_top_ranked_per_query": null,
"max_top_ranked_per_query": null
}
}
```
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
mteb/germanquad-retrieval | mteb | 2025-05-04T16:09:32Z | 67 | 0 | [
"task_categories:text-retrieval",
"task_ids:multiple-choice-qa",
"annotations_creators:human-annotated",
"multilinguality:monolingual",
"language:deu",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:arrow",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2104.12741",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-retrieval"
] | 2024-01-07T20:17:07Z | null | ---
annotations_creators:
- human-annotated
language:
- deu
license: cc-by-4.0
multilinguality: monolingual
task_categories:
- text-retrieval
task_ids:
- multiple-choice-qa
configs:
- config_name: corpus
data_files:
- split: corpus
path: corpus/data-00000-of-00001.arrow
- config_name: queries
data_files:
- split: queries
path: queries/data-00000-of-00001.arrow
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">GermanQuAD-Retrieval</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
Context Retrieval for German Question Answering
| | |
|---------------|---------------------------------------------|
| Task category | t2t |
| Domains | Written, Non-fiction, Web |
| Reference | https://huggingface.co/datasets/deepset/germanquad |
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["GermanQuAD-Retrieval"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@misc{möller2021germanquad,
archiveprefix = {arXiv},
author = {Timo Möller and Julian Risch and Malte Pietsch},
eprint = {2104.12741},
primaryclass = {cs.CL},
title = {GermanQuAD and GermanDPR: Improving Non-English Question Answering and Passage Retrieval},
year = {2021},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("GermanQuAD-Retrieval")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"test": {
"num_samples": 2678,
"number_of_characters": 1045149,
"num_documents": 474,
"min_document_length": 507,
"average_document_length": 1941.090717299578,
"max_document_length": 11647,
"unique_documents": 474,
"num_queries": 2204,
"min_query_length": 15,
"average_query_length": 56.74773139745916,
"max_query_length": 130,
"unique_queries": 2204,
"none_queries": 0,
"num_relevant_docs": 2204,
"min_relevant_docs_per_query": 1,
"average_relevant_docs_per_query": 1.0,
"max_relevant_docs_per_query": 1,
"unique_relevant_docs": 474,
"num_instructions": null,
"min_instruction_length": null,
"average_instruction_length": null,
"max_instruction_length": null,
"unique_instructions": null,
"num_top_ranked": null,
"min_top_ranked_per_query": null,
"average_top_ranked_per_query": null,
"max_top_ranked_per_query": null
}
}
```
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
mteb/XNLIV2 | mteb | 2025-05-04T16:09:29Z | 351 | 0 | [
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"annotations_creators:expert-annotated",
"multilinguality:translated",
"language:asm",
"language:ben",
"language:bho",
"language:ell",
"language:guj",
"language:kan",
"language:mar",
"language:ory",
"language:pan",
"language:rus",
"language:san",
"language:tam",
"language:tur",
"license:unknown",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2301.06527",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-classification"
] | 2024-11-30T15:15:38Z | null | ---
annotations_creators:
- expert-annotated
language:
- asm
- ben
- bho
- ell
- guj
- kan
- mar
- ory
- pan
- rus
- san
- tam
- tur
license: unknown
multilinguality: translated
task_categories:
- text-classification
task_ids:
- semantic-similarity-classification
tags:
- mteb
- text
dataset_info:
- config_name: assamese
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: labels
dtype: int64
splits:
- name: test
num_bytes: 565556
num_examples: 1365
download_size: 230705
dataset_size: 565556
- config_name: bengali
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: labels
dtype: int64
splits:
- name: test
num_bytes: 567227
num_examples: 1365
download_size: 223053
dataset_size: 567227
- config_name: bhojpuri
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: labels
dtype: int64
splits:
- name: test
num_bytes: 549145
num_examples: 1365
download_size: 220031
dataset_size: 549145
- config_name: greek
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: labels
dtype: int64
splits:
- name: test
num_bytes: 446843
num_examples: 1365
download_size: 224614
dataset_size: 446843
- config_name: gujrati
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: labels
dtype: int64
splits:
- name: test
num_bytes: 550823
num_examples: 1365
download_size: 224504
dataset_size: 550823
- config_name: kannada
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: labels
dtype: int64
splits:
- name: test
num_bytes: 622208
num_examples: 1365
download_size: 239158
dataset_size: 622208
- config_name: marathi
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: labels
dtype: int64
splits:
- name: test
num_bytes: 569028
num_examples: 1365
download_size: 225578
dataset_size: 569028
- config_name: odiya
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: labels
dtype: int64
splits:
- name: test
num_bytes: 571151
num_examples: 1365
download_size: 228006
dataset_size: 571151
- config_name: punjabi
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: labels
dtype: int64
splits:
- name: test
num_bytes: 565812
num_examples: 1365
download_size: 224326
dataset_size: 565812
- config_name: russian
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: labels
dtype: int64
splits:
- name: test
num_bytes: 418863
num_examples: 1365
download_size: 213532
dataset_size: 418863
- config_name: sanskrit
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: labels
dtype: int64
splits:
- name: test
num_bytes: 598335
num_examples: 1365
download_size: 235984
dataset_size: 598335
- config_name: tamil
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: labels
dtype: int64
splits:
- name: test
num_bytes: 676943
num_examples: 1365
download_size: 245022
dataset_size: 676943
- config_name: turkish
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: labels
dtype: int64
splits:
- name: test
num_bytes: 246707
num_examples: 1365
download_size: 156292
dataset_size: 246707
configs:
- config_name: assamese
data_files:
- split: test
path: assamese/test-*
- config_name: bengali
data_files:
- split: test
path: bengali/test-*
- config_name: bhojpuri
data_files:
- split: test
path: bhojpuri/test-*
- config_name: greek
data_files:
- split: test
path: greek/test-*
- config_name: gujrati
data_files:
- split: test
path: gujrati/test-*
- config_name: kannada
data_files:
- split: test
path: kannada/test-*
- config_name: marathi
data_files:
- split: test
path: marathi/test-*
- config_name: odiya
data_files:
- split: test
path: odiya/test-*
- config_name: punjabi
data_files:
- split: test
path: punjabi/test-*
- config_name: russian
data_files:
- split: test
path: russian/test-*
- config_name: sanskrit
data_files:
- split: test
path: sanskrit/test-*
- config_name: tamil
data_files:
- split: test
path: tamil/test-*
- config_name: turkish
data_files:
- split: test
path: turkish/test-*
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">XNLIV2</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
This is a subset of 'XNLI 2.0: Improving XNLI dataset and performance on Cross Lingual Understanding', covering languages that were not part of the original XNLI, plus three (verified) languages that are not strongly covered in MTEB.
| | |
|---------------|---------------------------------------------|
| Task category | t2t |
| Domains | Non-fiction, Fiction, Government, Written |
| Reference | https://arxiv.org/pdf/2301.06527 |
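Each language is a separate config (see the YAML header above), so individual subsets can also be inspected directly with `datasets`; a minimal sketch:
```python
from datasets import load_dataset

# pick any config listed above, e.g. "turkish" or "assamese"
turkish = load_dataset("mteb/XNLIV2", "turkish", split="test")
print(turkish[0])  # {'sentence1': ..., 'sentence2': ..., 'labels': 0 or 1}
```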
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["XNLIV2"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@inproceedings{upadhyay2023xnli,
author = {Upadhyay, Ankit Kumar and Upadhya, Harsit Kumar},
booktitle = {2023 IEEE 8th International Conference for Convergence in Technology (I2CT)},
organization = {IEEE},
pages = {1--6},
title = {XNLI 2.0: Improving XNLI dataset and performance on Cross Lingual Understanding (XLU)},
year = {2023},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("XNLIV2")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"test": {
"num_samples": 17745,
"number_of_characters": 2778287,
"unique_pairs": 17745,
"min_sentence1_length": 5,
"avg_sentence1_length": 105.99329388560157,
"max_sentence1_length": 339,
"unique_sentence1": 14234,
"min_sentence2_length": 8,
"avg_sentence2_length": 50.57402085094393,
"max_sentence2_length": 162,
"unique_sentence2": 17745,
"unique_labels": 2,
"labels": {
"0": {
"count": 8879
},
"1": {
"count": 8866
}
}
}
}
```
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
mteb/DutchBookReviewSentimentClassification | mteb | 2025-05-04T16:08:19Z | 10 | 0 | [
"task_categories:text-classification",
"task_ids:sentiment-analysis",
"task_ids:sentiment-scoring",
"task_ids:sentiment-classification",
"task_ids:hate-speech-detection",
"annotations_creators:derived",
"multilinguality:monolingual",
"language:nld",
"license:cc-by-nc-sa-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1910.00896",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-classification"
] | 2024-12-21T12:08:33Z | null | ---
annotations_creators:
- derived
language:
- nld
license: cc-by-nc-sa-4.0
multilinguality: monolingual
task_categories:
- text-classification
task_ids:
- sentiment-analysis
- sentiment-scoring
- sentiment-classification
- hate-speech-detection
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 29496321
num_examples: 20028
- name: test
num_bytes: 3246239
num_examples: 2224
- name: unsupervised
num_bytes: 152732991
num_examples: 96264
download_size: 116070515
dataset_size: 185475551
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: unsupervised
path: data/unsupervised-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">DutchBookReviewSentimentClassification</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
A Dutch book review dataset for sentiment classification.
| | |
|---------------|---------------------------------------------|
| Task category | t2c |
| Domains | Reviews, Written |
| Reference | https://github.com/benjaminvdb/DBRD |
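For direct inspection outside of `mteb`, the splits can be loaded with `datasets`; a minimal sketch:
```python
from datasets import load_dataset

dbrd = load_dataset("mteb/DutchBookReviewSentimentClassification")
print(dbrd["train"].num_rows, dbrd["test"].num_rows)  # 20028 and 2224, per the statistics below
print(dbrd["test"][0]["text"][:100])  # an additional unlabeled "unsupervised" split is also available
```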
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["DutchBookReviewSentimentClassification"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@article{DBLP:journals/corr/abs-1910-00896,
archiveprefix = {arXiv},
author = {van der Burgh, Benjamin and
Verberne, Suzan},
bibsource = {dblp computer science bibliography, https://dblp.org},
biburl = {https://dblp.org/rec/journals/corr/abs-1910-00896.bib},
eprint = {1910.00896},
journal = {CoRR},
timestamp = {Fri, 04 Oct 2019 12:28:06 +0200},
title = {The merits of Universal Language Model Fine-tuning for Small Datasets
- a case with Dutch book reviews},
url = {http://arxiv.org/abs/1910.00896},
volume = {abs/1910.00896},
year = {2019},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("DutchBookReviewSentimentClassification")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"test": {
"num_samples": 2224,
"number_of_characters": 3209177,
"number_texts_intersect_with_train": 0,
"min_text_length": 4,
"average_text_length": 1442.9752697841727,
"max_text_length": 11140,
"unique_text": 2224,
"unique_labels": 2,
"labels": {
"1": {
"count": 1112
},
"0": {
"count": 1112
}
}
},
"train": {
"num_samples": 20028,
"number_of_characters": 29162515,
"number_texts_intersect_with_train": null,
"min_text_length": 4,
"average_text_length": 1456.0872278809666,
"max_text_length": 22676,
"unique_text": 20028,
"unique_labels": 2,
"labels": {
"1": {
"count": 10014
},
"0": {
"count": 10014
}
}
}
}
```
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
mteb/HotelReviewSentimentClassification | mteb | 2025-05-04T16:07:41Z | 11 | 0 | [
"task_categories:text-classification",
"task_ids:sentiment-analysis",
"task_ids:sentiment-scoring",
"task_ids:sentiment-classification",
"task_ids:hate-speech-detection",
"annotations_creators:derived",
"multilinguality:monolingual",
"language:ara",
"license:unknown",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-classification"
] | 2024-12-21T10:42:59Z | null | ---
annotations_creators:
- derived
language:
- ara
license: unknown
multilinguality: monolingual
task_categories:
- text-classification
task_ids:
- sentiment-analysis
- sentiment-scoring
- sentiment-classification
- hate-speech-detection
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 536128
num_examples: 2048
download_size: 274359
dataset_size: 536128
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">HotelReviewSentimentClassification</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
HARD is a dataset of Arabic hotel reviews collected from the Booking.com website.
| | |
|---------------|---------------------------------------------|
| Task category | t2c |
| Domains | Reviews, Written |
| Reference | https://link.springer.com/chapter/10.1007/978-3-319-67056-0_3 |
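Only a `train` split is provided; a minimal sketch for checking the class balance with `datasets`:
```python
from collections import Counter
from datasets import load_dataset

hard = load_dataset("mteb/HotelReviewSentimentClassification", split="train")
print(Counter(hard["label"]))  # four rating classes; exact counts are in the statistics below
```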
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["HotelReviewSentimentClassification"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@article{elnagar2018hotel,
author = {Elnagar, Ashraf and Khalifa, Yasmin S and Einea, Anas},
journal = {Intelligent natural language processing: Trends and applications},
pages = {35--52},
publisher = {Springer},
title = {Hotel Arabic-reviews dataset construction for sentiment analysis applications},
year = {2018},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("HotelReviewSentimentClassification")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"train": {
"num_samples": 2048,
"number_of_characters": 282368,
"number_texts_intersect_with_train": null,
"min_text_length": 11,
"average_text_length": 137.875,
"max_text_length": 2698,
"unique_text": 2044,
"unique_labels": 4,
"labels": {
"4": {
"count": 512
},
"3": {
"count": 512
},
"0": {
"count": 279
},
"1": {
"count": 745
}
}
}
}
```
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
rainbowbridge/x_dataset_62648 | rainbowbridge | 2025-05-04T16:04:40Z | 953 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-27T08:03:15Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** rainbowbridge/x_dataset_62648
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5GmwRWW178RdVKEfua2F8uM7JGj6ctCN8fXwiq1aY9mYUJsB
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: Datasets are mostly English, but can be multilingual due to decentralized ways of creation.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
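As a minimal sketch of such a split (streaming avoids downloading the full dataset; the split name and the ISO-style `datetime` format are assumptions based on this card):
```python
from datasets import load_dataset

ds = load_dataset("rainbowbridge/x_dataset_62648", split="train", streaming=True)
# keep only tweets posted from February 2025 onward
recent = (row for row in ds if row["datetime"] >= "2025-02-01")
print(next(recent)["text"])
```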
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{rainbowbridge2025datauniversex_dataset_62648,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={rainbowbridge},
year={2025},
url={https://huggingface.co/datasets/rainbowbridge/x_dataset_62648},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 48799192
- **Date Range:** 2025-01-21T00:00:00Z to 2025-02-08T00:00:00Z
- **Last Updated:** 2025-02-13T20:22:55Z
### Data Distribution
- Tweets with hashtags: 46.38%
- Tweets without hashtags: 53.62%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 26164065 | 53.62% |
| 2 | #riyadh | 360980 | 0.74% |
| 3 | #zelena | 259485 | 0.53% |
| 4 | #tiktok | 220229 | 0.45% |
| 5 | #ad | 130307 | 0.27% |
| 6 | #bbb25 | 128842 | 0.26% |
| 7 | #jhope_at_galadespiècesjaunes | 111027 | 0.23% |
| 8 | #bbmzansi | 71199 | 0.15% |
| 9 | #pr | 69792 | 0.14% |
| 10 | #yahooニュース | 67778 | 0.14% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-27T08:03:45Z | 1599427 | 1599427 |
| 2025-01-30T20:06:41Z | 9852425 | 11451852 |
| 2025-02-03T08:09:26Z | 7943066 | 19394918 |
| 2025-02-06T20:15:04Z | 11440012 | 30834930 |
| 2025-02-10T08:19:07Z | 9278875 | 40113805 |
| 2025-02-13T20:22:55Z | 8685387 | 48799192 |
|
HungVu2003/opt-350m_beta_1.0_alpha_0.2_num-company_2_dataset_0_for_gen_16_v2 | HungVu2003 | 2025-05-04T15:51:47Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T15:51:45Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 1998629
num_examples: 13750
download_size: 1093688
dataset_size: 1998629
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nhagar/fineweb-2_urls | nhagar | 2025-05-04T15:50:27Z | 67 | 0 | [
"task_categories:text-generation",
"license:odc-by",
"size_categories:10B<n<100B",
"region:us"
] | [
"text-generation"
] | 2025-04-23T19:09:53Z | null | ---
license: odc-by
task_categories:
- text-generation
size_categories:
- 10B<n<100B
---
# Dataset Card for fineweb-2_urls
This dataset provides the URLs and top-level domains associated with training records in [HuggingFaceFW/fineweb-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2). It is part of a [collection of datasets](https://huggingface.co/collections/nhagar/llm-urls-neurips-681698adac0862be6c65c72b) curated to make exploring LLM training datasets more straightforward and accessible.
## Dataset Details
### Dataset Description
This dataset was created by downloading the source data, extracting URLs and top-level domains, and retaining only those record identifiers. In doing so, it allows researchers and practitioners to explore the contents of these training datasets without having to manage terabytes of raw text. You can explore the pipeline used to construct this dataset on [GitHub](https://github.com/NHagar/cc-genealogy).
- **Curated by:** [Nick Hagar](https://huggingface.co/nhagar) and [Jack Bandy](https://huggingface.co/jackbandy)
- **License:** Same as source dataset
### Dataset Sources
- **Repository:** [HuggingFaceFW/fineweb-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2)
## Uses
This dataset is intended to allow researchers and practitioners to analyze the contents of large LLM training datasets without having to wade through terabytes of unwieldy text data.
### Direct Use
The main use case for these data is to explore the contents of LLM training datasets at scale. This might involve:
- Identifying the most-used websites (see the sketch after this list)
- Categorizing URLs to understand domain- or topic-level dataset composition
- Comparing URLs across datasets
- Digging into inclusion/exclusion patterns for a particular website
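For the first of these, a minimal sketch with `polars` (the shard filename is illustrative; check the repository's file listing for real paths):
```python
import polars as pl

# count the most common domains in one parquet shard
df = pl.read_parquet("part-00000.parquet")
top = df.group_by("domain").len().sort("len", descending=True).head(20)
print(top)
```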
### Out-of-Scope Use
This dataset is not intended to replicate or replace the source data, nor is it intended to enable large-scale scraping of the URLs listed. For source text, refer to the original dataset.
## Dataset Structure
This dataset contains every record with a URL from the source dataset. It contains two columns:
- `url`: The raw URL associated with each record
- `domain`: The top-level domain for each URL, extracted with `tldextract` (see the sketch below)
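As a sketch of how such a value is derived with `tldextract` (the exact pipeline code lives in the GitHub repository linked above):
```python
import tldextract

ext = tldextract.extract("https://news.example.co.uk/2024/01/story.html")
print(ext.registered_domain)  # "example.co.uk"; the kind of value assumed in the domain column
```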
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed] |
zhengbang0707/Llama3.1-8B-IT_M-DPO_v2_30k | zhengbang0707 | 2025-05-04T13:48:49Z | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-04T02:57:25Z | null | ---
dataset_info:
features:
- name: trajectory
list:
- name: content
dtype: string
- name: role
dtype: string
- name: trajectory_reward
sequence: float64
splits:
- name: train
num_bytes: 11791459
num_examples: 500
download_size: 3416106
dataset_size: 11791459
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rainbowbridge/x_dataset_55757 | rainbowbridge | 2025-05-04T13:11:20Z | 884 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-29T00:12:22Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** rainbowbridge/x_dataset_55757
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5DMFuv1TnSV1kvrVpcTZShpj1cSjUAdCLmvtEecDPP6mi9dp
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: Datasets are mostly English, but can be multilingual due to decentralized ways of creation.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{rainbowbridge2025datauniversex_dataset_55757,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={rainbowbridge},
year={2025},
url={https://huggingface.co/datasets/rainbowbridge/x_dataset_55757},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 37470244
- **Date Range:** 2025-01-22T00:00:00Z to 2025-02-13T00:00:00Z
- **Last Updated:** 2025-02-18T20:41:53Z
### Data Distribution
- Tweets with hashtags: 39.63%
- Tweets without hashtags: 60.37%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 22620167 | 60.37% |
| 2 | #riyadh | 268021 | 0.72% |
| 3 | #zelena | 201584 | 0.54% |
| 4 | #tiktok | 145532 | 0.39% |
| 5 | #bbb25 | 90347 | 0.24% |
| 6 | #ad | 86092 | 0.23% |
| 7 | #jhope_at_galadespiècesjaunes | 85923 | 0.23% |
| 8 | #transferlerlebirliktezafere | 79350 | 0.21% |
| 9 | #theheartkillersep10 | 55726 | 0.15% |
| 10 | #grammys | 51671 | 0.14% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-29T00:13:22Z | 2565348 | 2565348 |
| 2025-02-01T12:16:08Z | 8200262 | 10765610 |
| 2025-02-05T00:18:46Z | 7053334 | 17818944 |
| 2025-02-08T12:22:11Z | 8374018 | 26192962 |
| 2025-02-12T00:28:20Z | 9601849 | 35794811 |
| 2025-02-18T05:40:52Z | 855725 | 36650536 |
| 2025-02-18T20:41:53Z | 819708 | 37470244 |
|
Kallia/stock-news-summaries | Kallia | 2025-05-04T10:27:28Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-04T10:27:25Z | null | ---
dataset_info:
features:
- name: article
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 7220091
num_examples: 2680
download_size: 4340695
dataset_size: 7220091
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AprilBoy/interview-assesment-questions | AprilBoy | 2025-05-04T09:10:09Z | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-04T08:36:56Z | null | ---
license: apache-2.0
---
|
taetae030/fin-term-instruct | taetae030 | 2025-05-04T08:12:41Z | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"korean",
"finance",
"chatbot",
"instruction",
"question-answering"
] | [] | 2025-05-04T05:49:38Z | null | ---
license: apache-2.0
tags:
- korean
- finance
- chatbot
- instruction
- question-answering
---
# 📘 fin-term-instruct: A Korean Financial Terminology Q&A Dataset
`fin-term-instruct` is an **instruct-style question-answering dataset specialized for explaining Korean financial terminology**.
It was built for tuning large language models (LLMs) such as Meta's LLaMA series into Korean financial chatbots.
---
## 📦 Original Source: AI Hub
This data was built on top of the **"Financial and Legal Document Machine Reading Comprehension Data"** from AI Hub.
- 📂 Original source: [AI Hub – Financial and Legal Document Machine Reading Comprehension Data](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=71610)
- **Year built**: 2022
- **Total volume**: 400,000 records
- **Data format**: JSON (passage - question - answer structure)
### 🔍 Scope of Use
- Only the **finance/economics domain (about 17.3%)** of the full data was selected for use
- Reworked from the original MRC form into an **instruction-style QA format**
- Unified into a concise answer style through GPT-based summarization and cleaning
### 🏛 Source Institutions
- Bank of Korea
- Financial Services Commission
- Financial Supervisory Service
- National Assembly Research Service
- Ministry of Government Legislation
- Korea Institute of Finance, among others
---
## 📑 Data Structure
Uses the Alpaca-style instruction format:
- `instruction`: the question (a natural-language sentence)
- `input`: context (omitted in this dataset)
- `output`: a short, accurate answer to the question
### 📋 Examples (translated from Korean)
| instruction | input | output |
|------------------------------------------------------------------------|-------|---------------------------|
| What does the Bank of Korea carry out every year to prevent risks that may arise in the course of its operations? | | Risk control self-assessment |
| Which institution strengthened its data science program and ran training in 2021? | | Bank of Korea |
| What system was created for data management and use in the era of the digital economy? | | Data governance regulations |
### 📂 JSON Sample
```json
{
  "instruction": "디지털 혁신을 위한 경제전망 시스템을 만들 때 이용한 기술은 뭐야",
  "input": "",
  "output": "인공지능"
}
```
(Roughly: "What technology was used when building the economic forecasting system for digital innovation?" → "Artificial intelligence".)
yoihibino/so100_complete | yoihibino | 2025-05-04T07:17:33Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-05-04T07:14:47Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 20,
"total_frames": 4921,
"total_tasks": 1,
"total_videos": 0,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:20"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": null,
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.cam1": {
"dtype": "image",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": null
},
"observation.images.cam2": {
"dtype": "image",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
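A minimal sketch of loading the episodes, assuming the standard LeRobot dataset API (the exact import path may differ across LeRobot versions):
```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

ds = LeRobotDataset("yoihibino/so100_complete")
frame = ds[0]
print(frame["action"].shape)                   # torch.Size([6]), matching the feature spec above
print(frame["observation.images.cam1"].shape)  # image tensor, typically channel-first (3, 480, 640)
```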
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
juni3227/so100_test06 | juni3227 | 2025-05-04T06:33:49Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-05-04T06:33:38Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 5,
"total_frames": 2735,
"total_tasks": 1,
"total_videos": 10,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:5"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.airial": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.right_follower": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
dgambettaphd/D_llm2_gen4_WXS_doc1000_synt64_lr1e-04_acm_FRESH | dgambettaphd | 2025-05-04T05:35:40Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-04T05:35:36Z | null | ---
dataset_info:
features:
- name: id_doc
dtype: int64
- name: text
dtype: string
- name: dataset
dtype: string
- name: gen
dtype: int64
- name: synt
dtype: int64
- name: MPP
dtype: float64
splits:
- name: train
num_bytes: 11487388
num_examples: 20000
download_size: 6936359
dataset_size: 11487388
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
flyingbugs/OpenR1-Math-220k-pruned-keep-0.75-end-start-0.0 | flyingbugs | 2025-05-04T05:15:27Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-04T05:14:21Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: problem_type
dtype: string
- name: question_type
dtype: string
- name: source
dtype: string
- name: uuid
dtype: string
- name: is_reasoning_complete
sequence: bool
- name: generations
sequence: string
- name: correctness_math_verify
sequence: bool
- name: correctness_llama
sequence: bool
- name: finish_reasons
sequence: string
- name: correctness_count
dtype: int64
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 4693668410
num_examples: 93733
download_size: 2033374084
dataset_size: 4693668410
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_0.2_num-company_2_dataset_1_for_gen_5_v2 | HungVu2003 | 2025-05-04T04:33:17Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-04T04:33:16Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2292678
num_examples: 13750
download_size: 1031494
dataset_size: 2292678
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.6_num-company_3_dataset_1_for_gen_16 | HungVu2003 | 2025-05-04T03:47:06Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-04T03:47:05Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2447771
num_examples: 12500
download_size: 1301121
dataset_size: 2447771
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ma921/oasst1-filtered | ma921 | 2025-05-04T03:04:05Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-04T03:04:03Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 36308251.51736613
num_examples: 16419
- name: test
num_bytes: 1922678.4789915967
num_examples: 872
download_size: 18327886
dataset_size: 38230929.99635773
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
marcuscedricridia/OpenCodeInstruct-1000-sample | marcuscedricridia | 2025-05-04T02:55:33Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-04T02:55:11Z | null | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 1871144.295
num_examples: 1000
download_size: 823543
dataset_size: 1871144.295
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
zerostratos/music_test | zerostratos | 2025-05-04T02:00:04Z | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:arrow",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-05-04T01:55:13Z | null | ---
license: apache-2.0
---
|
mlfoundations-dev/mix_avg_domain | mlfoundations-dev | 2025-05-04T01:58:57Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-04T01:54:40Z | null | ---
dataset_info:
features:
- name: instruction_seed
dtype: string
- name: _source
dtype: string
- name: gpt41_mini_response
dtype: string
- name: __original_row_idx
dtype: int64
- name: length
dtype: int64
- name: domain
dtype: string
- name: r1_response
dtype: string
- name: r1_reasoning_content
dtype: string
- name: extract_solution
dtype: string
- name: url
dtype: string
- name: filename
dtype: string
- name: success
dtype: bool
- name: page_count
dtype: int64
- name: page_number
dtype: int64
- name: question_choices_solutions
dtype: string
- name: extracted_question
dtype: string
- name: extracted_answer_choices
sequence: string
- name: matched_solution
dtype: string
- name: qa_validation_outputs
dtype: bool
- name: classifier_reasoning
dtype: string
- name: is_organic_chemistry
dtype: bool
- name: ms_id
dtype: int64
- name: reasoning
dtype: string
- name: deepseek_solution
dtype: string
- name: final_reasoning_trace
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: id
dtype: string
- name: output
dtype: string
- name: source
dtype: string
- name: license
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
- name: difficulty
dtype: int64
- name: solution
dtype: string
- name: index
dtype: string
- name: difficulty_reasoning
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: response_seed
dtype: string
splits:
- name: train
num_bytes: 12328252550.0
num_examples: 94797
download_size: 5254951315
dataset_size: 12328252550.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ParkSY/data_nerf_moreconcept_4styles_randomsample_anything_depthmap_normalmap | ParkSY | 2025-05-04T01:28:45Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-04T01:28:41Z | null | ---
dataset_info:
features:
- name: input_image
dtype: string
- name: edit_prompt
dtype: string
- name: edited_image
dtype: string
- name: label
dtype: int64
- name: depthmap
dtype: string
- name: normal_map
dtype: string
splits:
- name: train
num_bytes: 220509
num_examples: 438
download_size: 21321
dataset_size: 220509
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_0.2_num-company_2_dataset_1_for_gen_3_v2 | HungVu2003 | 2025-05-04T00:23:27Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-04T00:23:16Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2007819
num_examples: 13750
download_size: 1031137
dataset_size: 2007819
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gabrielbo/mmlu-pro-verifiers-specific-choice | gabrielbo | 2025-05-03T23:36:23Z | 0 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T07:41:53Z | null | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: question_index
dtype: int64
- name: question
dtype: string
- name: options
dtype: string
- name: category
dtype: string
- name: correct_answer
dtype: string
- name: target_option_letter
dtype: string
- name: samples
sequence: string
splits:
- name: train
num_bytes: 979529
num_examples: 106
download_size: 287899
dataset_size: 979529
---
|
Asap7772/omnimath-hint-generator-deepscaler-qwen3-8b-filtered-base-1k | Asap7772 | 2025-05-03T22:49:10Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T22:49:05Z | null | ---
dataset_info:
features:
- name: domain
sequence: string
- name: difficulty
dtype: float64
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: source
dtype: string
- name: note1
dtype: string
- name: note2
dtype: string
- name: note3
dtype: string
- name: note4
dtype: string
- name: note5
dtype: string
- name: all_hints
dtype: string
splits:
- name: train
num_bytes: 31855009
num_examples: 4428
download_size: 16669680
dataset_size: 31855009
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
VGraf/single_goal_alpacaeval_repeat_5_turns | VGraf | 2025-05-03T22:21:10Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T22:21:08Z | null | ---
dataset_info:
features:
- name: conv
list:
- name: user
dtype: string
- name: sys
dtype: string
- name: id
dtype: string
- name: do_inference
dtype: bool
- name: inst
dtype: string
- name: key
dtype: int64
- name: prompt
dtype: string
- name: instruction_id_list
sequence: 'null'
- name: kwargs
sequence: 'null'
- name: id
dtype: int64
splits:
- name: train
num_bytes: 3820518
num_examples: 805
download_size: 1291948
dataset_size: 3820518
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kothasuhas/llp-gold-37m-1.5m_clip0.99_T2048.0_I2048 | kothasuhas | 2025-05-03T22:11:22Z | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T22:09:30Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: p_log_probs
dtype: float32
- name: q_log_probs
dtype: float32
- name: num_tokens
dtype: float32
- name: log_weight
dtype: float64
splits:
- name: train
num_bytes: 3605804917.0
num_examples: 1500000
download_size: 1563384486
dataset_size: 3605804917.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
cchoi1/kodcode-complete_1000_qwen7b_sol_iter0_att10_sol5_lr1e5_3ep_topp0.9 | cchoi1 | 2025-05-03T21:25:02Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T21:25:01Z | null | ---
dataset_info:
features:
- name: mutation_id
dtype: int64
- name: task_id
dtype: string
- name: mutator_prompt
dtype: string
- name: solver_prompt
dtype: string
- name: response
dtype: string
- name: mutation_explanation
dtype: string
- name: mutation_info
dtype: string
- name: mutator_score
dtype: float64
- name: solution_scores
dtype: string
- name: solutions
dtype: string
- name: solutions_explanation
dtype: string
- name: solutions_info
dtype: string
splits:
- name: train
num_bytes: 34375838
num_examples: 2924
download_size: 6803636
dataset_size: 34375838
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
TylerS00/docstring-chat-ds | TylerS00 | 2025-05-03T21:00:29Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T21:00:05Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: role
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 28919181
num_examples: 15414
download_size: 7775875
dataset_size: 28919181
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
YYT-t/rs-math_math-Mistral-7B-Instruct-v0.2-iter_sample_7500_temp_1.0_gen_30_mlr5e-5 | YYT-t | 2025-05-03T20:25:31Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T20:25:30Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: rational_answer
dtype: string
splits:
- name: train
num_bytes: 3918
num_examples: 3
download_size: 9562
dataset_size: 3918
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.0_num-company_2_dataset_1_for_gen_9_v2 | HungVu2003 | 2025-05-03T20:15:56Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T20:15:55Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 6615412
num_examples: 12500
download_size: 3365999
dataset_size: 6615412
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.0_num-company_2_dataset_0_for_gen_6_v2 | HungVu2003 | 2025-05-03T20:09:58Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T20:09:56Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 1151427
num_examples: 12500
download_size: 701524
dataset_size: 1151427
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mteb/banking77 | mteb | 2025-05-03T20:01:44Z | 6,306 | 3 | [
"task_categories:text-classification",
"annotations_creators:human-annotated",
"multilinguality:monolingual",
"language:eng",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2003.04807",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-classification"
] | 2022-05-17T12:14:06Z | null | ---
annotations_creators:
- human-annotated
language:
- eng
license: mit
multilinguality: monolingual
task_categories:
- text-classification
task_ids: []
tags:
- mteb
- text
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 715028
num_examples: 10003
- name: test
num_bytes: 204010
num_examples: 3080
download_size: 379134
dataset_size: 919038
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">Banking77Classification</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
Dataset composed of online banking queries annotated with their corresponding intents.
| | |
|---------------|---------------------------------------------|
| Task category | t2c |
| Domains | Written |
| Reference | https://arxiv.org/abs/2003.04807 |
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["Banking77Classification"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@inproceedings{casanueva-etal-2020-efficient,
address = {Online},
author = {Casanueva, I{\~n}igo and
Tem{\v{c}}inas, Tadas and
Gerz, Daniela and
Henderson, Matthew and
Vuli{\'c}, Ivan},
booktitle = {Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI},
doi = {10.18653/v1/2020.nlp4convai-1.5},
editor = {Wen, Tsung-Hsien and
Celikyilmaz, Asli and
Yu, Zhou and
Papangelis, Alexandros and
Eric, Mihail and
Kumar, Anuj and
Casanueva, I{\~n}igo and
Shah, Rushin},
month = jul,
pages = {38--45},
publisher = {Association for Computational Linguistics},
title = {Efficient Intent Detection with Dual Sentence Encoders},
url = {https://aclanthology.org/2020.nlp4convai-1.5},
year = {2020},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("Banking77Classification")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"test": {
"num_samples": 3080,
"number_of_characters": 167036,
"number_texts_intersect_with_train": 0,
"min_text_length": 13,
"average_text_length": 54.23246753246753,
"max_text_length": 368,
"unique_text": 3080,
"unique_labels": 77,
"labels": {
"11": {
"count": 40
},
"13": {
"count": 40
},
"32": {
"count": 40
},
"17": {
"count": 40
},
"34": {
"count": 40
},
"46": {
"count": 40
},
"36": {
"count": 40
},
"12": {
"count": 40
},
"4": {
"count": 40
},
"14": {
"count": 40
},
"33": {
"count": 40
},
"41": {
"count": 40
},
"1": {
"count": 40
},
"49": {
"count": 40
},
"23": {
"count": 40
},
"56": {
"count": 40
},
"47": {
"count": 40
},
"8": {
"count": 40
},
"60": {
"count": 40
},
"75": {
"count": 40
},
"15": {
"count": 40
},
"66": {
"count": 40
},
"54": {
"count": 40
},
"40": {
"count": 40
},
"10": {
"count": 40
},
"61": {
"count": 40
},
"6": {
"count": 40
},
"16": {
"count": 40
},
"30": {
"count": 40
},
"74": {
"count": 40
},
"68": {
"count": 40
},
"38": {
"count": 40
},
"73": {
"count": 40
},
"62": {
"count": 40
},
"29": {
"count": 40
},
"22": {
"count": 40
},
"3": {
"count": 40
},
"28": {
"count": 40
},
"44": {
"count": 40
},
"26": {
"count": 40
},
"45": {
"count": 40
},
"42": {
"count": 40
},
"52": {
"count": 40
},
"27": {
"count": 40
},
"51": {
"count": 40
},
"25": {
"count": 40
},
"48": {
"count": 40
},
"55": {
"count": 40
},
"18": {
"count": 40
},
"63": {
"count": 40
},
"70": {
"count": 40
},
"67": {
"count": 40
},
"53": {
"count": 40
},
"21": {
"count": 40
},
"7": {
"count": 40
},
"64": {
"count": 40
},
"50": {
"count": 40
},
"35": {
"count": 40
},
"65": {
"count": 40
},
"71": {
"count": 40
},
"39": {
"count": 40
},
"58": {
"count": 40
},
"43": {
"count": 40
},
"72": {
"count": 40
},
"76": {
"count": 40
},
"37": {
"count": 40
},
"59": {
"count": 40
},
"5": {
"count": 40
},
"20": {
"count": 40
},
"31": {
"count": 40
},
"57": {
"count": 40
},
"0": {
"count": 40
},
"19": {
"count": 40
},
"9": {
"count": 40
},
"2": {
"count": 40
},
"69": {
"count": 40
},
"24": {
"count": 40
}
}
},
"train": {
"num_samples": 10003,
"number_of_characters": 594916,
"number_texts_intersect_with_train": null,
"min_text_length": 13,
"average_text_length": 59.47375787263821,
"max_text_length": 433,
"unique_text": 10003,
"unique_labels": 77,
"labels": {
"11": {
"count": 153
},
"13": {
"count": 139
},
"32": {
"count": 112
},
"17": {
"count": 167
},
"34": {
"count": 166
},
"46": {
"count": 143
},
"36": {
"count": 126
},
"12": {
"count": 112
},
"4": {
"count": 127
},
"14": {
"count": 112
},
"33": {
"count": 118
},
"41": {
"count": 82
},
"1": {
"count": 110
},
"49": {
"count": 115
},
"23": {
"count": 35
},
"56": {
"count": 111
},
"47": {
"count": 149
},
"8": {
"count": 157
},
"60": {
"count": 97
},
"75": {
"count": 180
},
"15": {
"count": 187
},
"66": {
"count": 171
},
"54": {
"count": 129
},
"40": {
"count": 98
},
"10": {
"count": 59
},
"61": {
"count": 146
},
"6": {
"count": 181
},
"16": {
"count": 168
},
"30": {
"count": 121
},
"74": {
"count": 121
},
"68": {
"count": 102
},
"38": {
"count": 106
},
"73": {
"count": 135
},
"62": {
"count": 103
},
"29": {
"count": 121
},
"22": {
"count": 86
},
"3": {
"count": 87
},
"28": {
"count": 182
},
"44": {
"count": 105
},
"26": {
"count": 173
},
"45": {
"count": 159
},
"42": {
"count": 121
},
"52": {
"count": 169
},
"27": {
"count": 133
},
"51": {
"count": 162
},
"25": {
"count": 153
},
"48": {
"count": 148
},
"55": {
"count": 108
},
"18": {
"count": 61
},
"63": {
"count": 175
},
"70": {
"count": 113
},
"67": {
"count": 128
},
"53": {
"count": 161
},
"21": {
"count": 122
},
"7": {
"count": 156
},
"64": {
"count": 172
},
"50": {
"count": 95
},
"35": {
"count": 137
},
"65": {
"count": 113
},
"71": {
"count": 126
},
"39": {
"count": 129
},
"58": {
"count": 114
},
"43": {
"count": 120
},
"72": {
"count": 41
},
"76": {
"count": 163
},
"37": {
"count": 97
},
"59": {
"count": 145
},
"5": {
"count": 171
},
"20": {
"count": 160
},
"31": {
"count": 121
},
"57": {
"count": 114
},
"0": {
"count": 159
},
"19": {
"count": 177
},
"9": {
"count": 129
},
"2": {
"count": 126
},
"69": {
"count": 104
},
"24": {
"count": 129
}
}
}
}
```
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
Samarth0710/neurips-2024-peer-reviews | Samarth0710 | 2025-05-03T18:22:43Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T18:14:28Z | null | ---
dataset_info:
features:
- name: paper_id
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: pdf_url
dtype: string
- name: reviews
list:
- name: confidence
dtype: int64
- name: rating
dtype: int64
- name: review_id
dtype: string
- name: review_text
dtype: string
splits:
- name: train
num_bytes: 50663353
num_examples: 4236
download_size: 26840387
dataset_size: 50663353
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_0.0_num-company_2_dataset_1_for_gen_16_v2 | HungVu2003 | 2025-05-03T18:10:38Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T18:10:36Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 819598
num_examples: 12500
download_size: 566940
dataset_size: 819598
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_0.0_num-company_2_dataset_1_for_gen_15_v2 | HungVu2003 | 2025-05-03T18:05:18Z | 0 | 0 | [
"size_categories:10K<n<100K",
"modality:text",
"region:us"
] | [] | 2025-05-03T18:05:17Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 806306
num_examples: 12500
download_size: 555290
dataset_size: 806306
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_0.0_num-company_2_dataset_0_for_gen_2_v2 | HungVu2003 | 2025-05-03T16:56:08Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T16:56:05Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 818212
num_examples: 12500
download_size: 565767
dataset_size: 818212
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AleNunezArroyo/spacecraft-dataset | AleNunezArroyo | 2025-05-03T16:53:31Z | 81 | 0 | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"arxiv:2106.08186",
"region:us",
"space",
"spacecraft"
] | [
"image-classification"
] | 2025-04-25T14:03:12Z | null | ---
task_categories:
- image-classification
tags:
- space
- spacecraft
size_categories:
- 1K<n<10K
---
# 🛰️ Dataset: A Spacecraft Dataset for Detection, Segmentation and Parts Recognition
This dataset is an adaptation of the work “A Spacecraft Dataset for Detection, Segmentation and Parts Recognition,” using the same data but reformatted for use on HuggingFace. I am not the original author; the reference to the original work is provided below. The only modification is that the test set was created by sampling from the original validation set.
In the following repository, you will find a basic YOLO training setup, along with data visualization and conversion to YOLO-compatible format.
- [YOLO Training Repository on GitHub](https://github.com/AleNunezArroyo/spacecraft-detection)
- 📘 [Paper: _A Spacecraft Dataset for Detection, Segmentation and Parts Recognition_](https://arxiv.org/pdf/2106.08186)
- 💾 [Official repository on GitHub](https://github.com/Yurushia1998/SatelliteDataset)
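As a hedged sketch (my assumption, not from the original authors), the images can likely be loaded straight from this repository with the generic `datasets` imagefolder loader; the split and column names below are assumptions based on the description above:

```python
from datasets import load_dataset

# The repository is tagged as imagefolder format, so the generic loader applies.
ds = load_dataset("AleNunezArroyo/spacecraft-dataset")
print(ds)  # inspect the available splits and features

# imagefolder repos typically expose `image` (PIL) and `label` columns.
example = ds["train"][0]
print(example["label"])
example["image"].save("sample.png")
```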
```bibtex
@misc{hoang2021spacecraftdatasetdetectionsegmentation,
title={A Spacecraft Dataset for Detection, Segmentation and Parts Recognition},
author={Dung Anh Hoang and Bo Chen and Tat-Jun Chin},
year={2021},
eprint={2106.08186},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2106.08186},
}
``` |
HungVu2003/opt-350m_beta_0.5_alpha_0.6_num-company_3_dataset_0_for_gen_13 | HungVu2003 | 2025-05-03T16:41:46Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T16:41:44Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 7304720
num_examples: 12500
download_size: 1973081
dataset_size: 7304720
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
littleGuagua/x_dataset_31933 | littleGuagua | 2025-05-03T16:40:50Z | 1,275 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-26T13:49:43Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** littleGuagua/x_dataset_31933
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5Fv7zb16pPjz3PRat6fhJytGWW53dLFtQUQvnfaX7bpR5YEy
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: Datasets are mostly English, but can be multilingual due to decentralized ways of creation.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
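For illustration, a minimal sketch of a custom time-based split using the standard `datasets` streaming API; the cutoff date and the assumption that `datetime` is an ISO-formatted string are mine, not part of the dataset specification:

```python
from datasets import load_dataset

# Stream the data instead of downloading all ~45M rows at once.
ds = load_dataset("littleGuagua/x_dataset_31933", split="train", streaming=True)

# Hypothetical cutoff: ISO-formatted datetime strings compare lexicographically.
CUTOFF = "2025-02-01"
train_stream = ds.filter(lambda row: row["datetime"] < CUTOFF)
eval_stream = ds.filter(lambda row: row["datetime"] >= CUTOFF)

for row in train_stream.take(3):
    print(row["datetime"], row["text"][:80])
```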
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{littleGuagua2025datauniversex_dataset_31933,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={littleGuagua},
year={2025},
url={https://huggingface.co/datasets/littleGuagua/x_dataset_31933},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 45517419
- **Date Range:** 2025-01-21T00:00:00Z to 2025-02-12T00:00:00Z
- **Last Updated:** 2025-02-18T21:16:27Z
### Data Distribution
- Tweets with hashtags: 41.23%
- Tweets without hashtags: 58.77%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 26750172 | 58.77% |
| 2 | #riyadh | 324609 | 0.71% |
| 3 | #zelena | 201974 | 0.44% |
| 4 | #tiktok | 178927 | 0.39% |
| 5 | #ad | 105436 | 0.23% |
| 6 | #bbb25 | 82587 | 0.18% |
| 7 | #jhope_at_galadespiècesjaunes | 70006 | 0.15% |
| 8 | #bbmzansi | 62802 | 0.14% |
| 9 | #trump | 57439 | 0.13% |
| 10 | #pr | 56224 | 0.12% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-26T13:50:33Z | 2863904 | 2863904 |
| 2025-01-30T01:54:18Z | 10112334 | 12976238 |
| 2025-02-02T13:57:30Z | 9473545 | 22449783 |
| 2025-02-06T02:00:48Z | 7706558 | 30156341 |
| 2025-02-09T14:03:35Z | 5776384 | 35932725 |
| 2025-02-13T02:27:40Z | 8126791 | 44059516 |
| 2025-02-18T06:15:19Z | 648961 | 44708477 |
| 2025-02-18T21:16:27Z | 808942 | 45517419 |
|
littleGuagua/x_dataset_11627 | littleGuagua | 2025-05-03T16:37:25Z | 1,557 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-26T13:13:48Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** littleGuagua/x_dataset_11627
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5FUByNzgdM2eukk6SwetFsZ4EPTxRqaV4YNEhNcusS1SxRVX
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: Datasets are mostly English, but can be multilingual due to decentralized ways of creation.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
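As one possible approach (the sample size and seed are illustrative assumptions), a small sample can be materialised and then split conventionally:

```python
from datasets import Dataset, load_dataset

# Stream a small sample rather than downloading all ~149M rows.
stream = load_dataset("littleGuagua/x_dataset_11627", split="train", streaming=True)
sample = list(stream.take(10_000))

# Materialise the sample and create a conventional random split.
small = Dataset.from_list(sample)
splits = small.train_test_split(test_size=0.1, seed=42)
print(splits["train"].num_rows, splits["test"].num_rows)
```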
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{littleGuagua2025datauniversex_dataset_11627,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={littleGuagua},
year={2025},
url={https://huggingface.co/datasets/littleGuagua/x_dataset_11627},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 149000631
- **Date Range:** 2025-01-21T00:00:00Z to 2025-02-10T00:00:00Z
- **Last Updated:** 2025-02-18T20:47:28Z
### Data Distribution
- Tweets with hashtags: 42.62%
- Tweets without hashtags: 57.38%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 85489881 | 57.38% |
| 2 | #riyadh | 1033096 | 0.69% |
| 3 | #zelena | 790108 | 0.53% |
| 4 | #tiktok | 618215 | 0.41% |
| 5 | #bbb25 | 362232 | 0.24% |
| 6 | #ad | 356819 | 0.24% |
| 7 | #jhope_at_galadespiècesjaunes | 234343 | 0.16% |
| 8 | #bbmzansi | 207541 | 0.14% |
| 9 | #pr | 188395 | 0.13% |
| 10 | #yahooニュース | 178958 | 0.12% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-26T13:14:32Z | 2274090 | 2274090 |
| 2025-01-30T01:26:02Z | 29523249 | 31797339 |
| 2025-02-02T13:36:10Z | 29333848 | 61131187 |
| 2025-02-06T01:47:05Z | 28740147 | 89871334 |
| 2025-02-09T14:00:59Z | 29293177 | 119164511 |
| 2025-02-13T02:15:32Z | 28379764 | 147544275 |
| 2025-02-18T05:45:25Z | 808939 | 148353214 |
| 2025-02-18T20:47:28Z | 647417 | 149000631 |
|
FrancophonIA/Lexique_ZLEA | FrancophonIA | 2025-05-03T16:34:27Z | 0 | 0 | [
"task_categories:translation",
"language:fra",
"language:eng",
"region:us"
] | [
"translation"
] | 2025-05-03T16:33:27Z | null | ---
language:
- fra
- eng
viewer: false
task_categories:
- translation
---
> [!NOTE]
> Dataset origin: https://publications.gc.ca/site/eng/9.800970/publication.html |
amekerishvili/ATCO2_full_files | amekerishvili | 2025-05-03T16:14:01Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T12:01:28Z | null | ---
dataset_info:
features:
- name: ID
dtype: string
- name: audio_file
dtype: string
- name: start_time
dtype: float64
- name: end_time
dtype: float64
- name: airport
dtype: string
- name: channel
dtype: string
- name: frequency
dtype: string
- name: time
dtype: string
- name: waypoints
dtype: string
- name: callsigns
dtype: string
- name: ground_truth_raw
dtype: string
- name: ground_truth
dtype: string
- name: non_Eng_ground_truth
dtype: string
- name: tags
dtype: string
- name: values_tags
dtype: string
- name: commands_tags
dtype: string
- name: callsigns_tags
dtype: string
- name: unnamed_tags
dtype: string
splits:
- name: train
num_bytes: 1558206
num_examples: 612
- name: validation
num_bytes: 362174
num_examples: 136
- name: test
num_bytes: 356108
num_examples: 129
download_size: 397317
dataset_size: 2276488
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
haibaraconan/tif | haibaraconan | 2025-05-03T16:12:00Z | 310 | 1 | [
"size_categories:100B<n<1T",
"modality:image",
"region:us",
"art"
] | [] | 2024-07-22T04:11:31Z | null | ---
tags:
- art
size_categories:
- 100B<n<1T
---
This directory includes a few sample datasets to get you started.
* `california_housing_data*.csv` is California housing data from the 1990 US
Census; more information is available at:
https://developers.google.com/machine-learning/crash-course/california-housing-data-description
* `mnist_*.csv` is a small sample of the
[MNIST database](https://en.wikipedia.org/wiki/MNIST_database), which is
described at: http://yann.lecun.com/exdb/mnist/
* `anscombe.json` contains a copy of
[Anscombe's quartet](https://en.wikipedia.org/wiki/Anscombe%27s_quartet); it
was originally described in
Anscombe, F. J. (1973). 'Graphs in Statistical Analysis'. American
Statistician. 27 (1): 17-21. JSTOR 2682899.
and our copy was prepared by the
[vega_datasets library](https://github.com/altair-viz/vega_datasets/blob/4f67bdaad10f45e3549984e17e1b3088c731503d/vega_datasets/_data/anscombe.json). |
jaeyong2/Qwen3-06B-Ko-KTO | jaeyong2 | 2025-05-03T15:42:25Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T07:13:48Z | null | ---
dataset_info:
features:
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: completion
list:
- name: content
dtype: string
- name: role
dtype: string
- name: label
dtype: bool
splits:
- name: train
num_bytes: 78379361
num_examples: 13526
download_size: 24394518
dataset_size: 78379361
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/no_pipeline_math_100k | mlfoundations-dev | 2025-05-03T15:39:48Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T15:39:07Z | null | ---
dataset_info:
features:
- name: instruction_seed
dtype: string
- name: reasoning
dtype: string
- name: deepseek_solution
dtype: string
- name: source
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: shard_id
dtype: string
splits:
- name: train
num_bytes: 2409250240.506329
num_examples: 100000
download_size: 1062629811
dataset_size: 2409250240.506329
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/no_pipeline_math_1k | mlfoundations-dev | 2025-05-03T15:38:46Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T15:38:44Z | null | ---
dataset_info:
features:
- name: instruction_seed
dtype: string
- name: reasoning
dtype: string
- name: deepseek_solution
dtype: string
- name: source
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: shard_id
dtype: string
splits:
- name: train
num_bytes: 24092502.40506329
num_examples: 1000
download_size: 10837192
dataset_size: 24092502.40506329
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
t2ance/polymnist-upd10 | t2ance | 2025-05-03T15:30:26Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-03T14:29:24Z | null | ---
dataset_info:
features:
- name: m0
dtype: image
- name: m1
dtype: image
- name: m2
dtype: image
- name: m3
dtype: image
- name: m4
dtype: image
- name: m5
dtype: image
- name: m6
dtype: image
- name: m7
dtype: image
- name: m8
dtype: image
- name: m9
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
'5': '5'
'6': '6'
'7': '7'
'8': '8'
'9': '9'
- name: sample_id
dtype: string
splits:
- name: train
num_bytes: 632476369.0
num_examples: 50000
- name: validation
num_bytes: 126523764.0
num_examples: 10000
- name: test
num_bytes: 126347943.0
num_examples: 10000
download_size: 916377594
dataset_size: 885348076.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
gunnybd01/Momentum_smr | gunnybd01 | 2025-05-03T15:13:39Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-02T15:19:07Z | null | ---
dataset_info:
features:
- name: Keys
dtype: string
- name: reports
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 179112899
num_examples: 60000
download_size: 29686617
dataset_size: 179112899
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
FrancophonIA/Vocabulaire-de-la-biologie-2017 | FrancophonIA | 2025-05-03T15:11:38Z | 6 | 0 | [
"task_categories:translation",
"language:fra",
"language:eng",
"region:us"
] | [
"translation"
] | 2025-04-28T20:15:12Z | null | ---
language:
- fra
- eng
viewer: false
task_categories:
- translation
---
> [!NOTE]
> Dataset origin: https://www.culture.gouv.fr/fr/thematiques/langue-francaise-et-langues-de-france/agir-pour-les-langues/moderniser-et-enrichir-la-langue-francaise/nos-publications/Vocabulaire-de-la-biologie-2017
## Description
The Délégation générale à la langue française et aux langues de France is publishing a Vocabulaire de la biologie for the first time: 611 terms and definitions covering new notions, many of which previously had no French designation. |
FrancophonIA/Vocabulaire-du-Petrole-et-du-Gaz-2015 | FrancophonIA | 2025-05-03T15:08:24Z | 6 | 0 | [
"task_categories:translation",
"language:fra",
"language:eng",
"region:us"
] | [
"translation"
] | 2025-04-28T20:19:44Z | null | ---
language:
- fra
- eng
viewer: false
task_categories:
- translation
---
> [!NOTE]
> Dataset origin: https://www.culture.gouv.fr/fr/thematiques/langue-francaise-et-langues-de-france/agir-pour-les-langues/moderniser-et-enrichir-la-langue-francaise/nos-publications/Vocabulaire-du-Petrole-et-du-Gaz-2015
## Description
This vocabulary is a revised and expanded edition of the 2007 edition. It collects more than 300 terms and definitions covering new notions, most of which did not yet have a French designation. |
FrancophonIA/References-2016-l-enrichissement-de-la-langue-francaise | FrancophonIA | 2025-05-03T15:07:10Z | 6 | 0 | [
"task_categories:translation",
"language:fra",
"language:eng",
"region:us"
] | [
"translation"
] | 2025-04-28T20:17:26Z | null | ---
language:
- fra
- eng
viewer: false
task_categories:
- translation
---
> [!NOTE]
> Dataset origin: https://www.culture.gouv.fr/fr/thematiques/langue-francaise-et-langues-de-france/agir-pour-les-langues/moderniser-et-enrichir-la-langue-francaise/nos-publications/References-2016-l-enrichissement-de-la-langue-francaise
## Description
To make it possible to name new realities and technical innovations in French, the French-language enrichment mechanism established by the decree of 3 July 1996 (and amended by the decree of 25 March 2015) produces high-quality terminology that conforms to the rules of word formation, is easily understandable, and serves as a reference. |
FrancophonIA/Vocabulaire-de-la-sante-2013 | FrancophonIA | 2025-05-03T14:59:54Z | 3 | 0 | [
"task_categories:translation",
"language:fra",
"language:eng",
"region:us"
] | [
"translation"
] | 2025-04-29T20:41:50Z | null | ---
language:
- fra
- eng
viewer: false
task_categories:
- translation
---
> [!NOTE]
> Dataset origin: https://www.culture.gouv.fr/fr/thematiques/langue-francaise-et-langues-de-france/agir-pour-les-langues/moderniser-et-enrichir-la-langue-francaise/nos-publications/Vocabulaire-de-la-sante-2013 |
FrancophonIA/Vocabulaire-de-l-audiovisuel-et-de-la-communication-2010 | FrancophonIA | 2025-05-03T14:43:53Z | 3 | 0 | [
"task_categories:translation",
"language:fra",
"language:eng",
"region:us"
] | [
"translation"
] | 2025-04-29T20:50:04Z | null | ---
language:
- fra
- eng
viewer: false
task_categories:
- translation
---
> [!NOTE]
> Dataset origin: https://www.culture.gouv.fr/fr/thematiques/langue-francaise-et-langues-de-france/agir-pour-les-langues/moderniser-et-enrichir-la-langue-francaise/nos-publications/Vocabulaire-de-l-audiovisuel-et-de-la-communication-2010 |