datasetId (string, len 6-107) | author (string, len 3-34) | last_modified (date, 2021-05-20 to 2025-05-05) | downloads (int64, 0 to 4.28M) | likes (int64, 0 to 7.74k) | tags (list, len 1-2.03k) | task_categories (list, len 0-16) | createdAt (date, 2022-03-02 to 2025-05-05) | trending_score (float64, 1-39) | card (string, len 31-1M)
---|---|---|---|---|---|---|---|---|---
SayantanJoker/Shrutilipi_Hindi_resampled_44100_merged_15_quality | SayantanJoker | 2025-05-05T04:58:09Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-05T04:58:06Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: file_name
dtype: string
- name: utterance_pitch_mean
dtype: float32
- name: utterance_pitch_std
dtype: float32
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: float64
- name: phonemes
dtype: string
- name: stoi
dtype: float64
- name: si-sdr
dtype: float64
- name: pesq
dtype: float64
splits:
- name: train
num_bytes: 13278527
num_examples: 34675
download_size: 5939408
dataset_size: 13278527
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
pkgforge-security/domains | pkgforge-security | 2025-05-05T03:33:05Z | 1,292 | 1 | [
"license:mit",
"size_categories:1B<n<10B",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-03-08T09:15:36Z | null | ---
size_categories:
- 100B<n<1T
pretty_name: Domains
license: mit
---
<p align="center">
<a href="https://github.com/pkgforge-security/domains">
<img src="https://cdn-uploads.huggingface.co/production/uploads/66e82daa04200de7aa4cba26/V6PJzNaHVd_Po5E74GmmC.png" alt="pkgforge" width="256"/></a>
<b><strong><a href="https://github.com/pkgforge-security/domains">Internet Domains</a></strong></b>
<br>
<img src="https://cdn-uploads.huggingface.co/production/uploads/66e82daa04200de7aa4cba26/hXVfqrL-YKJt-roX0DIgx.png" width="200" />
</p>
- ### [Domains](https://github.com/pkgforge-security/domains)
> - HuggingFace Hub Mirror for https://github.com/pkgforge-security/domains
> - The Sync Workflow actions are at: https://github.com/pkgforge-security/domains
- #### TOS & Abuse (To [Hugging-Face's](https://huggingface.co) Staff)
> Hi, if you are an official from [Hugging-Face](https://huggingface.co) investigating why this repo is so large and are considering deleting it and terminating our account:<br>
> Please note that this project benefits a lot of people (you can do a code search on GitHub: `pkg.pkgforge.dev`) and is in no way meant to abuse [Hugging-Face's](https://huggingface.co) resources.<br>
> You can check out the main project at [https://github.com/pkgforge/soar](https://github.com/pkgforge/soar) to see what all of this is actually about.<br>
> However, if that doesn't convince you and you feel this project isn't worth supporting, do reach out to us at [https://docs.pkgforge.dev/contact/chat](https://docs.pkgforge.dev/contact/chat).<br>
> We will back up the data and delete our account to comply with the TOS. |
HungVu2003/opt-350m_beta_0.0_alpha_0.2_num-company_2_dataset_1_for_gen_12_v2 | HungVu2003 | 2025-05-04T23:57:20Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T23:57:19Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 3020032
num_examples: 13750
download_size: 980307
dataset_size: 3020032
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
GitBag/DeepSeek-R1-Distill-Qwen-1.5B_hmmt-feb-25_eval | GitBag | 2025-05-04T23:07:04Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T23:07:03Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: response_0
dtype: string
- name: response_1
dtype: string
- name: response_2
dtype: string
- name: response_3
dtype: string
- name: response_4
dtype: string
- name: response_5
dtype: string
- name: response_6
dtype: string
- name: response_7
dtype: string
- name: response_8
dtype: string
- name: response_9
dtype: string
- name: response_10
dtype: string
- name: response_11
dtype: string
- name: response_12
dtype: string
- name: response_13
dtype: string
- name: response_14
dtype: string
- name: response_15
dtype: string
- name: response_16
dtype: string
- name: response_17
dtype: string
- name: response_18
dtype: string
- name: response_19
dtype: string
- name: response_20
dtype: string
- name: response_21
dtype: string
- name: response_22
dtype: string
- name: response_23
dtype: string
- name: response_24
dtype: string
- name: response_25
dtype: string
- name: response_26
dtype: string
- name: response_27
dtype: string
- name: response_28
dtype: string
- name: response_29
dtype: string
- name: response_30
dtype: string
- name: response_31
dtype: string
- name: eval_0
dtype: float64
- name: eval_1
dtype: float64
- name: eval_2
dtype: float64
- name: eval_3
dtype: float64
- name: eval_4
dtype: float64
- name: eval_5
dtype: float64
- name: eval_6
dtype: float64
- name: eval_7
dtype: float64
- name: eval_8
dtype: float64
- name: eval_9
dtype: float64
- name: eval_10
dtype: float64
- name: eval_11
dtype: float64
- name: eval_12
dtype: float64
- name: eval_13
dtype: float64
- name: eval_14
dtype: float64
- name: eval_15
dtype: float64
- name: eval_16
dtype: float64
- name: eval_17
dtype: float64
- name: eval_18
dtype: float64
- name: eval_19
dtype: float64
- name: eval_20
dtype: float64
- name: eval_21
dtype: float64
- name: eval_22
dtype: float64
- name: eval_23
dtype: float64
- name: eval_24
dtype: float64
- name: eval_25
dtype: float64
- name: eval_26
dtype: float64
- name: eval_27
dtype: float64
- name: eval_28
dtype: float64
- name: eval_29
dtype: float64
- name: eval_30
dtype: float64
- name: eval_31
dtype: float64
splits:
- name: train
num_bytes: 39227790
num_examples: 30
download_size: 13160569
dataset_size: 39227790
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kingsleykim/openthoughts_qwen_2.5_math_7b | kingsleykim | 2025-05-04T21:15:11Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T21:15:06Z | null | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: correctness
dtype: bool
- name: answers
dtype: string
- name: questions
dtype: string
- name: correct
dtype: bool
- name: question
dtype: string
splits:
- name: train
num_bytes: 791024669
num_examples: 191973
download_size: 124584820
dataset_size: 791024669
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mmmmmp/robot_test3 | mmmmmp | 2025-05-04T21:01:39Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-05-04T21:01:36Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 443,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
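The `data_path` and `video_path` entries above are Python format templates. A minimal sketch of resolving them to concrete file names (the chunk arithmetic is an assumption inferred from `chunks_size`, not part of LeRobot's documented API):
```python
# Sketch: resolve episode file paths from the templates in meta/info.json.
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
video_path = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"

chunks_size = 1000  # from meta/info.json
episode_index = 0
episode_chunk = episode_index // chunks_size  # assumption: episodes grouped by chunks_size

print(data_path.format(episode_chunk=episode_chunk, episode_index=episode_index))
# data/chunk-000/episode_000000.parquet
print(video_path.format(episode_chunk=episode_chunk,
                        episode_index=episode_index,
                        video_key="observation.images.laptop"))
# videos/chunk-000/observation.images.laptop/episode_000000.mp4
```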
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
ibragim-bad/random-data-1 | ibragim-bad | 2025-05-04T20:15:19Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T20:15:17Z | null | ---
dataset_info:
features:
- name: instance_id
dtype: string
- name: base_commit
dtype: string
- name: created_at
dtype: timestamp[us]
- name: environment_setup_commit
dtype: string
- name: hints_text
dtype: string
- name: patch
dtype: string
- name: problem_statement
dtype: string
- name: repo
dtype: string
- name: test_patch
dtype: string
- name: meta
struct:
- name: commit_name
dtype: string
- name: failed_lite_validators
sequence: string
- name: has_test_patch
dtype: bool
- name: is_lite
dtype: bool
- name: num_modified_files
dtype: int64
- name: version
dtype: string
- name: install_config
struct:
- name: env_yml_path
sequence: string
- name: install
dtype: string
- name: log_parser
dtype: string
- name: no_use_env
dtype: bool
- name: packages
dtype: string
- name: pip_packages
sequence: string
- name: pre_install
sequence: string
- name: python
dtype: string
- name: reqs_path
sequence: string
- name: test_cmd
dtype: string
- name: requirements
dtype: string
- name: environment
dtype: string
- name: FAIL_TO_PASS
sequence: string
- name: FAIL_TO_FAIL
sequence: string
- name: PASS_TO_PASS
sequence: string
- name: PASS_TO_FAIL
sequence: 'null'
- name: license_name
dtype: string
splits:
- name: test
num_bytes: 2540351
num_examples: 100
download_size: 884812
dataset_size: 2540351
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.2_num-company_2_dataset_1_for_gen_17_v2 | HungVu2003 | 2025-05-04T17:43:09Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T17:43:07Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 6385788
num_examples: 13750
download_size: 3253110
dataset_size: 6385788
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hshwk1983/x_dataset_52806 | hshwk1983 | 2025-05-04T17:09:03Z | 2,337 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-27T00:12:13Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** hshwk1983/x_dataset_52806
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5Fc1UBhNfu7MCTK51dvY7Yukt1ZLjAbNW832GirVQcLE3Wj2
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: English. Because the data is collected in a decentralized way, other languages may also appear.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
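For example, a simple time-based split can be built with the `datasets` library (a sketch, assuming the split name is `train` and the `datetime` field is an ISO-formatted string as documented above; the cut-off date is arbitrary):
```python
from datasets import load_dataset

ds = load_dataset("hshwk1983/x_dataset_52806", split="train")  # split name is an assumption

cutoff = "2025-02-01"  # arbitrary cut-off; ISO date strings compare lexicographically
train = ds.filter(lambda row: row["datetime"] < cutoff)
test = ds.filter(lambda row: row["datetime"] >= cutoff)
```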
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{hshwk19832025datauniversex_dataset_52806,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={hshwk1983},
year={2025},
url={https://huggingface.co/datasets/hshwk1983/x_dataset_52806},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 53704103
- **Date Range:** 2025-01-21T00:00:00Z to 2025-02-13T00:00:00Z
- **Last Updated:** 2025-02-18T18:20:54Z
### Data Distribution
- Tweets with hashtags: 48.13%
- Tweets without hashtags: 51.87%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 27855573 | 51.87% |
| 2 | #riyadh | 382358 | 0.71% |
| 3 | #zelena | 324161 | 0.60% |
| 4 | #tiktok | 246981 | 0.46% |
| 5 | #bbb25 | 141261 | 0.26% |
| 6 | #ad | 136046 | 0.25% |
| 7 | #superbowl | 106457 | 0.20% |
| 8 | #bbmzansi | 91608 | 0.17% |
| 9 | #jhope_at_galadespiècesjaunes | 91561 | 0.17% |
| 10 | #transferlerlebirliktezafere | 80161 | 0.15% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-27T00:12:58Z | 2331611 | 2331611 |
| 2025-01-30T12:15:15Z | 7216791 | 9548402 |
| 2025-02-03T00:17:43Z | 7388142 | 16936544 |
| 2025-02-06T12:21:06Z | 8811525 | 25748069 |
| 2025-02-10T00:25:18Z | 10398234 | 36146303 |
| 2025-02-13T13:59:25Z | 4703197 | 40849500 |
| 2025-02-17T02:19:40Z | 11306944 | 52156444 |
| 2025-02-18T03:19:10Z | 829790 | 52986234 |
| 2025-02-18T18:20:54Z | 717869 | 53704103 |
|
Joseph7D/emotion-dataset | Joseph7D | 2025-05-04T16:34:13Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T16:34:09Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: emotion
dtype: string
splits:
- name: train
num_bytes: 3013664
num_examples: 26928
- name: test
num_bytes: 372292
num_examples: 3366
- name: validation
num_bytes: 378972
num_examples: 3366
download_size: 2318145
dataset_size: 3764928
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
GitBag/Qwen2.5-7B_hmmt-feb-24_eval | GitBag | 2025-05-04T16:30:26Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T16:30:24Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: response_0
dtype: string
- name: response_1
dtype: string
- name: response_2
dtype: string
- name: response_3
dtype: string
- name: response_4
dtype: string
- name: response_5
dtype: string
- name: response_6
dtype: string
- name: response_7
dtype: string
- name: response_8
dtype: string
- name: response_9
dtype: string
- name: response_10
dtype: string
- name: response_11
dtype: string
- name: response_12
dtype: string
- name: response_13
dtype: string
- name: response_14
dtype: string
- name: response_15
dtype: string
- name: response_16
dtype: string
- name: response_17
dtype: string
- name: response_18
dtype: string
- name: response_19
dtype: string
- name: response_20
dtype: string
- name: response_21
dtype: string
- name: response_22
dtype: string
- name: response_23
dtype: string
- name: response_24
dtype: string
- name: response_25
dtype: string
- name: response_26
dtype: string
- name: response_27
dtype: string
- name: response_28
dtype: string
- name: response_29
dtype: string
- name: response_30
dtype: string
- name: response_31
dtype: string
- name: eval_0
dtype: float64
- name: eval_1
dtype: float64
- name: eval_2
dtype: float64
- name: eval_3
dtype: float64
- name: eval_4
dtype: float64
- name: eval_5
dtype: float64
- name: eval_6
dtype: float64
- name: eval_7
dtype: float64
- name: eval_8
dtype: float64
- name: eval_9
dtype: float64
- name: eval_10
dtype: float64
- name: eval_11
dtype: float64
- name: eval_12
dtype: float64
- name: eval_13
dtype: float64
- name: eval_14
dtype: float64
- name: eval_15
dtype: float64
- name: eval_16
dtype: float64
- name: eval_17
dtype: float64
- name: eval_18
dtype: float64
- name: eval_19
dtype: float64
- name: eval_20
dtype: float64
- name: eval_21
dtype: float64
- name: eval_22
dtype: float64
- name: eval_23
dtype: float64
- name: eval_24
dtype: float64
- name: eval_25
dtype: float64
- name: eval_26
dtype: float64
- name: eval_27
dtype: float64
- name: eval_28
dtype: float64
- name: eval_29
dtype: float64
- name: eval_30
dtype: float64
- name: eval_31
dtype: float64
splits:
- name: train
num_bytes: 5449227
num_examples: 30
download_size: 1476657
dataset_size: 5449227
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
harpreetmann/go_emotions_max_500_string_chat | harpreetmann | 2025-05-04T16:14:55Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T16:14:49Z | null | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 27852466
num_examples: 43409
- name: test
num_bytes: 3488513
num_examples: 5427
- name: validation
num_bytes: 3487936
num_examples: 5426
- name: discarded
num_bytes: 3483
num_examples: 1
download_size: 8204171
dataset_size: 34832398
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
- split: discarded
path: data/discarded-*
---
|
mteb/told-br | mteb | 2025-05-04T16:12:17Z | 527 | 0 | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"task_ids:sentiment-analysis",
"task_ids:sentiment-scoring",
"task_ids:sentiment-classification",
"task_ids:hate-speech-detection",
"annotations_creators:expert-annotated",
"multilinguality:monolingual",
"language:por",
"license:cc-by-sa-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2010.04543",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-classification"
] | 2024-10-29T07:49:35Z | null | ---
annotations_creators:
- expert-annotated
language:
- por
license: cc-by-sa-4.0
multilinguality: monolingual
task_categories:
- text-classification
task_ids:
- multi-label-classification
- sentiment-analysis
- sentiment-scoring
- sentiment-classification
- hate-speech-detection
dataset_info:
features:
- name: text
dtype: string
- name: homophobia
dtype:
class_label:
names:
'0': zero_votes
'1': one_vote
'2': two_votes
'3': three_votes
- name: obscene
dtype:
class_label:
names:
'0': zero_votes
'1': one_vote
'2': two_votes
'3': three_votes
- name: insult
dtype:
class_label:
names:
'0': zero_votes
'1': one_vote
'2': two_votes
'3': three_votes
- name: racism
dtype:
class_label:
names:
'0': zero_votes
'1': one_vote
'2': two_votes
'3': three_votes
- name: misogyny
dtype:
class_label:
names:
'0': zero_votes
'1': one_vote
'2': two_votes
'3': three_votes
- name: xenophobia
dtype:
class_label:
names:
'0': zero_votes
'1': one_vote
'2': two_votes
'3': three_votes
splits:
- name: train
num_bytes: 2977994
num_examples: 21000
download_size: 1464831
dataset_size: 2977994
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">BrazilianToxicTweetsClassification</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
ToLD-Br is the largest dataset of toxic tweets in Brazilian Portuguese, crowdsourced by 42 annotators selected from
a pool of 129 volunteers. Annotators were chosen to form a demographically diverse group (ethnicity,
sexual orientation, age, gender). Each tweet was labeled by three annotators across six possible categories: LGBTQ+phobia,
Xenophobia, Obscene, Insult, Misogyny and Racism.
| | |
|---------------|---------------------------------------------|
| Task category | t2c |
| Domains | Constructed, Written |
| Reference | https://paperswithcode.com/dataset/told-br |
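Because each category stores a vote count from the three annotators (0-3 votes), a common preprocessing step is to binarize by majority vote. A minimal sketch, assuming the `datasets` library and a 2-vote threshold:
```python
from datasets import load_dataset

ds = load_dataset("mteb/told-br", split="train")
categories = ["homophobia", "obscene", "insult", "racism", "misogyny", "xenophobia"]

def binarize(row):
    # Positive when at least 2 of the 3 annotators voted for the category.
    return {f"{c}_binary": int(row[c] >= 2) for c in categories}

ds = ds.map(binarize)
print(ds[0])
```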
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["BrazilianToxicTweetsClassification"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@article{DBLP:journals/corr/abs-2010-04543,
author = {Joao Augusto Leite and
Diego F. Silva and
Kalina Bontcheva and
Carolina Scarton},
eprint = {2010.04543},
eprinttype = {arXiv},
journal = {CoRR},
timestamp = {Tue, 15 Dec 2020 16:10:16 +0100},
title = {Toxic Language Detection in Social Media for Brazilian Portuguese:
New Dataset and Multilingual Analysis},
url = {https://arxiv.org/abs/2010.04543},
volume = {abs/2010.04543},
year = {2020},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
  author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("BrazilianToxicTweetsClassification")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"test": {
"num_samples": 2048,
"number_of_characters": 172708,
"number_texts_intersect_with_train": 23,
"min_text_length": 5,
"average_text_length": 84.330078125,
"max_text_length": 304,
"unique_texts": 2046,
"min_labels_per_text": 0,
"average_label_per_text": 0.57958984375,
"max_labels_per_text": 4,
"unique_labels": 7,
"labels": {
"obscene": {
"count": 653
},
"insult": {
"count": 430
},
"misogyny": {
"count": 46
},
"racism": {
"count": 13
},
"xenophobia": {
"count": 13
},
"homophobia": {
"count": 32
},
"None": {
"count": 1145
}
}
},
"train": {
"num_samples": 8192,
"number_of_characters": 714281,
"number_texts_intersect_with_train": null,
"min_text_length": 4,
"average_text_length": 87.1925048828125,
"max_text_length": 322,
"unique_texts": 8172,
"min_labels_per_text": 0,
"average_label_per_text": 0.5751953125,
"max_labels_per_text": 4,
"unique_labels": 7,
"labels": {
"None": {
"count": 4580
},
"obscene": {
"count": 2576
},
"insult": {
"count": 1700
},
"homophobia": {
"count": 139
},
"misogyny": {
"count": 179
},
"racism": {
"count": 54
},
"xenophobia": {
"count": 64
}
}
}
}
```
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
mteb/MMarcoRetrieval | mteb | 2025-05-04T16:11:36Z | 26 | 0 | [
"task_categories:text-retrieval",
"multilinguality:monolingual",
"language:cmn",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2309.07597",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-retrieval"
] | 2024-11-28T10:47:15Z | null | ---
language:
- cmn
multilinguality: monolingual
task_categories:
- text-retrieval
task_ids: []
dataset_info:
- config_name: corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: dev
num_bytes: 32979607
num_examples: 106813
download_size: 20219994
dataset_size: 32979607
- config_name: default
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 217670
num_examples: 7437
download_size: 114126
dataset_size: 217670
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: dev
num_bytes: 303316
num_examples: 6980
download_size: 219572
dataset_size: 303316
configs:
- config_name: corpus
data_files:
- split: dev
path: corpus/dev-*
- config_name: default
data_files:
- split: dev
path: data/dev-*
- config_name: queries
data_files:
- split: dev
path: queries/dev-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">MMarcoRetrieval</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
MMarcoRetrieval: a Chinese passage-retrieval task built on mMARCO, the multilingual variant of MS MARCO.
| | |
|---------------|---------------------------------------------|
| Task category | t2t |
| Domains | None |
| Reference | https://arxiv.org/abs/2309.07597 |
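The task is published in the BEIR-style three-config layout shown in the metadata: `corpus` holds the passages, `queries` the questions, and the default config the relevance judgments (qrels). A sketch of loading them directly with `datasets`:
```python
from datasets import load_dataset

corpus = load_dataset("mteb/MMarcoRetrieval", "corpus", split="dev")
queries = load_dataset("mteb/MMarcoRetrieval", "queries", split="dev")
qrels = load_dataset("mteb/MMarcoRetrieval", split="dev")  # query-id, corpus-id, score

print(corpus[0]["_id"], queries[0]["_id"], qrels[0])
```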
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["MMarcoRetrieval"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@misc{xiao2024cpack,
archiveprefix = {arXiv},
author = {Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff and Defu Lian and Jian-Yun Nie},
eprint = {2309.07597},
primaryclass = {cs.CL},
title = {C-Pack: Packaged Resources To Advance General Chinese Embedding},
year = {2024},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
  author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("MMarcoRetrieval")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"dev": {
"num_samples": 113793,
"number_of_characters": 12294685,
"num_documents": 106813,
"min_document_length": 13,
"average_document_length": 114.41787048392986,
"max_document_length": 1709,
"unique_documents": 106813,
"num_queries": 6980,
"min_query_length": 2,
"average_query_length": 10.51131805157593,
"max_query_length": 61,
"unique_queries": 6980,
"none_queries": 0,
"num_relevant_docs": 7437,
"min_relevant_docs_per_query": 1,
"average_relevant_docs_per_query": 1.0654727793696275,
"max_relevant_docs_per_query": 4,
"unique_relevant_docs": 7433,
"num_instructions": null,
"min_instruction_length": null,
"average_instruction_length": null,
"max_instruction_length": null,
"unique_instructions": null,
"num_top_ranked": null,
"min_top_ranked_per_query": null,
"average_top_ranked_per_query": null,
"max_top_ranked_per_query": null
}
}
```
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
mteb/NQ_PL_test_top_250_only_w_correct-v2 | mteb | 2025-05-04T16:11:21Z | 22 | 0 | [
"task_categories:text-retrieval",
"multilinguality:translated",
"source_datasets:mteb/nq",
"language:pol",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2305.19840",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-retrieval"
] | 2024-09-28T05:31:56Z | null | ---
language:
- pol
multilinguality: translated
source_datasets:
- mteb/nq
task_categories:
- text-retrieval
task_ids: []
dataset_info:
- config_name: corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: test
num_bytes: 101910982.7251677
num_examples: 184765
download_size: 80088058
dataset_size: 101910982.7251677
- config_name: default
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 38495.78647940967
num_examples: 1213
download_size: 16497
dataset_size: 38495.78647940967
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 66693.80069524913
num_examples: 1000
download_size: 46127
dataset_size: 66693.80069524913
configs:
- config_name: corpus
data_files:
- split: test
path: corpus/test-*
- config_name: default
data_files:
- split: test
path: data/test-*
- config_name: queries
data_files:
- split: test
path: queries/test-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">NQ-PLHardNegatives</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
Natural Questions: A Benchmark for Question Answering Research. The hard-negative version was created by pooling the top 250 documents per query from BM25, e5-multilingual-large and e5-mistral-instruct.
| | |
|---------------|---------------------------------------------|
| Task category | t2t |
| Domains | None |
| Reference | https://ai.google.com/research/NaturalQuestions/ |
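The pooling described above can be sketched as follows (illustrative only; the toy ranked lists stand in for real BM25 and e5 retrieval runs):
```python
from collections import defaultdict

TOP_K = 250  # documents kept per retriever per query, as in the description

# Toy ranked lists per retriever: {query_id: [doc_id, ...]} sorted by score.
runs = {
    "bm25": {"q1": ["d3", "d7", "d1"]},
    "e5-multilingual-large": {"q1": ["d7", "d9"]},
    "e5-mistral-instruct": {"q1": ["d2", "d3"]},
}

pooled = defaultdict(set)
for run in runs.values():
    for qid, ranking in run.items():
        pooled[qid].update(ranking[:TOP_K])

print(sorted(pooled["q1"]))  # ['d1', 'd2', 'd3', 'd7', 'd9']
```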
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["NQ-PLHardNegatives"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@misc{wojtasik2024beirpl,
archiveprefix = {arXiv},
author = {Konrad Wojtasik and Vadim Shishkin and Kacper Wołowiec and Arkadiusz Janz and Maciej Piasecki},
eprint = {2305.19840},
primaryclass = {cs.IR},
title = {BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language},
year = {2024},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
  author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("NQ-PLHardNegatives")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"test": {
"num_samples": 185765,
"number_of_characters": 113077430,
"num_documents": 184765,
"min_document_length": 5,
"average_document_length": 611.7449138094336,
"max_document_length": 14247,
"unique_documents": 184765,
"num_queries": 1000,
"min_query_length": 18,
"average_query_length": 48.381,
"max_query_length": 106,
"unique_queries": 1000,
"none_queries": 0,
"num_relevant_docs": 1213,
"min_relevant_docs_per_query": 1,
"average_relevant_docs_per_query": 1.213,
"max_relevant_docs_per_query": 4,
"unique_relevant_docs": 1213,
"num_instructions": null,
"min_instruction_length": null,
"average_instruction_length": null,
"max_instruction_length": null,
"unique_instructions": null,
"num_top_ranked": null,
"min_top_ranked_per_query": null,
"average_top_ranked_per_query": null,
"max_top_ranked_per_query": null
}
}
```
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
mteb/NFCorpus-PL | mteb | 2025-05-04T16:11:19Z | 15 | 0 | [
"task_categories:text-retrieval",
"multilinguality:monolingual",
"source_datasets:mteb/nfcorpus",
"language:pol",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2305.19840",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-retrieval"
] | 2025-02-05T18:35:26Z | null | ---
language:
- pol
multilinguality: monolingual
source_datasets:
- mteb/nfcorpus
task_categories:
- text-retrieval
task_ids: []
dataset_info:
- config_name: corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: test
num_bytes: 6391874
num_examples: 3633
download_size: 3631211
dataset_size: 6391874
- config_name: default
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 415220
num_examples: 12334
download_size: 79948
dataset_size: 415220
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 13938
num_examples: 323
download_size: 10996
dataset_size: 13938
configs:
- config_name: corpus
data_files:
- split: test
path: corpus/test-*
- config_name: default
data_files:
- split: test
path: data/test-*
- config_name: queries
data_files:
- split: test
path: queries/test-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">NFCorpus-PL</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
NFCorpus-PL: a Polish translation (from BEIR-PL) of NFCorpus, a full-text learning-to-rank dataset for medical information retrieval.
| | |
|---------------|---------------------------------------------|
| Task category | t2t |
| Domains | None |
| Reference | https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/ |
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["NFCorpus-PL"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@misc{wojtasik2024beirpl,
archiveprefix = {arXiv},
author = {Konrad Wojtasik and Vadim Shishkin and Kacper Wołowiec and Arkadiusz Janz and Maciej Piasecki},
eprint = {2305.19840},
primaryclass = {cs.IR},
title = {BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language},
year = {2024},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
  author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("NFCorpus-PL")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"test": {
"num_samples": 3956,
"number_of_characters": 6013927,
"num_documents": 3633,
"min_document_length": 110,
"average_document_length": 1653.1926782273604,
"max_document_length": 10705,
"unique_documents": 3633,
"num_queries": 323,
"min_query_length": 3,
"average_query_length": 24.390092879256965,
"max_query_length": 96,
"unique_queries": 323,
"none_queries": 0,
"num_relevant_docs": 12334,
"min_relevant_docs_per_query": 1,
"average_relevant_docs_per_query": 38.18575851393189,
"max_relevant_docs_per_query": 475,
"unique_relevant_docs": 3128,
"num_instructions": null,
"min_instruction_length": null,
"average_instruction_length": null,
"max_instruction_length": null,
"unique_instructions": null,
"num_top_ranked": null,
"min_top_ranked_per_query": null,
"average_top_ranked_per_query": null,
"max_top_ranked_per_query": null
}
}
```
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
mteb/norquad_retrieval | mteb | 2025-05-04T16:10:52Z | 16 | 0 | [
"task_categories:text-retrieval",
"task_ids:multiple-choice-qa",
"annotations_creators:derived",
"multilinguality:monolingual",
"language:nob",
"license:cc-by-sa-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-retrieval"
] | 2024-04-03T11:56:24Z | null | ---
annotations_creators:
- derived
language:
- nob
license: cc-by-sa-4.0
multilinguality: monolingual
task_categories:
- text-retrieval
task_ids:
- multiple-choice-qa
dataset_info:
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
splits:
- name: train
num_bytes: 2350752
num_examples: 1024
- name: val
num_bytes: 588509
num_examples: 256
- name: test
num_bytes: 4687874
num_examples: 2048
download_size: 3083620
dataset_size: 7627135
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">NorQuadRetrieval</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
Human-created questions for Norwegian Wikipedia passages.
| | |
|---------------|---------------------------------------------|
| Task category | t2t |
| Domains | Encyclopaedic, Non-fiction, Written |
| Reference | https://aclanthology.org/2023.nodalida-1.17/ |
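The records follow the SQuAD layout declared in the metadata: a `context` passage, a `question`, and an `answers` struct with answer texts and character offsets. A sketch of inspecting one record:
```python
from datasets import load_dataset

ds = load_dataset("mteb/norquad_retrieval", split="test")
example = ds[0]

print(example["question"])
print(example["answers"]["text"])          # answer strings
print(example["answers"]["answer_start"])  # character offsets into `context`
```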
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["NorQuadRetrieval"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@inproceedings{ivanova-etal-2023-norquad,
abstract = {In this paper we present NorQuAD: the first Norwegian question answering dataset for machine reading comprehension. The dataset consists of 4,752 manually created question-answer pairs. We here detail the data collection procedure and present statistics of the dataset. We also benchmark several multilingual and Norwegian monolingual language models on the dataset and compare them against human performance. The dataset will be made freely available.},
address = {T{\'o}rshavn, Faroe Islands},
author = {Ivanova, Sardana and
Andreassen, Fredrik and
Jentoft, Matias and
Wold, Sondre and
{\O}vrelid, Lilja},
booktitle = {Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)},
editor = {Alum{\"a}e, Tanel and
Fishel, Mark},
month = may,
pages = {159--168},
publisher = {University of Tartu Library},
title = {{N}or{Q}u{AD}: {N}orwegian Question Answering Dataset},
url = {https://aclanthology.org/2023.nodalida-1.17},
year = {2023},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
  author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("NorQuadRetrieval")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"test": {
"num_samples": 2072,
"number_of_characters": 273854,
"num_documents": 1048,
"min_document_length": 1,
"average_document_length": 214.5114503816794,
"max_document_length": 2606,
"unique_documents": 1048,
"num_queries": 1024,
"min_query_length": 11,
"average_query_length": 47.896484375,
"max_query_length": 100,
"unique_queries": 1024,
"none_queries": 0,
"num_relevant_docs": 2048,
"min_relevant_docs_per_query": 2,
"average_relevant_docs_per_query": 2.0,
"max_relevant_docs_per_query": 2,
"unique_relevant_docs": 1328,
"num_instructions": null,
"min_instruction_length": null,
"average_instruction_length": null,
"max_instruction_length": null,
"unique_instructions": null,
"num_top_ranked": null,
"min_top_ranked_per_query": null,
"average_top_ranked_per_query": null,
"max_top_ranked_per_query": null
}
}
```
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
mteb/mrtidy | mteb | 2025-05-04T16:10:44Z | 999 | 0 | [
"task_categories:text-retrieval",
"annotations_creators:human-annotated",
"multilinguality:multilingual",
"language:ara",
"language:ben",
"language:eng",
"language:fin",
"language:ind",
"language:jpn",
"language:kor",
"language:rus",
"language:swa",
"language:tel",
"language:tha",
"license:cc-by-sa-3.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2108.08787",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-retrieval"
] | 2024-09-29T11:14:01Z | null | ---
annotations_creators:
- human-annotated
language:
- ara
- ben
- eng
- fin
- ind
- jpn
- kor
- rus
- swa
- tel
- tha
license: cc-by-sa-3.0
multilinguality: multilingual
task_categories:
- text-retrieval
task_ids: []
dataset_info:
- config_name: arabic-corpus
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1245714994
num_examples: 2106586
download_size: 562700591
dataset_size: 1245714994
- config_name: arabic-qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: train
num_bytes: 10793262
num_examples: 374523
- name: dev
num_bytes: 87088
num_examples: 3115
- name: test
num_bytes: 36208
num_examples: 1257
download_size: 3781951
dataset_size: 10916558
- config_name: arabic-queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 831496
num_examples: 12377
- name: dev
num_bytes: 207878
num_examples: 3115
- name: test
num_bytes: 74696
num_examples: 1081
download_size: 621343
dataset_size: 1114070
- config_name: bengali-corpus
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 313718344
num_examples: 304059
download_size: 113214290
dataset_size: 313718344
- config_name: bengali-qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: train
num_bytes: 1410063
num_examples: 51861
- name: dev
num_bytes: 11764
num_examples: 443
- name: test
num_bytes: 3548
num_examples: 130
download_size: 479618
dataset_size: 1425375
- config_name: bengali-queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 233828
num_examples: 1713
- name: dev
num_bytes: 60547
num_examples: 440
- name: test
num_bytes: 16464
num_examples: 111
download_size: 135025
dataset_size: 310839
- config_name: english-corpus
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 13953017759
num_examples: 32907100
download_size: 8012208332
dataset_size: 13953017759
- config_name: english-qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: train
num_bytes: 3163945
num_examples: 108401
- name: dev
num_bytes: 24984
num_examples: 878
- name: test
num_bytes: 26591
num_examples: 935
download_size: 1201824
dataset_size: 3215520
- config_name: english-queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 180772
num_examples: 3547
- name: dev
num_bytes: 45396
num_examples: 878
- name: test
num_bytes: 39063
num_examples: 744
download_size: 179373
dataset_size: 265231
- config_name: finnish-corpus
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 749089306
num_examples: 1908757
download_size: 446059094
dataset_size: 749089306
- config_name: finnish-qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: train
num_bytes: 5510842
num_examples: 197338
- name: dev
num_bytes: 47540
num_examples: 1738
- name: test
num_bytes: 39723
num_examples: 1451
download_size: 1870069
dataset_size: 5598105
- config_name: finnish-queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 336371
num_examples: 6561
- name: dev
num_bytes: 89446
num_examples: 1738
- name: test
num_bytes: 63723
num_examples: 1254
download_size: 335034
dataset_size: 489540
- config_name: indonesian-corpus
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 545994305
num_examples: 1469399
download_size: 276886119
dataset_size: 545994305
- config_name: indonesian-qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: train
num_bytes: 4169295
num_examples: 148491
- name: dev
num_bytes: 33405
num_examples: 1224
- name: test
num_bytes: 26807
num_examples: 961
download_size: 1438860
dataset_size: 4229507
- config_name: indonesian-queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 241258
num_examples: 4902
- name: dev
num_bytes: 60988
num_examples: 1224
- name: test
num_bytes: 43451
num_examples: 829
download_size: 221700
dataset_size: 345697
- config_name: japanese-corpus
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: dev
num_bytes: 387783.4788097246
num_examples: 903
- name: test
num_bytes: 377906.3802353905
num_examples: 880
- name: train
num_bytes: 3006085074
num_examples: 7000027
download_size: 1692453219
dataset_size: 3006850763.859045
- config_name: japanese-qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: train
num_bytes: 3198715
num_examples: 112408
- name: dev
num_bytes: 25703
num_examples: 928
- name: test
num_bytes: 25819
num_examples: 923
download_size: 1127195
dataset_size: 3250237
- config_name: japanese-queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 236537
num_examples: 3697
- name: dev
num_bytes: 60007
num_examples: 928
- name: test
num_bytes: 47149
num_examples: 720
download_size: 210465
dataset_size: 343693
- config_name: korean-corpus
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 637416518
num_examples: 1496126
download_size: 364890926
dataset_size: 637416518
- config_name: korean-qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: train
num_bytes: 1083048
num_examples: 39485
- name: dev
num_bytes: 8121
num_examples: 307
- name: test
num_bytes: 13482
num_examples: 492
download_size: 386327
dataset_size: 1104651
- config_name: korean-queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 86921
num_examples: 1295
- name: dev
num_bytes: 20056
num_examples: 303
- name: test
num_bytes: 28967
num_examples: 421
download_size: 87049
dataset_size: 135944
- config_name: russian-corpus
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5965881322
num_examples: 9597504
download_size: 2799247131
dataset_size: 5965881322
- config_name: russian-qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: train
num_bytes: 4701206
num_examples: 163899
- name: dev
num_bytes: 38306
num_examples: 1375
- name: test
num_bytes: 33048
num_examples: 1168
download_size: 1735174
dataset_size: 4772560
- config_name: russian-queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 502545
num_examples: 5366
- name: dev
num_bytes: 128310
num_examples: 1375
- name: test
num_bytes: 97183
num_examples: 995
download_size: 413272
dataset_size: 728038
- config_name: swahili-corpus
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 34142833
num_examples: 136689
download_size: 17501035
dataset_size: 34142833
- config_name: swahili-qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: train
num_bytes: 1689502
num_examples: 63390
- name: dev
num_bytes: 16381
num_examples: 623
- name: test
num_bytes: 19984
num_examples: 743
download_size: 549763
dataset_size: 1725867
- config_name: swahili-queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 105403
num_examples: 2072
- name: dev
num_bytes: 26676
num_examples: 526
- name: test
num_bytes: 36137
num_examples: 670
download_size: 100853
dataset_size: 168216
- config_name: telugu-corpus
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 563646332
num_examples: 548224
download_size: 156110911
dataset_size: 563646332
- config_name: telugu-qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: train
num_bytes: 3249925
num_examples: 117793
- name: dev
num_bytes: 26450
num_examples: 983
- name: test
num_bytes: 18434
num_examples: 677
download_size: 964408
dataset_size: 3294809
- config_name: telugu-queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 446743
num_examples: 3880
- name: dev
num_bytes: 112859
num_examples: 983
- name: test
num_bytes: 73365
num_examples: 646
download_size: 253930
dataset_size: 632967
- config_name: thai-corpus
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: dev
num_bytes: 574630351
num_examples: 568855
- name: test
num_bytes: 574630351
num_examples: 568855
- name: train
num_bytes: 574630351
num_examples: 568855
download_size: 845293048
dataset_size: 1723891053
- config_name: thai-qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: train
num_bytes: 2775220
num_examples: 100472
- name: dev
num_bytes: 22050
num_examples: 817
- name: test
num_bytes: 37576
num_examples: 1368
download_size: 934865
dataset_size: 2834846
- config_name: thai-queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 443382
num_examples: 3319
- name: dev
num_bytes: 108804
num_examples: 807
- name: test
num_bytes: 158612
num_examples: 1190
download_size: 320820
dataset_size: 710798
configs:
- config_name: arabic-corpus
data_files:
- split: train
path: arabic-corpus/train-*
- config_name: arabic-qrels
data_files:
- split: train
path: arabic-qrels/train-*
- split: dev
path: arabic-qrels/dev-*
- split: test
path: arabic-qrels/test-*
- config_name: arabic-queries
data_files:
- split: train
path: arabic-queries/train-*
- split: dev
path: arabic-queries/dev-*
- split: test
path: arabic-queries/test-*
- config_name: bengali-corpus
data_files:
- split: train
path: bengali-corpus/train-*
- config_name: bengali-qrels
data_files:
- split: train
path: bengali-qrels/train-*
- split: dev
path: bengali-qrels/dev-*
- split: test
path: bengali-qrels/test-*
- config_name: bengali-queries
data_files:
- split: train
path: bengali-queries/train-*
- split: dev
path: bengali-queries/dev-*
- split: test
path: bengali-queries/test-*
- config_name: english-corpus
data_files:
- split: train
path: english-corpus/train-*
- config_name: english-qrels
data_files:
- split: train
path: english-qrels/train-*
- split: dev
path: english-qrels/dev-*
- split: test
path: english-qrels/test-*
- config_name: english-queries
data_files:
- split: train
path: english-queries/train-*
- split: dev
path: english-queries/dev-*
- split: test
path: english-queries/test-*
- config_name: finnish-corpus
data_files:
- split: train
path: finnish-corpus/train-*
- config_name: finnish-qrels
data_files:
- split: train
path: finnish-qrels/train-*
- split: dev
path: finnish-qrels/dev-*
- split: test
path: finnish-qrels/test-*
- config_name: finnish-queries
data_files:
- split: train
path: finnish-queries/train-*
- split: dev
path: finnish-queries/dev-*
- split: test
path: finnish-queries/test-*
- config_name: indonesian-corpus
data_files:
- split: train
path: indonesian-corpus/train-*
- config_name: indonesian-qrels
data_files:
- split: train
path: indonesian-qrels/train-*
- split: dev
path: indonesian-qrels/dev-*
- split: test
path: indonesian-qrels/test-*
- config_name: indonesian-queries
data_files:
- split: train
path: indonesian-queries/train-*
- split: dev
path: indonesian-queries/dev-*
- split: test
path: indonesian-queries/test-*
- config_name: japanese-corpus
data_files:
- split: train
path: japanese-corpus/train-*
- split: dev
path: japanese-corpus/dev-*
- split: test
path: japanese-corpus/test-*
- config_name: japanese-qrels
data_files:
- split: train
path: japanese-qrels/train-*
- split: dev
path: japanese-qrels/dev-*
- split: test
path: japanese-qrels/test-*
- config_name: japanese-queries
data_files:
- split: train
path: japanese-queries/train-*
- split: dev
path: japanese-queries/dev-*
- split: test
path: japanese-queries/test-*
- config_name: korean-corpus
data_files:
- split: train
path: korean-corpus/train-*
- config_name: korean-qrels
data_files:
- split: train
path: korean-qrels/train-*
- split: dev
path: korean-qrels/dev-*
- split: test
path: korean-qrels/test-*
- config_name: korean-queries
data_files:
- split: train
path: korean-queries/train-*
- split: dev
path: korean-queries/dev-*
- split: test
path: korean-queries/test-*
- config_name: russian-corpus
data_files:
- split: train
path: russian-corpus/train-*
- config_name: russian-qrels
data_files:
- split: train
path: russian-qrels/train-*
- split: dev
path: russian-qrels/dev-*
- split: test
path: russian-qrels/test-*
- config_name: russian-queries
data_files:
- split: train
path: russian-queries/train-*
- split: dev
path: russian-queries/dev-*
- split: test
path: russian-queries/test-*
- config_name: swahili-corpus
data_files:
- split: train
path: swahili-corpus/train-*
- config_name: swahili-qrels
data_files:
- split: train
path: swahili-qrels/train-*
- split: dev
path: swahili-qrels/dev-*
- split: test
path: swahili-qrels/test-*
- config_name: swahili-queries
data_files:
- split: train
path: swahili-queries/train-*
- split: dev
path: swahili-queries/dev-*
- split: test
path: swahili-queries/test-*
- config_name: telugu-corpus
data_files:
- split: train
path: telugu-corpus/train-*
- config_name: telugu-qrels
data_files:
- split: train
path: telugu-qrels/train-*
- split: dev
path: telugu-qrels/dev-*
- split: test
path: telugu-qrels/test-*
- config_name: telugu-queries
data_files:
- split: train
path: telugu-queries/train-*
- split: dev
path: telugu-queries/dev-*
- split: test
path: telugu-queries/test-*
- config_name: thai-corpus
data_files:
- split: train
path: thai-corpus/train-*
- split: dev
path: thai-corpus/dev-*
- split: test
path: thai-corpus/test-*
- config_name: thai-qrels
data_files:
- split: train
path: thai-qrels/train-*
- split: dev
path: thai-qrels/dev-*
- split: test
path: thai-qrels/test-*
- config_name: thai-queries
data_files:
- split: train
path: thai-queries/train-*
- split: dev
path: thai-queries/dev-*
- split: test
path: thai-queries/test-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">MrTidyRetrieval</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
Mr. TyDi is a multi-lingual benchmark dataset built on TyDi, covering eleven typologically diverse languages. It is designed for monolingual retrieval, specifically to evaluate ranking with learned dense representations.
| | |
|---------------|---------------------------------------------|
| Task category | t2t |
| Domains | Encyclopaedic, Written |
| Reference | https://huggingface.co/datasets/castorini/mr-tydi |
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["MrTidyRetrieval"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
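If you want to inspect the raw data outside of `mteb`, each language ships as separate `<language>-corpus`, `<language>-queries`, and `<language>-qrels` configs (see the YAML metadata above). A minimal loading sketch using the `datasets` library, with the Arabic configs as one example:
```python
from datasets import load_dataset

# Each language has three configs: <lang>-corpus, <lang>-queries, <lang>-qrels.
corpus = load_dataset("mteb/mrtidy", "arabic-corpus", split="train")
queries = load_dataset("mteb/mrtidy", "arabic-queries", split="test")
qrels = load_dataset("mteb/mrtidy", "arabic-qrels", split="test")

print(corpus[0])   # {'_id': ..., 'title': ..., 'text': ...}
print(queries[0])  # {'_id': ..., 'text': ...}
print(qrels[0])    # {'query-id': ..., 'corpus-id': ..., 'score': ...}
```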
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@article{mrtydi,
author = {Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
journal = {arXiv:2108.08787},
title = {{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
year = {2021},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("MrTidyRetrieval")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"test": {
"num_samples": 58051987,
"number_of_characters": 20129435459,
"num_documents": 58043326,
"min_document_length": 2,
"average_document_length": 346.79462777512094,
"max_document_length": 61659,
"unique_documents": 58043326,
"num_queries": 8661,
"min_query_length": 6,
"average_query_length": 37.15783396836393,
"max_query_length": 144,
"unique_queries": 8661,
"none_queries": 0,
"num_relevant_docs": 10105,
"min_relevant_docs_per_query": 1,
"average_relevant_docs_per_query": 1.1667243967209329,
"max_relevant_docs_per_query": 3,
"unique_relevant_docs": 8926,
"num_instructions": null,
"min_instruction_length": null,
"average_instruction_length": null,
"max_instruction_length": null,
"unique_instructions": null,
"num_top_ranked": null,
"min_top_ranked_per_query": null,
"average_top_ranked_per_query": null,
"max_top_ranked_per_query": null
}
}
```
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
mteb/fiqa | mteb | 2025-05-04T16:10:12Z | 1,762 | 2 | [
"task_categories:text-retrieval",
"task_ids:multiple-choice-qa",
"annotations_creators:human-annotated",
"multilinguality:monolingual",
"language:eng",
"license:unknown",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-retrieval"
] | 2024-03-02T20:35:06Z | null | ---
annotations_creators:
- human-annotated
language:
- eng
license: unknown
multilinguality: monolingual
task_categories:
- text-retrieval
task_ids:
- multiple-choice-qa
config_names:
- corpus
tags:
- mteb
- text
dataset_info:
- config_name: default
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 365642
num_examples: 14166
- name: dev
num_bytes: 31919
num_examples: 1238
- name: test
num_bytes: 43996
num_examples: 1706
- config_name: corpus
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: corpus
num_bytes: 45303212
num_examples: 57638
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: queries
num_bytes: 491278
num_examples: 6648
configs:
- config_name: default
data_files:
- split: train
path: qrels/train.jsonl
- split: dev
path: qrels/dev.jsonl
- split: test
path: qrels/test.jsonl
- config_name: corpus
data_files:
- split: corpus
path: corpus.jsonl
- config_name: queries
data_files:
- split: queries
path: queries.jsonl
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">FiQA2018</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
Financial Opinion Mining and Question Answering
| | |
|---------------|---------------------------------------------|
| Task category | t2t |
| Domains | Written, Financial |
| Reference | https://sites.google.com/view/fiqa/ |
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["FiQA2018"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
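Outside of `mteb`, the BEIR-style layout can be reassembled by hand: the `corpus` and `queries` configs hold the texts, while the default config holds the relevance judgements. A minimal sketch joining the test qrels into (query, relevant document) text pairs:
```python
from datasets import load_dataset

corpus = load_dataset("mteb/fiqa", "corpus", split="corpus")
queries = load_dataset("mteb/fiqa", "queries", split="queries")
qrels = load_dataset("mteb/fiqa", "default", split="test")  # relevance judgements

corpus_by_id = {doc["_id"]: doc["text"] for doc in corpus}
query_by_id = {q["_id"]: q["text"] for q in queries}

# Keep only positively judged (query, document) pairs.
pairs = [
    (query_by_id[row["query-id"]], corpus_by_id[row["corpus-id"]])
    for row in qrels
    if row["score"] > 0
]
print(len(pairs), pairs[0][0][:80])
```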
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@inproceedings{thakur2021beir,
author = {Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle = {Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
title = {{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
url = {https://openreview.net/forum?id=wCu6T5xFjeJ},
year = {2021},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("FiQA2018")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"train": {
"num_samples": 63138,
"number_of_characters": 44616372,
"num_documents": 57638,
"min_document_length": 1,
"average_document_length": 768.2108157812554,
"max_document_length": 16991,
"unique_documents": 57638,
"num_queries": 5500,
"min_query_length": 14,
"average_query_length": 61.49763636363636,
"max_query_length": 158,
"unique_queries": 5500,
"none_queries": 0,
"num_relevant_docs": 14166,
"min_relevant_docs_per_query": 1,
"average_relevant_docs_per_query": 2.5756363636363635,
"max_relevant_docs_per_query": 23,
"unique_relevant_docs": 14166,
"num_instructions": null,
"min_instruction_length": null,
"average_instruction_length": null,
"max_instruction_length": null,
"unique_instructions": null,
"num_top_ranked": null,
"min_top_ranked_per_query": null,
"average_top_ranked_per_query": null,
"max_top_ranked_per_query": null
},
"dev": {
"num_samples": 58138,
"number_of_characters": 44309513,
"num_documents": 57638,
"min_document_length": 1,
"average_document_length": 768.2108157812554,
"max_document_length": 16991,
"unique_documents": 57638,
"num_queries": 500,
"min_query_length": 15,
"average_query_length": 62.756,
"max_query_length": 166,
"unique_queries": 500,
"none_queries": 0,
"num_relevant_docs": 1238,
"min_relevant_docs_per_query": 1,
"average_relevant_docs_per_query": 2.476,
"max_relevant_docs_per_query": 20,
"unique_relevant_docs": 1238,
"num_instructions": null,
"min_instruction_length": null,
"average_instruction_length": null,
"max_instruction_length": null,
"unique_instructions": null,
"num_top_ranked": null,
"min_top_ranked_per_query": null,
"average_top_ranked_per_query": null,
"max_top_ranked_per_query": null
},
"test": {
"num_samples": 58286,
"number_of_characters": 44318767,
"num_documents": 57638,
"min_document_length": 1,
"average_document_length": 768.2108157812554,
"max_document_length": 16991,
"unique_documents": 57638,
"num_queries": 648,
"min_query_length": 16,
"average_query_length": 62.7037037037037,
"max_query_length": 147,
"unique_queries": 648,
"none_queries": 0,
"num_relevant_docs": 1706,
"min_relevant_docs_per_query": 1,
"average_relevant_docs_per_query": 2.632716049382716,
"max_relevant_docs_per_query": 15,
"unique_relevant_docs": 1706,
"num_instructions": null,
"min_instruction_length": null,
"average_instruction_length": null,
"max_instruction_length": null,
"unique_instructions": null,
"num_top_ranked": null,
"min_top_ranked_per_query": null,
"average_top_ranked_per_query": null,
"max_top_ranked_per_query": null
}
}
```
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
mteb/talemaader_pc | mteb | 2025-05-04T16:09:22Z | 2 | 0 | [
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"annotations_creators:derived",
"multilinguality:monolingual",
"language:dan",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-classification"
] | 2025-05-02T09:08:59Z | null | ---
annotations_creators:
- derived
language:
- dan
license: cc-by-4.0
multilinguality: monolingual
task_categories:
- text-classification
task_ids:
- semantic-similarity-classification
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
splits:
- name: test
num_bytes: 390123
num_examples: 4000
download_size: 173048
dataset_size: 390123
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">TalemaaderPC</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
The Danish Language and Literature Society has developed a dataset for evaluating language models in Danish.
The dataset contains a total of 1000 Danish idioms and fixed expressions with transferred meanings based on the Danish Dictionary's collection of fixed expressions with associated definitions.
For each of the 1000 idioms and fixed expressions, three false definitions have also been prepared.
The dataset can be used to test the performance of language models in identifying correct definitions for Danish idioms and fixed expressions.
| | |
|---------------|---------------------------------------------|
| Task category | t2t |
| Domains | Academic, Written |
| Reference | https://sprogteknologi.dk/dataset/1000-talemader-evalueringsdatasaet |
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["TalemaaderPC"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
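Since each of the 1,000 idioms is paired with one correct and three false definitions, the test split can be regrouped into four-way multiple-choice items. A minimal sketch, assuming the four pairs for an idiom share the same `sentence1` string verbatim:
```python
from collections import defaultdict
from datasets import load_dataset

ds = load_dataset("mteb/talemaader_pc", split="test")

# Group the 4,000 pairs into 1,000 idioms with four candidate definitions each.
by_idiom = defaultdict(list)
for row in ds:
    by_idiom[row["sentence1"]].append(row)

idiom, candidates = next(iter(by_idiom.items()))
print(idiom)
for c in candidates:
    print(c["label"], c["sentence2"][:60])  # label 1 marks the true definition
```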
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@misc{DSLDK1000Talemader,
author = {{Det Danske Sprog- og Litteraturselskab}},
howpublished = {Sprogteknologi.dk},
language = {Danish},
note = {CC-BY licensed dataset of 1000 Danish sayings and expressions},
publisher = {Digitaliseringsstyrelsen \& Det Danske Sprog- og Litteraturselskab},
title = {1000 danske talemåder - evalueringsdatasæt},
url = {https://sprogteknologi.dk/dataset/1000-talemader-evalueringsdatasaet},
year = {2024},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("TalemaaderPC")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"test": {
"num_samples": 4000,
"number_of_characters": 315501,
"unique_pairs": 4000,
"min_sentence1_length": 7,
"avg_sentence1_length": 23.801,
"max_sentence1_length": 95,
"unique_sentence1": 1000,
"min_sentence2_length": 2,
"avg_sentence2_length": 55.07425,
"max_sentence2_length": 242,
"unique_sentence2": 2990,
"unique_labels": 2,
"labels": {
"1": {
"count": 1000
},
"0": {
"count": 3000
}
}
}
}
```
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
mteb/UrduRomanSentimentClassification | mteb | 2025-05-04T16:08:30Z | 27 | 0 | [
"task_categories:text-classification",
"task_ids:sentiment-analysis",
"task_ids:sentiment-scoring",
"task_ids:sentiment-classification",
"task_ids:hate-speech-detection",
"annotations_creators:derived",
"multilinguality:monolingual",
"language:urd",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-classification"
] | 2024-12-21T12:09:02Z | null | ---
annotations_creators:
- derived
language:
- urd
license: mit
multilinguality: monolingual
task_categories:
- text-classification
task_ids:
- sentiment-analysis
- sentiment-scoring
- sentiment-classification
- hate-speech-detection
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 166310
num_examples: 2048
download_size: 126737
dataset_size: 166310
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">UrduRomanSentimentClassification</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
The Roman Urdu dataset is a corpus of more than 20,000 records tagged for sentiment (Positive, Negative, or Neutral).
| | |
|---------------|---------------------------------------------|
| Task category | t2c |
| Domains | Social, Written |
| Reference | https://archive.ics.uci.edu/dataset/458/roman+urdu+data+set |
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["UrduRomanSentimentClassification"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
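To inspect the raw examples and the class balance directly, a minimal sketch (the integer labels encode the three sentiment classes, but the label-to-name mapping is not documented on this card):
```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("mteb/UrduRomanSentimentClassification", split="train")

print(ds[0])                 # {'text': ..., 'label': ...}
print(Counter(ds["label"]))  # class balance, e.g. Counter({2: 904, 0: 609, 1: 535})
```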
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@misc{misc_roman_urdu_data_set_458,
author = {Sharf, Zareen},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: https://doi.org/10.24432/C58325},
title = {{Roman Urdu Data Set}},
year = {2018},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("UrduRomanSentimentClassification")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"train": {
"num_samples": 2048,
"number_of_characters": 140832,
"number_texts_intersect_with_train": null,
"min_text_length": 1,
"average_text_length": 68.765625,
"max_text_length": 936,
"unique_text": 2038,
"unique_labels": 3,
"labels": {
"2": {
"count": 904
},
"1": {
"count": 535
},
"0": {
"count": 609
}
}
}
}
```
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
liyitenga/lekiwi2_giveme5_1 | liyitenga | 2025-05-04T15:25:18Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot",
"lekiwi",
"lekiwi_action"
] | [
"robotics"
] | 2025-05-04T15:23:52Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- lekiwi
- lekiwi_action
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "lekiwi2",
"total_episodes": 20,
"total_frames": 13259,
"total_tasks": 1,
"total_videos": 40,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:20"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
15
],
"names": [
"left_shoulder_pan",
"left_shoulder_lift",
"left_elbow_flex",
"left_wrist_flex",
"left_wrist_roll",
"left_gripper",
"right_shoulder_pan",
"right_shoulder_lift",
"right_elbow_flex",
"right_wrist_flex",
"right_wrist_roll",
"right_gripper",
"x_mm",
"y_mm",
"theta"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
15
],
"names": [
"left_shoulder_pan",
"left_shoulder_lift",
"left_elbow_flex",
"left_wrist_flex",
"left_wrist_roll",
"left_gripper",
"right_shoulder_pan",
"right_shoulder_lift",
"right_elbow_flex",
"right_wrist_flex",
"right_wrist_roll",
"right_gripper",
"x_mm",
"y_mm",
"theta"
]
},
"observation.images.front": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
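To replay the episodes in Python, the dataset can be loaded with LeRobot's dataset class. A minimal sketch, assuming a recent `lerobot` install (the import path has moved between versions):
```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

ds = LeRobotDataset("liyitenga/lekiwi2_giveme5_1")
print(ds.num_episodes, ds.fps)  # 20 episodes recorded at 30 fps

frame = ds[0]
print(frame["action"].shape)             # 15-dim action, see the feature list above
print(frame["observation.state"].shape)  # 15-dim state
```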
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
shylee/eval_DP_cube_downDims1_cropNo_freeze1_16_16_ema0_1e-4_ckpt180000 | shylee | 2025-05-04T15:01:17Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-05-04T15:01:11Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 321,
"total_tasks": 1,
"total_videos": 3,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.FrontCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.TopCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.WristCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
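Because the YAML header exposes the episode parquet files directly (`data_files: data/*/*.parquet`), the tabular part can also be read with plain `datasets` and no `lerobot` install. Note that the three camera streams live in separate MP4 files and are not included this way:
```python
from datasets import load_dataset

# Loads only actions, states and timestamps; videos stay in videos/*.mp4.
ds = load_dataset(
    "shylee/eval_DP_cube_downDims1_cropNo_freeze1_16_16_ema0_1e-4_ckpt180000",
    split="train",
)
print(len(ds))          # 321 frames from the single recorded episode
print(ds[0]["action"])  # 6-dim action for the so100 arm
```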
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
Mohsen21/Updated_EAD_MALE_FEMALE_DATA | Mohsen21 | 2025-05-04T14:07:38Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T14:06:45Z | null | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 1200893246.0
num_examples: 730
download_size: 1170770119
dataset_size: 1200893246.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
littleGuagua/x_dataset_48558 | littleGuagua | 2025-05-04T13:13:16Z | 1,014 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-26T14:58:15Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** littleGuagua/x_dataset_48558
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5ERFRy1NBaxrJ8WpkjKeWwgx79NxiVoEqmL3m5tEWsDHwjtD
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: Datasets are mostly English, but can be multilingual due to the decentralized nature of data collection.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
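For example, a time-based split can be derived from the `datetime` field. A minimal sketch; the exact timestamp format is an assumption here, so inspect a few rows first (and consider `streaming=True` given the dataset's size):
```python
from datasets import load_dataset

ds = load_dataset("littleGuagua/x_dataset_48558", split="train")

cutoff = "2025-02-01"  # assumed ISO-like date strings; check ds[0]["datetime"]
train = ds.filter(lambda row: row["datetime"] < cutoff)
test = ds.filter(lambda row: row["datetime"] >= cutoff)
print(len(train), len(test))
```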
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to the X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{littleGuagua2025datauniversex_dataset_48558,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={littleGuagua},
year={2025},
url={https://huggingface.co/datasets/littleGuagua/x_dataset_48558},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 46034420
- **Date Range:** 2025-01-21T00:00:00Z to 2025-02-11T00:00:00Z
- **Last Updated:** 2025-02-18T22:02:52Z
### Data Distribution
- Tweets with hashtags: 36.14%
- Tweets without hashtags: 63.86%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 29399695 | 63.86% |
| 2 | #riyadh | 280983 | 0.61% |
| 3 | #zelena | 221844 | 0.48% |
| 4 | #tiktok | 163569 | 0.36% |
| 5 | #bbb25 | 128439 | 0.28% |
| 6 | #ad | 96138 | 0.21% |
| 7 | #bbmzansi | 59564 | 0.13% |
| 8 | #jhope_at_galadespiècesjaunes | 58496 | 0.13% |
| 9 | #granhermano | 52866 | 0.11% |
| 10 | #pr | 50398 | 0.11% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-26T14:58:37Z | 837393 | 837393 |
| 2025-01-30T03:14:08Z | 8566588 | 9403981 |
| 2025-02-02T15:17:25Z | 8569868 | 17973849 |
| 2025-02-06T03:21:34Z | 10709950 | 28683799 |
| 2025-02-09T15:24:35Z | 7218900 | 35902699 |
| 2025-02-13T03:32:24Z | 8679209 | 44581908 |
| 2025-02-18T07:01:46Z | 795937 | 45377845 |
| 2025-02-18T22:02:52Z | 656575 | 46034420 |
|
ShoAnn/legalqa_klinik_hukumonline | ShoAnn | 2025-05-04T13:01:51Z | 117 | 1 | [
"language:id",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-11T02:29:44Z | null | ---
language:
- id
license: apache-2.0
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context
list:
- name: full_text
dtype: string
- name: id
dtype: int64
- name: name
dtype: string
splits:
- name: train
num_bytes: 12220706
num_examples: 1006
- name: test
num_bytes: 1393760
num_examples: 112
download_size: 5169384
dataset_size: 13614466
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Pertanyaan-Jawaban-Dasar_Hukum (Question-Answer-Legal_Basis)
## Topic
Hukum Pidana Republik Indonesia (Criminal Law)
## Date range
November 2013 - October 2024
## Source
Question-Answer-Legal basis reference: [Klinik Hukumonline.com](https://www.hukumonline.com/klinik/pidana) \
Legal basis sources: Indonesian official statutory documents (from sites such as [Peraturan BPK](https://peraturan.bpk.go.id/), [Peraturan.go.id](peraturan.go.id), [Paralegal](paralegal.id), etc.), books, and others (articles, experts' opinions, etc.) |
HungVu2003/opt-350m_beta_0.0_alpha_0.2_num-company_2_dataset_1_for_gen_8_v2 | HungVu2003 | 2025-05-04T12:26:22Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-04T12:26:20Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2445375
num_examples: 13750
download_size: 984172
dataset_size: 2445375
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yunusserhat/tea_sickness_dataset | yunusserhat | 2025-05-04T09:45:50Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-04T08:03:08Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Anthracnose
'1': algal_leaf
'2': bird_eye_spot
'3': brown_blight
'4': gray_light
'5': healthy
'6': red_leaf_spot
'7': white_spot
splits:
- name: train
num_bytes: 622082045.0
num_examples: 708
- name: validation
num_bytes: 79334678.0
num_examples: 88
- name: test
num_bytes: 79495048.0
num_examples: 89
download_size: 780933256
dataset_size: 780911771.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Tea Sickness Dataset
This dataset contains images of tea leaves affected by seven common diseases, along with healthy examples. It is designed for use in machine learning tasks such as classification, particularly leveraging transfer learning techniques.
## Dataset Summary
The dataset includes tea leaves from eight categories:
| Label ID | Class Name |
|----------|------------------|
| 0 | Anthracnose |
| 1 | Algal Leaf Spot |
| 2 | Bird Eye Spot |
| 3 | Brown Blight |
| 4 | Gray Blight |
| 5 | Healthy |
| 6 | Red Leaf Spot |
| 7 | White Spot |
Each class contains more than 100 images, captured from tea plants on the Johnstone Boiyon farm, Koiwa location, Bomet County, using tea clone 1510. The dataset is useful for developing models that can predict the presence of diseases in tea leaves, particularly in agricultural and sustainability contexts.
## Dataset Structure
### Features
- `image`: An RGB image of a tea leaf (PIL Image format)
- `label`: Class label as integer (0-7) with corresponding disease names
### Splits
| Split | Number of Examples | Size (Bytes) |
|-------------|--------------------|--------------|
| Train | 708 | 622,082,045 |
| Validation | 88 | 79,334,678 |
| Test | 89 | 79,495,048 |
| **Total** | **885** | **780,911,771** |
## Usage
This dataset is suitable for the following tasks (a loading sketch follows this list):
- Fine-tuning image classification models
- Research on plant disease detection
- Transfer learning and domain adaptation experiments
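A minimal loading sketch with the `datasets` library (the split names and features follow the tables above):

```python
from datasets import load_dataset

ds = load_dataset("yunusserhat/tea_sickness_dataset")

# One example: a PIL image plus an integer label (0-7).
example = ds["train"][0]
print(example["image"].size, example["label"])

# Recover the human-readable class name from the label id.
names = ds["train"].features["label"].names
print(names[example["label"]])
```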
## Citation
If you use this dataset in your research, please cite it as:
```bibtex
@article{kimutai2022tea,
title = {Tea sickness dataset},
author = {Kimutai, Gibson and Fรถrster, Anna},
journal = {Mendeley Data},
volume = {2},
year = {2022},
doi = {10.17632/j32xdt2ff5.2}
}
```
## License
**CC BY 4.0**: You are free to use, share, and adapt the dataset, provided appropriate credit is given.
## Acknowledgements
This dataset was collected by researchers from the University of Rwanda and the University of Bremen. We thank the contributors for making the dataset publicly available to advance agricultural AI applications.
|
alchemistyzz/mathverse_test | alchemistyzz | 2025-05-04T07:35:32Z | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T09:58:31Z | null | ---
license: apache-2.0
---
|
kaikkk/eval_act_so100_test1 | kaikkk | 2025-05-04T06:26:29Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-05-04T06:24:50Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 3,
"total_frames": 1184,
"total_tasks": 1,
"total_videos": 6,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:3"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
12
],
"names": [
"left_shoulder_pan",
"left_shoulder_lift",
"left_elbow_flex",
"left_wrist_flex",
"left_wrist_roll",
"left_gripper",
"right_shoulder_pan",
"right_shoulder_lift",
"right_elbow_flex",
"right_wrist_flex",
"right_wrist_roll",
"right_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
12
],
"names": [
"left_shoulder_pan",
"left_shoulder_lift",
"left_elbow_flex",
"left_wrist_flex",
"left_wrist_roll",
"left_gripper",
"right_shoulder_pan",
"right_shoulder_lift",
"right_elbow_flex",
"right_wrist_flex",
"right_wrist_roll",
"right_gripper"
]
},
"observation.images.top": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.right": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
abehandlerorg/ccnews_blocks | abehandlerorg | 2025-05-04T06:21:33Z | 0 | 0 | [
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T17:31:57Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: url
dtype: string
- name: crawl_date
dtype: timestamp[ns, tz=UTC]
- name: tokens
sequence: uint16
splits:
- name: train
num_bytes: 517287642756.7698
num_examples: 104657712
download_size: 357497372525
dataset_size: 517287642756.7698
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
cchoi1/kodcode-complete_1000_qwen7b_att_iter0_att40_sol5_dedup_diversity | cchoi1 | 2025-05-04T02:55:42Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-04T00:10:23Z | null | ---
dataset_info:
features:
- name: mutation_id
dtype: int64
- name: task_id
dtype: string
- name: mutator_prompt
dtype: string
- name: solver_prompt
dtype: string
- name: response
dtype: string
- name: mutation_explanation
dtype: string
- name: mutation_info
dtype: string
- name: mutator_score
dtype: float64
- name: solution_scores
dtype: string
- name: solutions
dtype: string
- name: solutions_explanation
dtype: string
- name: solutions_info
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 58593784
num_examples: 4966
download_size: 12338384
dataset_size: 58593784
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.2_num-company_2_dataset_1_for_gen_5_v2 | HungVu2003 | 2025-05-04T02:34:54Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-04T02:34:53Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 6434870
num_examples: 13750
download_size: 3265326
dataset_size: 6434870
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/f1_avg_domain | mlfoundations-dev | 2025-05-04T01:58:57Z | 9 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-04T01:54:40Z | null | ---
dataset_info:
features:
- name: instruction_seed
dtype: string
- name: _source
dtype: string
- name: gpt41_mini_response
dtype: string
- name: __original_row_idx
dtype: int64
- name: length
dtype: int64
- name: domain
dtype: string
- name: r1_response
dtype: string
- name: r1_reasoning_content
dtype: string
- name: extract_solution
dtype: string
- name: url
dtype: string
- name: filename
dtype: string
- name: success
dtype: bool
- name: page_count
dtype: int64
- name: page_number
dtype: int64
- name: question_choices_solutions
dtype: string
- name: extracted_question
dtype: string
- name: extracted_answer_choices
sequence: string
- name: matched_solution
dtype: string
- name: qa_validation_outputs
dtype: bool
- name: classifier_reasoning
dtype: string
- name: is_organic_chemistry
dtype: bool
- name: ms_id
dtype: int64
- name: reasoning
dtype: string
- name: deepseek_solution
dtype: string
- name: final_reasoning_trace
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: id
dtype: string
- name: output
dtype: string
- name: source
dtype: string
- name: license
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
- name: difficulty
dtype: int64
- name: solution
dtype: string
- name: index
dtype: string
- name: difficulty_reasoning
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: response_seed
dtype: string
splits:
- name: train
num_bytes: 12328252550.0
num_examples: 94797
download_size: 5254951315
dataset_size: 12328252550.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.6_num-company_3_dataset_2_for_gen_15 | HungVu2003 | 2025-05-04T01:43:48Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-04T01:43:47Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 4007928
num_examples: 12500
download_size: 1246351
dataset_size: 4007928
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.2_num-company_2_dataset_0_for_gen_4_v2 | HungVu2003 | 2025-05-04T00:48:22Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-04T00:48:20Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2084033
num_examples: 13750
download_size: 1150111
dataset_size: 2084033
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.2_num-company_2_dataset_0_for_gen_3_v2 | HungVu2003 | 2025-05-03T23:40:05Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T23:40:04Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2085177
num_examples: 13750
download_size: 1168455
dataset_size: 2085177
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Bakovic/chatbot_medical_diabetique | Bakovic | 2025-05-03T23:34:34Z | 0 | 0 | [
"license:intel-research",
"region:us"
] | [] | 2025-05-03T23:32:29Z | null | ---
license: intel-research
---
|
thavens/judged_responses | thavens | 2025-05-03T23:05:37Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T23:05:35Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: new_instruction
dtype: string
- name: instruction_judgment
dtype: bool
- name: user_query_judgment
dtype: bool
- name: assistant_response_judgment
dtype: bool
- name: model
dtype: string
splits:
- name: train
num_bytes: 311128
num_examples: 80
download_size: 181418
dataset_size: 311128
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Triangle104/jondurbin_gutenberg-dpo-v0.1 | Triangle104 | 2025-05-03T22:42:58Z | 0 | 0 | [
"language:en",
"license:cc-by-4.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"dpo"
] | [] | 2025-05-03T22:42:58Z | null | ---
license: cc-by-4.0
language:
- en
tags:
- dpo
pretty_name: Gutenberg DPO
size_categories:
- n<1K
---
# Gutenberg DPO

## Overview
This is a dataset meant to enhance novel writing capabilities of LLMs, by using public domain books from [Project Gutenberg](https://gutenberg.org/)
## Process
First, each book is parsed, split into chapters, and cleaned up from the original format (removing superfluous newlines, illustration tags, etc.).
Once we have chapters, an LLM is prompted with each chapter to create a synthetic prompt that would result in that chapter being written.
A summary is also created for each chapter, so that the prompts for every chapter after the first also include a summary of the previous chapter to provide additional context.
We then use the synthetic prompt with the previous-chapter summary to write the chapter with an LLM (llama-2-13b-chat, bagel-7b-v0.1, dolphin-2.2-34b).
The human-written text, that is, the original chapter, is used as the "chosen" value, and the LLM-written chapter is used as the "rejected" value.
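As a hedged sketch, one such preference record could be assembled as follows; the field names (`prompt`, `chosen`, `rejected`) follow common DPO conventions and are illustrative rather than the dataset's guaranteed schema:

```python
def build_dpo_pair(synthetic_prompt, prev_summary, human_chapter, llm_chapter):
    """Assemble one DPO record: the human chapter is chosen, the LLM chapter rejected."""
    prompt = f"{prev_summary}\n\n{synthetic_prompt}" if prev_summary else synthetic_prompt
    return {
        "prompt": prompt,
        "chosen": human_chapter,    # original, human-written chapter
        "rejected": llm_chapter,    # chapter written by an LLM
    }
```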
## Books used
These books were chosen mainly because they appeared in the popular section on Project Gutenberg, and they function correctly with the chapterize library.
- Huckleberry Finn
- Treasure Island
- Anna Karenina
- Uncle Tom's Cabin
- Wuthering Heights
- Madame Bovary
- The Turn of the Screw
- The War of the Worlds
- A Study in Scarlet
- Middlemarch
- Pride and Prejudice
- The Brothers Karamazov
- Through the Looking Glass
- Moby Dick
- Frankenstein
- A Tale of Two Cities |
kothasuhas/llp-gold-37m-1.5m_clip0.256_T1.0 | kothasuhas | 2025-05-03T22:39:09Z | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T22:35:25Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: p_log_probs
dtype: float32
- name: q_log_probs
dtype: float32
- name: num_tokens
dtype: float32
- name: log_weight
dtype: float64
splits:
- name: train
num_bytes: 3605804917.0
num_examples: 1500000
download_size: 197960374
dataset_size: 3605804917.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/e1_code_fasttext_qwq | mlfoundations-dev | 2025-05-03T21:54:33Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T21:53:31Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: instruction_seed
dtype: string
- name: output
dtype: string
- name: source
dtype: string
- name: license
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
- name: difficulty
dtype: int64
- name: solution
dtype: string
- name: index
dtype: string
- name: _source
dtype: string
- name: difficulty_reasoning
dtype: string
- name: __original_row_idx
dtype: int64
- name: ms_id
dtype: int64
- name: final_reasoning_traces
dtype: string
- name: question_answer_string
dtype: string
- name: _fasttext_score
dtype: float64
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 2159333511.0
num_examples: 31600
download_size: 1018311574
dataset_size: 2159333511.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kothasuhas/llp-gold-37m-1.5m_clip0.016_T2048.0_I2048 | kothasuhas | 2025-05-03T21:38:32Z | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T21:37:33Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: p_log_probs
dtype: float32
- name: q_log_probs
dtype: float32
- name: num_tokens
dtype: float32
- name: log_weight
dtype: float64
splits:
- name: train
num_bytes: 3605804917.0
num_examples: 1500000
download_size: 183741412
dataset_size: 3605804917.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_0.2_num-company_2_dataset_0_for_gen_1_v2 | HungVu2003 | 2025-05-03T21:09:44Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T21:09:43Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 1785673
num_examples: 13750
download_size: 1030631
dataset_size: 1785673
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
anonymousEcaiHateLLM/7_human_dataset | anonymousEcaiHateLLM | 2025-05-03T20:42:05Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T20:41:58Z | null | ---
dataset_info:
features:
- name: language
dtype: string
- name: text
dtype: string
- name: multi_label_id
dtype: int64
- name: ds
dtype: string
- name: label_id
dtype: int64
splits:
- name: main
num_bytes: 12507865
num_examples: 85918
download_size: 6596717
dataset_size: 12507865
configs:
- config_name: default
data_files:
- split: main
path: data/main-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.0_num-company_2_dataset_1_for_gen_17_v2 | HungVu2003 | 2025-05-03T20:31:35Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T20:31:32Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 6606404
num_examples: 12500
download_size: 3374561
dataset_size: 6606404
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.0_num-company_2_dataset_0_for_gen_14_v2 | HungVu2003 | 2025-05-03T20:25:36Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T20:25:35Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 1147137
num_examples: 12500
download_size: 700506
dataset_size: 1147137
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.6_num-company_3_dataset_0_for_gen_14 | HungVu2003 | 2025-05-03T20:24:20Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T20:24:19Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 7153405
num_examples: 12500
download_size: 1910106
dataset_size: 7153405
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kenwaylabs/OpenHermes-2.5-CoT | kenwaylabs | 2025-05-03T20:13:32Z | 14 | 0 | [
"license:other",
"region:us"
] | [] | 2025-01-29T14:27:54Z | null | ---
license: other
license_name: fafo
license_link: LICENSE
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: custom_instruction
dtype: bool
- name: topic
dtype: string
- name: model_name
dtype: string
- name: model
dtype: float64
- name: skip_prompt_formatting
dtype: bool
- name: category
dtype: string
- name: conversations
dtype: string
- name: views
dtype: float64
- name: language
dtype: string
- name: id
dtype: string
- name: title
dtype: string
- name: idx
dtype: float64
- name: hash
dtype: float64
- name: avatarUrl
dtype: float64
- name: system_prompt
dtype: string
- name: source
dtype: string
- name: Instruction
dtype: string
- name: input
dtype: string
- name: chain_of_thought
dtype: string
- name: output
dtype: string
- name: instruct_prompt
dtype: string
splits:
- name: train
num_bytes: 2795451691
num_examples: 225000
download_size: 1311012553
dataset_size: 2795451691
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.0_num-company_2_dataset_0_for_gen_6_v2 | HungVu2003 | 2025-05-03T20:09:58Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T20:09:56Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 1151427
num_examples: 12500
download_size: 701524
dataset_size: 1151427
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Bretagne/Autogramm_Breton | Bretagne | 2025-05-03T19:44:08Z | 15 | 0 | [
"task_categories:token-classification",
"language:br",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"token-classification"
] | 2024-10-29T18:32:28Z | null | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: pos_tags
sequence: string
- name: dataset
dtype: string
splits:
- name: train
num_bytes: 2139340
num_examples: 12301
download_size: 512928
dataset_size: 2139340
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- token-classification
language:
- br
---
## Description
Parsing of the [bretonconlls](https://github.com/Autogramm/Breton/tree/main/bretonconlls) data from [Autogramm](https://github.com/Autogramm/Breton), to make it easier to use.
This repository covers only the POS tags. For the Breton/French translation part, see [Bretagne/Autogramm_Breton_translation](https://huggingface.co/datasets/Bretagne/Autogramm_Breton_translation).
The data files contained "`status = WIP`", probably indicating that they may currently contain errors.
A new parsing run will be performed once the data is finalized.
You can filter the data by Breton variety via the "`dataset`" column of the dataset.
## Citation
No citation provided.
However, since the data is based on [Arbres](https://arbres.iker.cnrs.fr/index.php?title=Arbres:Le_site_de_grammaire_du_breton),
the site can be cited as follows:
```
Jouitteau, Mélanie. (éd.). 2009-2024.
ARBRES, wikigrammaire des dialectes du breton et centre de ressources pour son étude linguistique formelle,
IKER, CNRS, http://arbres.iker.cnrs.fr.
Licence Creative Commons BY-NC-SA.
``` |
TheRealPilot638/Olmo-1B-0724-best_of_16_H200 | TheRealPilot638 | 2025-05-03T19:40:53Z | 3 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-28T17:28:23Z | null | ---
dataset_info:
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--seed-0--agg_strategy-last
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: subject
dtype: string
- name: level
dtype: int64
- name: unique_id
dtype: string
- name: completions
sequence: string
- name: scores
sequence:
sequence: float64
- name: pred
dtype: string
- name: completion_tokens
sequence: int64
- name: agg_scores
sequence: float64
- name: pred_weighted@1
dtype: string
- name: pred_maj@1
dtype: string
- name: pred_naive@1
dtype: string
- name: pred_weighted@2
dtype: string
- name: pred_maj@2
dtype: string
- name: pred_naive@2
dtype: string
- name: pred_weighted@4
dtype: string
- name: pred_maj@4
dtype: string
- name: pred_naive@4
dtype: string
- name: pred_weighted@8
dtype: string
- name: pred_maj@8
dtype: string
- name: pred_naive@8
dtype: string
- name: pred_weighted@16
dtype: string
- name: pred_maj@16
dtype: string
- name: pred_naive@16
dtype: string
splits:
- name: train
num_bytes: 42103505
num_examples: 500
download_size: 10030845
dataset_size: 42103505
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--seed-0--agg_strategy-last--evals
features:
- name: n
dtype: int64
- name: acc_naive
dtype: float64
- name: acc_weighted
dtype: float64
- name: acc_maj
dtype: float64
splits:
- name: train
num_bytes: 128
num_examples: 4
download_size: 2017
dataset_size: 128
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--seed-1--agg_strategy-last
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: subject
dtype: string
- name: level
dtype: int64
- name: unique_id
dtype: string
- name: completions
sequence: string
- name: scores
sequence:
sequence: float64
- name: pred
dtype: string
- name: completion_tokens
sequence: int64
- name: agg_scores
sequence: float64
- name: pred_weighted@1
dtype: string
- name: pred_maj@1
dtype: string
- name: pred_naive@1
dtype: string
- name: pred_weighted@2
dtype: string
- name: pred_maj@2
dtype: string
- name: pred_naive@2
dtype: string
- name: pred_weighted@4
dtype: string
- name: pred_maj@4
dtype: string
- name: pred_naive@4
dtype: string
- name: pred_weighted@8
dtype: string
- name: pred_maj@8
dtype: string
- name: pred_naive@8
dtype: string
- name: pred_weighted@16
dtype: string
- name: pred_maj@16
dtype: string
- name: pred_naive@16
dtype: string
splits:
- name: train
num_bytes: 42765549
num_examples: 500
download_size: 9935948
dataset_size: 42765549
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--seed-1--agg_strategy-last--evals
features:
- name: n
dtype: int64
- name: acc_naive
dtype: float64
- name: acc_weighted
dtype: float64
- name: acc_maj
dtype: float64
splits:
- name: train
num_bytes: 128
num_examples: 4
download_size: 2032
dataset_size: 128
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--seed-2--agg_strategy-last
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: subject
dtype: string
- name: level
dtype: int64
- name: unique_id
dtype: string
- name: completions
sequence: string
- name: scores
sequence:
sequence: float64
- name: pred
dtype: string
- name: completion_tokens
sequence: int64
- name: agg_scores
sequence: float64
- name: pred_weighted@1
dtype: string
- name: pred_maj@1
dtype: string
- name: pred_naive@1
dtype: string
- name: pred_weighted@2
dtype: string
- name: pred_maj@2
dtype: string
- name: pred_naive@2
dtype: string
- name: pred_weighted@4
dtype: string
- name: pred_maj@4
dtype: string
- name: pred_naive@4
dtype: string
- name: pred_weighted@8
dtype: string
- name: pred_maj@8
dtype: string
- name: pred_naive@8
dtype: string
- name: pred_weighted@16
dtype: string
- name: pred_maj@16
dtype: string
- name: pred_naive@16
dtype: string
splits:
- name: train
num_bytes: 42160208
num_examples: 500
download_size: 9858235
dataset_size: 42160208
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--seed-2--agg_strategy-last--evals
features:
- name: n
dtype: int64
- name: acc_naive
dtype: float64
- name: acc_weighted
dtype: float64
- name: acc_maj
dtype: float64
splits:
- name: train
num_bytes: 128
num_examples: 4
download_size: 2027
dataset_size: 128
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--seed-3--agg_strategy-last
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: subject
dtype: string
- name: level
dtype: int64
- name: unique_id
dtype: string
- name: completions
sequence: string
- name: scores
sequence:
sequence: float64
- name: pred
dtype: string
- name: completion_tokens
sequence: int64
- name: agg_scores
sequence: float64
- name: pred_weighted@1
dtype: string
- name: pred_maj@1
dtype: string
- name: pred_naive@1
dtype: string
- name: pred_weighted@2
dtype: string
- name: pred_maj@2
dtype: string
- name: pred_naive@2
dtype: string
- name: pred_weighted@4
dtype: string
- name: pred_maj@4
dtype: string
- name: pred_naive@4
dtype: string
- name: pred_weighted@8
dtype: string
- name: pred_maj@8
dtype: string
- name: pred_naive@8
dtype: string
- name: pred_weighted@16
dtype: string
- name: pred_maj@16
dtype: string
- name: pred_naive@16
dtype: string
splits:
- name: train
num_bytes: 42794778
num_examples: 500
download_size: 9961797
dataset_size: 42794778
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--seed-3--agg_strategy-last--evals
features:
- name: n
dtype: int64
- name: acc_naive
dtype: float64
- name: acc_weighted
dtype: float64
- name: acc_maj
dtype: float64
splits:
- name: train
num_bytes: 128
num_examples: 4
download_size: 2012
dataset_size: 128
configs:
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--seed-0--agg_strategy-last
data_files:
- split: train
path: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--seed-0--agg_strategy-last/train-*
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--seed-0--agg_strategy-last--evals
data_files:
- split: train
path: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--seed-0--agg_strategy-last--evals/train-*
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--seed-1--agg_strategy-last
data_files:
- split: train
path: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--seed-1--agg_strategy-last/train-*
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--seed-1--agg_strategy-last--evals
data_files:
- split: train
path: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--seed-1--agg_strategy-last--evals/train-*
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--seed-2--agg_strategy-last
data_files:
- split: train
path: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--seed-2--agg_strategy-last/train-*
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--seed-2--agg_strategy-last--evals
data_files:
- split: train
path: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--seed-2--agg_strategy-last--evals/train-*
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--seed-3--agg_strategy-last
data_files:
- split: train
path: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--seed-3--agg_strategy-last/train-*
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--seed-3--agg_strategy-last--evals
data_files:
- split: train
path: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--seed-3--agg_strategy-last--evals/train-*
---
|
Noru1/EEReasonBench | Noru1 | 2025-05-03T19:33:28Z | 0 | 1 | [
"task_categories:text-generation",
"task_categories:question-answering",
"language:en",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"electrical-engineering",
"question-answering",
"problem-solving",
"reasoning",
"education",
"mathematics",
"physics"
] | [
"text-generation",
"question-answering"
] | 2025-05-03T18:21:28Z | null | ---
library_name: datasets
tags:
- electrical-engineering
- question-answering
- problem-solving
- reasoning
- education
- mathematics
- physics
license: apache-2.0
task_categories:
- text-generation
- question-answering
language:
- en
pretty_name: EEReasonBench
size_categories:
- n<1K
---
# EEReasonBench: A Reasoning Benchmark for Electrical Engineering
## Dataset Description
This dataset provides a collection of electrical engineering problems covering various subjects, including Circuits, Machines, Power Systems, Power Plants, etc. Problems include multiple-choice questions (conceptual and numerical) with detailed, step-by-step solutions formatted in Markdown and LaTeX.
The dataset is designed to serve both as a benchmark for evaluating model performance on electrical engineering reasoning tasks and as a resource for fine-tuning models to improve their technical problem-solving capabilities in the electrical engineering domain.
**Features:**
* Structured JSON format per problem
* Fields include `id`, `subject`, `type`, `problem`, `choices`, `given`, `solution_steps`, `answer`, and `concepts`
* Solutions emphasize clear reasoning and link steps to fundamental principles
## Dataset Structure
### Data Instance
```json
{
"id": "DC-012",
"subject": "DC Circuits",
"type": "Problem Solving",
"problem": "A 120 V dc motor draws a current of 200 A and is located 1,000 ft from the supply. If the copper transmission wire has a diameter of 0.45 inch and with specific resistance of 10.4 ohm-circular mil per foot, what must be the voltage of the supply source?",
"choices": {
"A": "129.10 V",
"B": "134.30 V",
"C": "136.90 V",
"D": "140.50 V"
},
"given": "$$V_{motor} = 120\\ V$$\n$$I_{motor} = 200\\ A$$\n$$L = 1000\\ ft$$\n$$D = 0.45\\ inch$$\n$$\\rho = 10.4\\ \\Omega\\text{-CM/ft}$$",
"solution_steps": "1. **Identify the problem type**: This is a voltage drop problem in a DC circuit with resistance in the transmission lines. We need to apply both Ohm's Law and Kirchhoff's Voltage Law.\n\n2. **Convert diameter to appropriate units**: Wire diameter is given in inches but resistivity in circular mils, so we need consistent units:\n$$D = 0.45 \\text{ inch} \\times 1000 = 450 \\text{ mils}$$\n\n3. **Calculate cross-sectional area**: For circular conductors, area in circular mils is diameter squared:\n$$A_{CM} = D^2 = 450^2 = 202{,}500 \\text{ CM}$$\n\n4. **Calculate resistance of one wire**: Using the resistivity formula which relates material property to physical dimensions:\n$$R_{wire} = \\frac{\\rho \\cdot L}{A_{CM}} = \\frac{10.4 \\times 1000}{202{,}500} \\approx 0.05136\\ \\Omega$$\n\n5. **Determine total circuit resistance**: Since current flows through both supply and return wires:\n$$R_{total} = 2 \\times R_{wire} = 2 \\times 0.05136 = 0.10272\\ \\Omega$$\n\n6. **Calculate voltage drop**: Using Ohm's Law to find voltage lost in transmission:\n$$V_{drop} = I_{motor} \\times R_{total} = 200\\text{ A} \\times 0.10272\\text{ }\\Omega = 20.544\\text{ V}$$\n\n7. **Verification**: We can verify this drop by calculating power loss in the transmission line:\n$$P_{loss} = I^2 \\times R_{total} = (200\\text{ A})^2 \\times 0.10272\\text{ }\\Omega = 4,108.8\\text{ W}$$\nThis significant power loss (about 4.1 kW) explains why high-voltage transmission is preferred for long distances.\n\n8. **Apply Kirchhoff's Voltage Law**: The supply voltage must overcome both the motor voltage and the transmission line drop:\n$$V_{supply} = V_{motor} + V_{drop} = 120\\text{ V} + 20.544\\text{ V} = 140.544\\text{ V}$$\n\n9. **Analyze results**: The supply must provide 17.1% more voltage than what's needed at the motor, highlighting the importance of conductor sizing in electrical distribution systems.\n\n10. **Final Answer**: Comparing our calculated value (140.544 V) with the given choices, the closest match is **D. 140.50 V**.",
"answer": "D. 140.50 V",
"concepts": [
"Ohm's Law",
"Kirchhoff's Voltage Law",
"Power Transmission",
"Voltage Drop",
"Resistivity"
]
}
```
### Data Fields
- **id**: (string) Unique problem ID
- **subject**: (string) EE subject area
- **type**: (string) "Problem Solving" or "Objective Type"
- **problem**: (string) Question text
- **choices**: (dict) Multiple-choice options (A-D)
- **given**: (string) Known values/conditions (LaTeX)
- **solution_steps**: (string) Numbered step-by-step solution (Markdown/LaTeX)
- **answer**: (string) Correct choice ("Letter. Answer text")
- **concepts**: (list[string]) Key concepts
### Data Splits
The dataset contains a single train split.
## Dataset Creation
The problems were sourced from electrical engineering educational materials. Solutions were refined, formatted, and enhanced for clarity and pedagogical value. This process included structuring the data, applying LaTeX, adding reasoning steps, tagging concepts, and verifying calculations. The dataset contains only technical problem data, with no personal information.
## Considerations for Use
- **Scope**: Primarily covers core undergraduate EE topics
- **Potential Errors**: While curated carefully, errors from source materials or the enhancement process may exist. Users should verify critical applications
- **Rendering**: Requires tools capable of rendering Markdown and LaTeX
## How to Use
```python
from datasets import load_dataset
# Replace with your Hugging Face repository name
dataset = load_dataset("Noru1/EEReasonBench")
# Access data
print(dataset['train'][0])
```
## Licensing Information
Licensed under the Apache License, Version 2.0.
## Citation Information
```bibtex
@misc{ee_problems_dataset_2025,
author = {Norbert John Ibera},
title = {EEReasonBench: A Reasoning Benchmark for Electrical Engineering},
year = {2025},
publisher = {Hugging Face},
journal = {Hugging Face Hub},
howpublished = {\url{https://huggingface.co/datasets/Noru1/EEReasonBench}}
}
``` |
SayantanJoker/Shrutilipi_Hindi_resampled_44100_merged_10_quality_metadata | SayantanJoker | 2025-05-03T19:28:16Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T19:28:14Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: file_name
dtype: string
- name: utterance_pitch_mean
dtype: float32
- name: utterance_pitch_std
dtype: float32
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: string
- name: phonemes
dtype: string
- name: stoi
dtype: float64
- name: si-sdr
dtype: float64
- name: pesq
dtype: float64
- name: noise
dtype: string
- name: reverberation
dtype: string
- name: speech_monotony
dtype: string
- name: sdr_noise
dtype: string
- name: pesq_speech_quality
dtype: string
splits:
- name: train
num_bytes: 24716922
num_examples: 50000
download_size: 8350343
dataset_size: 24716922
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Asap7772/aime_2025_responses_openthoughts-sft-qwen3-1.7b | Asap7772 | 2025-05-03T19:25:36Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-02T22:18:47Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: response
sequence: string
- name: correct
sequence: bool
- name: success_rate
dtype: float64
splits:
- name: train
num_bytes: 5303055
num_examples: 30
download_size: 1777461
dataset_size: 5303055
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
alucchi/Qwen2.5-1.5B-Instruct_n1000_e10_oadam0.0001_b16_1_a10_flash_compact_ttt_a100_s40 | alucchi | 2025-05-03T19:23:54Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T19:23:44Z | null | ---
dataset_info:
- config_name: default
features:
- name: task_id
dtype: string
- name: prompt
dtype: string
- name: generated_text
dtype: string
- name: generated_grid_rect
dtype: string
- name: task_solution
sequence:
sequence:
sequence: int64
- name: match
dtype: bool
- name: score
dtype: float64
splits:
- name: train
num_bytes: 509760
num_examples: 70
download_size: 85260
dataset_size: 509760
- config_name: main
features:
- name: task_id
dtype: string
- name: prompt
dtype: string
- name: generated_text
dtype: string
- name: generated_grid_rect
dtype: string
- name: task_solution
sequence:
sequence:
sequence: int64
- name: match
dtype: bool
- name: score
dtype: float64
splits:
- name: train
num_bytes: 509760
num_examples: 70
download_size: 85260
dataset_size: 509760
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: main
data_files:
- split: train
path: main/train-*
---
|
cchoi1/kodcode-complete_1000_qwen7b_sol_iter0_att10_sol5_lr5e5_3ep_ppo_9000 | cchoi1 | 2025-05-03T19:00:09Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-03T19:00:06Z | null | ---
dataset_info:
features:
- name: mutation_id
dtype: int64
- name: task_id
dtype: string
- name: mutator_prompt
dtype: string
- name: solver_prompt
dtype: string
- name: response
dtype: string
- name: mutation_explanation
dtype: string
- name: mutation_info
dtype: string
- name: mutator_score
dtype: float64
- name: solution_scores
dtype: string
- name: solutions
dtype: string
- name: solutions_explanation
dtype: string
- name: solutions_info
dtype: string
splits:
- name: train
num_bytes: 54512973.365622886
num_examples: 3945
- name: test
num_bytes: 6826212.634377112
num_examples: 494
download_size: 13476834
dataset_size: 61339186.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.6_num-company_3_dataset_2_for_gen_13 | HungVu2003 | 2025-05-03T18:32:54Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T18:32:52Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 4216357
num_examples: 12500
download_size: 1317453
dataset_size: 4216357
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_0.0_num-company_2_dataset_0_for_gen_18_v2 | HungVu2003 | 2025-05-03T18:21:14Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T18:21:13Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 812840
num_examples: 12500
download_size: 560115
dataset_size: 812840
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_0.0_num-company_2_dataset_1_for_gen_9_v2 | HungVu2003 | 2025-05-03T17:33:25Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T17:33:23Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 820906
num_examples: 12500
download_size: 567597
dataset_size: 820906
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
doublesizebed/pitch_rate | doublesizebed | 2025-05-03T17:25:53Z | 0 | 0 | [
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T07:49:40Z | null | ---
license: apache-2.0
dataset_info:
features:
- name: audio_filename
dtype: string
- name: prompt
dtype: string
- name: transcription
dtype: string
- name: gender
dtype: string
- name: audio_filepath
dtype: audio
- name: utterance_pitch_mean
dtype: float64
- name: utterance_pitch_std
dtype: float64
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speech_duration
dtype: float64
splits:
- name: train
num_bytes: 13292498803.263
num_examples: 247481
- name: test
num_bytes: 975999.0
num_examples: 20
download_size: 13264980893
dataset_size: 13293474802.263
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_0.0_num-company_2_dataset_0_for_gen_7_v2 | HungVu2003 | 2025-05-03T17:22:45Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T17:22:44Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 813175
num_examples: 12500
download_size: 561888
dataset_size: 813175
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_0.0_num-company_2_dataset_0_for_gen_6_v2 | HungVu2003 | 2025-05-03T17:17:25Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T17:17:24Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 816552
num_examples: 12500
download_size: 563578
dataset_size: 816552
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
weidezhang/so100_test | weidezhang | 2025-05-03T16:51:15Z | 10 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-04-30T21:38:25Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 2,
"total_frames": 1770,
"total_tasks": 1,
"total_videos": 4,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
HungVu2003/opt-350m_beta_0.5_alpha_0.6_num-company_3_dataset_1_for_gen_13 | HungVu2003 | 2025-05-03T16:46:47Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T16:46:46Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2374950
num_examples: 12500
download_size: 1286691
dataset_size: 2374950
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
cochi1706/pali-viet | cochi1706 | 2025-05-03T16:42:59Z | 11 | 0 | [
"task_categories:translation",
"language:vi",
"language:pi",
"region:us"
] | [
"translation"
] | 2025-04-28T12:24:57Z | null | ---
task_categories:
- translation
language:
- vi
- pi
pretty_name: Pali - Viet Statistical Machine Translation
--- |
HungVu2003/opt-350m_beta_0.5_alpha_0.6_num-company_3_dataset_0_for_gen_13 | HungVu2003 | 2025-05-03T16:41:46Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T16:41:44Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 7304720
num_examples: 12500
download_size: 1973081
dataset_size: 7304720
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
xbilek25/hall_train_36000 | xbilek25 | 2025-05-03T16:41:35Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T16:38:01Z | null | ---
dataset_info:
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
splits:
- name: train
num_bytes: 7175502581.0
num_examples: 36000
download_size: 6026324637
dataset_size: 7175502581.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ibndias/DeepSeek-R1-Distilled-1.4M | ibndias | 2025-05-03T16:38:40Z | 0 | 0 | [
"task_categories:text-generation",
"language:zh",
"language:en",
"license:cc-by-nc-4.0",
"size_categories:1M<n<10M",
"arxiv:2503.19633",
"region:us",
"code",
"math",
"reasoning",
"thinking",
"deepseek-r1",
"distill"
] | [
"text-generation"
] | 2025-05-03T16:38:40Z | null | ---
license: cc-by-nc-4.0
task_categories:
- text-generation
language:
- zh
- en
tags:
- code
- math
- reasoning
- thinking
- deepseek-r1
- distill
size_categories:
- 1M<n<10M
configs:
- config_name: am_0.5M
data_files: am_0.5M.jsonl.zst
features:
- name: messages
list:
- name: content
dtype: string
- name: info
struct:
- name: answer_content
dtype: string
- name: reference_answer
dtype: string
- name: source
dtype: string
- name: test_case
struct:
- name: test_code
dtype: string
- name: test_entry_point
dtype: string
- name: think_content
dtype: string
- name: role
dtype: string
- config_name: am_0.9M
data_files: am_0.9M.jsonl.zst
features:
- name: messages
list:
- name: content
dtype: string
- name: info
struct:
- name: answer_content
dtype: string
- name: reference_answer
dtype: string
- name: source
dtype: string
- name: test_case
struct:
- name: test_code
dtype: string
- name: test_entry_point
dtype: string
- name: think_content
dtype: string
- name: role
dtype: string
- config_name: am_0.9M_sample_1k
data_files: am_0.9M_sample_1k.jsonl
features:
- name: messages
list:
- name: content
dtype: string
- name: info
struct:
- name: answer_content
dtype: string
- name: reference_answer
dtype: string
- name: source
dtype: string
- name: test_case
struct:
- name: test_code
dtype: string
- name: test_entry_point
dtype: string
- name: think_content
dtype: string
- name: role
dtype: string
---
**For more open-source datasets, models, and methodologies, please visit our [GitHub repository](https://github.com/a-m-team/a-m-models).**
[AM-DeepSeek-R1-Distilled-1.4M](https://huggingface.co/datasets/a-m-team/AM-DeepSeek-R1-Distilled-1.4M) is a large-scale general reasoning task dataset composed of
high-quality and challenging reasoning problems. These problems are collected from numerous
open-source datasets, semantically deduplicated, and cleaned to eliminate test set contamination.
All responses in the dataset are distilled from the reasoning model (mostly DeepSeek-R1) and have undergone
rigorous verification: mathematical problems are validated through answer checking, code
problems via test cases, and other tasks through reward model evaluation. Specifically,
responses in am_0.5M.jsonl are drawn from other open-source datasets,
while those in am_0.9M.jsonl are distilled from the DeepSeek-R1-671B by the [AM team](https://huggingface.co/a-m-team).
We have validated the dataset through model training, confirming its effectiveness and demonstrating performance comparable to
the distilled models from the DeepSeek team; details can be found in our technical report
[1.4 Million Open-Source Distilled Reasoning Dataset to Empower Large Language Model Training](https://github.com/a-m-team/a-m-models/blob/main/docs/AM-DeepSeek-R1-Distilled-Dataset.pdf).
We are releasing these 1.4 million problems and responses to the research community,
aiming to foster advancements in powerful reasoning-oriented Large Language Models (LLMs).
We sincerely thank the open-source community. Without their support, we would never have come this far.
## Model Training Performance based on this dataset

## Scale & Composition
- AM-DeepSeek-R1-Distilled-1.4M: An Open-source Chinese & English dataset with reasoning traces (1.4 million entries).
- 0.5 million entries are drawn entirely from open-source datasets, and 0.9 million entries are distilled from DeepSeek-R1-671B by the AM team (labeled "am-0309" under Response sources below).
- Data processing:
- Instruction part: we used a large language model to score all instructions by difficulty and category, then filtered the data according to these labels and applied strict semantic deduplication to ensure the high quality and diversity of the instructions.
- Response part: for data with reference answers, we verified responses using both rule-based checks and a large language model. For code data with test cases, we used a sandbox for verification. Additionally, we used a reward model to score all generated results and performed rejection sampling based on the rules and the reward model.
## Unified Format
#### Dedicated fields
- **`messages`**: Array of user-assistant interaction objects, each having `role`, `content`, and `info` fields.
- **`role`**: Indicates whether it's user or assistant, defining the sender of the message.
- **`content`**: Follows `<think> reasoning process </think> <answer> solution </answer>` format in assistant message.
- **`info`**: Contains metadata.
- **`source`**: Instruction or response source.
- **`reference_answer`**: Ground-truth answers only for some math, science, and other problems.
- **`test_case`**: Test cases only for some code-related problems.
- **`think_content`**: Reasoning steps in assistant message.
- **`answer_content`**: Final solution in assistant message.
#### sample
```json
{
"messages":[{
"role": "user",
"content": "Find the integral of \\(e^{-x} + e^{-2x} dx\\).",
"info":{
"source":"natural_reasoning",
"reference_answer": "-e^{-x} - \\frac{1}{2}e^{-2x} + C",
"test_case": null
}
},
{
"role": "assistant",
"content": "<think>Okay, let's see. The user wants to find the integral of e^{-x} + e^{-2x} dx. Hmm, integrals of exponential functions. I remember that the integral of e^{kx} dx is (1/k)e^{kx} + C, right? But here the exponents are negative. Let me think.\n\nFirst, maybe I can split the integral into two parts because it's the sum of two functions. So, โซ(e^{-x} + e^{-2x}) dx = โซe^{-x} dx + โซe^{-2x} dx. That makes sense because integration is linear. Now, handle each integral separately.\n\nFor the first integral, โซe^{-x} dx. If I use the formula, normally โซe^{kx} dx = (1/k)e^{kx} + C. Here, k is -1. So substituting, that would be (1/(-1))e^{-x} + C, which simplifies to -e^{-x} + C. Wait, but since we're doing an indefinite integral here, each term will have its own constant, but since they're both combined, we'll have one constant at the end.\n\nNow the second integral: โซe^{-2x} dx. Again, using the same formula, where k is -2. So that becomes (1/(-2))e^{-2x} + C, which is (-1/2)e^{-2x} + C.\n\nPutting both results together: the first integral is -e^{-x} and the second is (-1/2)e^{-2x}, so adding them and combining the constants gives the final answer. Let me check the signs again. The first term's coefficient is -1, and the second is -1/2. So combining those, it should be -e^{-x} - (1/2)e^{-2x} + C. Yeah, that looks right.\n\nLet me verify by differentiating the result. If I take the derivative of -e^{-x}, that's (-1)(-e^{-x}) = e^{-x}. For the second term, derivative of -1/2 e^{-2x} is (-1/2)(-2)e^{-2x} = e^{-2x}. So adding those derivatives gives e^{-x} + e^{-2x}, which matches the original integrand. Great, so the integration is correct. The constants of integration from both terms just combine into a single constant C at the end.\n</think>\n<answer>\nThe integral of \\(e^{-x} + e^{-2x}\\) with respect to \\(x\\) is:\n\n\\[\n\\int \\left(e^{-x} + e^{-2x}\\right) dx = -e^{-x} - \\frac{1}{2}e^{-2x} + C\n\\]\n\nwhere \\(C\\) is the constant of integration.\n</answer>",
"info": {
"source": "am-0309",
"think_content": "Okay, let's see. The user wants to find the integral of e^{-x} + e^{-2x} dx. Hmm, integrals of exponential functions. I remember that the integral of e^{kx} dx is (1/k)e^{kx} + C, right? But here the exponents are negative. Let me think.\n\nFirst, maybe I can split the integral into two parts because it's the sum of two functions. So, โซ(e^{-x} + e^{-2x}) dx = โซe^{-x} dx + โซe^{-2x} dx. That makes sense because integration is linear. Now, handle each integral separately.\n\nFor the first integral, โซe^{-x} dx. If I use the formula, normally โซe^{kx} dx = (1/k)e^{kx} + C. Here, k is -1. So substituting, that would be (1/(-1))e^{-x} + C, which simplifies to -e^{-x} + C. Wait, but since we're doing an indefinite integral here, each term will have its own constant, but since they're both combined, we'll have one constant at the end.\n\nNow the second integral: โซe^{-2x} dx. Again, using the same formula, where k is -2. So that becomes (1/(-2))e^{-2x} + C, which is (-1/2)e^{-2x} + C.\n\nPutting both results together: the first integral is -e^{-x} and the second is (-1/2)e^{-2x}, so adding them and combining the constants gives the final answer. Let me check the signs again. The first term's coefficient is -1, and the second is -1/2. So combining those, it should be -e^{-x} - (1/2)e^{-2x} + C. Yeah, that looks right.\n\nLet me verify by differentiating the result. If I take the derivative of -e^{-x}, that's (-1)(-e^{-x}) = e^{-x}. For the second term, derivative of -1/2 e^{-2x} is (-1/2)(-2)e^{-2x} = e^{-2x}. So adding those derivatives gives e^{-x} + e^{-2x}, which matches the original integrand. Great, so the integration is correct. The constants of integration from both terms just combine into a single constant C at the end.\n",
"answer_content": "\nThe integral of \\(e^{-x} + e^{-2x}\\) with respect to \\(x\\) is:\n\n\\[\n\\int \\left(e^{-x} + e^{-2x}\\right) dx = -e^{-x} - \\frac{1}{2}e^{-2x} + C\n\\]\n\nwhere \\(C\\) is the constant of integration.\n"
}
}]
}
```
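Since the assistant `content` embeds the reasoning and the final solution between explicit tags, the two parts can be separated with a small parser. Below is a minimal sketch, assuming the `<think>`/`<answer>` markers always appear exactly as in the sample above; note that `info.think_content` and `info.answer_content` already ship the pre-split versions, so this mainly serves as a cross-check:

```python
import re

def split_reasoning(content: str):
    """Split an assistant message into (think, answer) strings.

    Assumes the `<think> ... </think> <answer> ... </answer>` layout
    shown in the sample; returns None for a part whose tags are absent.
    """
    think = re.search(r"<think>(.*?)</think>", content, re.DOTALL)
    answer = re.search(r"<answer>(.*?)</answer>", content, re.DOTALL)
    return (
        think.group(1).strip() if think else None,
        answer.group(1).strip() if answer else None,
    )
```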
## Usage
The dataset is split into two compressed files based on response sources:
- **`am_0.9M.jsonl.zst`**: Responses from the `am-0309` source.
- **`am_0.5M.jsonl.zst`**: Responses from other sources.
- Additionally, a subset of 1,000 random samples (`am_0.9M_sample_1k.jsonl`) from `am-0309` is provided for quick experimentation.
Files are compressed using [zstd](https://github.com/facebook/zstd) for faster download and reduced storage requirements.
**Decompression Instructions**:
```bash
apt install zstd
zstd -d am_0.9M.jsonl.zst -o am_0.9M.jsonl
```
**How to use with `load_dataset`**
```python
from datasets import load_dataset, Features, Value
features = Features({
"messages": [
{
"role": Value("string"),
"content": Value("string"),
"info": {
"source": Value("string"),
"reference_answer": Value("string"),
"test_case": Value("string"),
"think_content": Value("string"),
"answer_content": Value("string")
}
}
]
})
# Take downloading "am_0.9M_sample_1k.jsonl" as an example.
data = load_dataset('a-m-team/AM-DeepSeek-R1-Distilled-1.4M', 'am_0.9M_sample_1k', features=features)
```
## Sources
- Open-source data: Instructions and reasoning traces from existing datasets.
- AM distilled data: High-quality instructions from the Open-source dataset, augmented with reasoning traces and solutions generated by DeepSeek-R1.
#### Instruction sources
| Source | Nums |
| --- | --- |
| natural_reasoning | 319085 |
| InfinityInstruct | 306675 |
| KodCode | 210838 |
| Dolphin - R1 | 63921 |
| openR1Math_extended | 63290 |
| NuminaMath_1.5 | 62446 |
| openR1Math_default | 62239 |
| codeio | 55176 |
| GeneralThought - Feb25 | 50600 |
| openThoughts | 34620 |
| OpenCoder | 22249 |
| data_ablation_full59K | 14155 |
| MetaMathQA | 14083 |
| ... | ... |
#### Response sources
| Source | Nums |
| --- | --- |
| am-0309 | 900000 |
| KodCode | 210838 |
| openR1Math_extended | 63290 |
| Dolphin - R1 | 62750 |
| openR1Math_default | 60839 |
| GeneralThought - Feb25 | 50600 |
| openThoughts | 31431 |
| data_ablation_full59K | 14155 |
| Bespoke17k | 5747 |
| ... | ... |
## Limitation and Usage Limits
We require developers only use the open-sourced code, data, model and any other artifacts
generated via this project for research purposes. Commercial use and other potential harmful use cases are not allowed.
Since this dataset was generated by LLM and was not strictly verified,
it still has shortcomings regarding factuality and other aspects. When using this dataset, careful inspection is needed.
This dataset does not represent anyone's ground, interest or thought, and is not related to
any kind of claim of any groups. The developers of this project do not assume any responsibility to potential harm inflicted by using this dataset and project.
Due to the nested relationships among the sources of some data, there may be issues regarding the inaccuracy of the sources.
## Citation
If you use this data, please cite with the following BibTex entry:
```
@misc{zhao202514millionopensourcedistilled,
title={1.4 Million Open-Source Distilled Reasoning Dataset to Empower Large Language Model Training},
author={Han Zhao and Haotian Wang and Yiping Peng and Sitong Zhao and Xiaoyu Tian and Shuaiting Chen and Yunjie Ji and Xiangang Li},
year={2025},
eprint={2503.19633},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2503.19633},
}
```
|
FrancophonIA/Lexique_ZLEA | FrancophonIA | 2025-05-03T16:34:27Z | 0 | 0 | [
"task_categories:translation",
"language:fra",
"language:eng",
"region:us"
] | [
"translation"
] | 2025-05-03T16:33:27Z | null | ---
language:
- fra
- eng
viewer: false
task_categories:
- translation
---
> [!NOTE]
> Dataset origin: https://publications.gc.ca/site/eng/9.800970/publication.html |
HaruthaiAi/VanGogh_Asylum_Tree_Comparative_Brushwork_AI_Analysis | HaruthaiAi | 2025-05-03T15:35:55Z | 0 | 0 | [
"license:creativeml-openrail-m",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-05-02T16:31:07Z | null | ---
license: creativeml-openrail-m
---
Title: Van Gogh - The Garden of the Asylum in Saint-Rémy (1889)
Filename: garden_of_asylum_vangogh.jpeg
Artist: Vincent van Gogh
Date: May 1889
Medium: Oil on canvas
Dimensions: 72.0 cm × 91.5 cm
Location: Pushkin State Museum of Fine Arts (Moscow, Russia)
Style: Post-Impressionism
Keywords: Van Gogh, asylum garden, Saint-Rémy, expressionist trees, swirling sky, vibrant brushwork, green benches, yellow sky
Description:
This vivid scene depicts the overgrown garden of the asylum at Saint-Rémy-de-Provence, where Van Gogh was voluntarily interned in 1889. With twisted tree trunks, animated greenery, and a dynamic sky rendered in swirling strokes, the painting expresses Van Gogh's intense emotional engagement with nature and his inner world. The path and distant figures evoke both isolation and the artist's quiet observation of life within the asylum's walls.
AI Comparative Brushwork Study between Van Gogh and The Tree Oil Painting
---
Overview:
This dataset presents a comprehensive AI-driven comparative analysis between Vincent van Gogh's "A Corner of the Asylum" and The Tree Oil Painting, an unattributed oil painting suspected to share deep stylistic characteristics with Van Gogh's late-period brushwork. The study leverages edge detection, gesture heatmaps, and torque-based motion analysis to evaluate structural, gestural, and compositional similarities between the two works.
---
Analytical Layers & Tools
1. Snake Head Overlay (Red Stroke Clustering)
Highlights rhythmic tension zones and expressive clusters.
Similar cluster density and stroke rhythm observed between both works.
2. Gesture Heatmap (Directional Brush Flow)
Van Gogh's curved, reactive gestures in A Corner of the Asylum are mirrored in the Tree Oil Painting's central branch patterns.
The Tree Oil Painting reveals a calculated yet fluid rhythm akin to Van Gogh's expressive impulses.
3. Sobel Edge Map (Edge Sensitivity)
The Tree Oil Painting produces strong contour lines similar in density and edge aggression to Van Gogh's brush technique.
Vertical and diagonal tree gestures align statistically between both images.
4. Wrist Movement Direction Graphs
Raw and normalized brushstroke directionality analyzed.
The Tree Oil Painting shows a dominant stroke pattern at ~105–135° and ~270°, echoing Van Gogh's own documented hand angles.
Overlay graphs show convergent peaks with minimal statistical deviation.
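For illustration only, here is a rough sketch of the kind of Sobel edge-map comparison described in technique 3. The filenames are placeholders, and, per the validity note at the end of this card, a pixel-level histogram like this is only an input to the neural matching, not a similarity verdict by itself:

```python
import cv2
import numpy as np

def sobel_magnitude(path: str) -> np.ndarray:
    """Sobel gradient magnitude of a grayscale image, normalized to [0, 1]."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.sqrt(gx**2 + gy**2)
    return mag / (mag.max() + 1e-9)

# Placeholder filenames -- substitute the actual paintings.
a = sobel_magnitude("asylum_corner.jpg")
b = sobel_magnitude("tree_oil_painting.jpg")

# Crude proxy: overlap of edge-strength histograms on a shared range.
ha, _ = np.histogram(a, bins=64, range=(0.0, 1.0), density=True)
hb, _ = np.histogram(b, bins=64, range=(0.0, 1.0), density=True)
overlap = np.minimum(ha, hb).sum() / np.maximum(ha, hb).sum()
print(f"edge-strength histogram overlap: {overlap:.3f}")
```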
---
Scientific Summary:
Gesture Similarity Index: 99.24%
Wrist Directional Overlap: High correlation in torque-based brushstroke arc
AI Pattern Clustering (Snake Head): 90–95% localized clustering overlap
Conclusion:
The dataset supports the hypothesis that The Tree Oil Painting may derive from an artist operating within the same technical and emotional framework as Van Gogh during his asylum period. The forensic similarities exceed random correlation, warranting further pigment and canvas verification.
---
Keywords: van gogh analysis, tree oil painting, post-impressionism, brushstroke AI, gesture heatmap, snake head overlay, torque rhythm, sobel edge, art forensic AI
## Full PDF Report
[Download the full comparative analysis report (PDF)](./Van_Gogh_A_Corner_of_the_Asylum_Analysis.pdf)
This 17-page report includes:
- AI gesture heatmap analysis
- Snake-head pattern detection
- Torque and wrist motion histograms
- Texture relief comparison
- Final similarity scores (92–95%)
## Note on Olive Tree Analysis Image
> The file `sobel_olive_vs_tree_oil.jpeg` compares brushstroke structure between *Van Goghโs Olive Trees* and *The Tree Oil Painting*.
> Though from a separate study, its torque and edge data were relevant enough to be retained for continuity.
---
> **Important Note on Analysis Validity**
>
> All visual outputs from the "18 Supreme Techniques" were generated using an AI Neural Matching Model specifically designed to interpret brushstroke behavior, artist rhythm, and torque dynamics at a structural level.
>
> **Do not re-analyze these images using SSIM (Structural Similarity Index)** or similar pixel-level comparison methods. SSIM only captures surface-level luminance and texture but cannot interpret gestural depth, artistic intention, or brush energy.
>
> Such basic techniques will yield low similarity scores, which are **not representative** of the true match in brush technique.
>
> Accurate validation requires neural brush-matching models, including:
> - **Fourier-Gabor Brush Matching**
> - **Directional Stroke Vector AI**
> - **Torque Flow Neural Layers**
>
> This note must be included in any research, comparison, or redistribution of the 18-Technique brushstroke analysis results. |
yunusserhat/TurkishFoods-15 | yunusserhat | 2025-05-03T15:19:35Z | 0 | 0 | [
"task_categories:image-classification",
"language:tr",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-classification"
] | 2025-05-03T15:04:54Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': biber_dolmasi
'1': borek
'2': cig_kofte
'3': enginar
'4': hamsi
'5': hunkar_begendi
'6': icli_kofte
'7': ispanak
'8': kebap
'9': kisir
'10': kuru_fasulye
'11': lokum
'12': manti
'13': simit
'14': yaprak_sarma
splits:
- name: train
num_bytes: 60093194.203592494
num_examples: 5928
- name: eval
num_bytes: 7529941.352449062
num_examples: 741
- name: test
num_bytes: 7577916.82795844
num_examples: 742
download_size: 75019089
dataset_size: 75201052.38399999
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: eval
path: data/eval-*
- split: test
path: data/test-*
license: apache-2.0
task_categories:
- image-classification
language:
- tr
size_categories:
- 1K<n<10K
---
> ⚠️ **Important:** English version is available below.
# TürkSofrası-15 (TurkishFoods-15) Veri Seti
**TürkSofrası-15**, 15 farklı geleneksel Türk yemeğine ait toplam **7.411** görsel içeren ve yemek tanıma/sınıflandırma amaçlı hazırlanmış bir görüntü veri setidir. Görseller `.jpg` formatında olup her sınıf için ayrı klasörlerde yer almaktadır.
Veri seti, Hugging Face `datasets` kütüphanesi biçimindedir ve `image` (görsel) ile `label` (etiket) olmak üzere iki özelliğe sahiptir. Etiketler `class_label` biçiminde tanımlanmış olup aşağıdaki gibi numaralanmıştır:
| Etiket | Sınıf Adı |
|--------|------------------|
| 0 | Biber Dolması |
| 1 | Börek |
| 2 | Çiğ Köfte |
| 3 | Enginar |
| 4 | Hamsi |
| 5 | Hünkâr Beğendi |
| 6 | İçli Köfte |
| 7 | Ispanak |
| 8 | Kebap |
| 9 | Kısır |
| 10 | Kuru Fasulye |
| 11 | Lokum |
| 12 | Mantı |
| 13 | Simit |
| 14 | Yaprak Sarma |
## Veri Bölünmeleri
| Bölüm | Örnek Sayısı | Boyut (Bayt) |
|-------|--------------|--------------|
| Eğitim (`train`) | 5,928 | ~60 MB |
| Doğrulama (`eval`) | 741 | ~7.5 MB |
| Test (`test`) | 742 | ~7.6 MB |
| **Toplam** | **7,411** | **~75 MB** |
## Klasör Yapısı
```
TurkishFoods-15/
├── biber_dolmasi/
├── borek/
├── cig_kofte/
├── enginar/
├── hamsi/
├── hunkar_begendi/
├── icli_kofte/
├── ispanak/
├── kebap/
├── kisir/
├── kuru_fasulye/
├── lokum/
├── manti/
├── simit/
└── yaprak_sarma/
```
Her klasör ilgili yemeğe ait `.jpg` uzantılı görselleri içerir.
## Atıf
> **Önemli:** Bu veri seti bana ait değildir. Lütfen araştırmalarınızda bu veri setini kullanırsanız orijinal yazarları ve aşağıdaki yayını kaynak gösteriniz.
```bibtex
@inproceedings{gungor2017turkish,
title={Turkish cuisine: A benchmark dataset with Turkish meals for food recognition},
author={G{\"u}ng{\"o}r, Cem and Baltac{\i}, Fatih and Erdem, Aykut and Erdem, Erkut},
booktitle={2017 25th Signal Processing and Communications Applications Conference (SIU)},
pages={1--4},
year={2017},
organization={IEEE}
}
```
# TurkishFoods-15 (TürkSofrası-15) Dataset
**TurkishFoods-15** is a benchmark dataset consisting of **7,411** images of 15 traditional Turkish dishes, intended for food recognition and classification tasks. All images are in `.jpg` format and organized into folders per class.
The dataset follows the Hugging Face `datasets` format with two features: `image` and `label`. The `label` is defined as a `class_label` with the following mapping:
| Label | Class Name |
|-------|------------------|
| 0 | Biber Dolması |
| 1 | Börek |
| 2 | Çiğ Köfte |
| 3 | Enginar |
| 4 | Hamsi |
| 5 | Hünkâr Beğendi |
| 6 | İçli Köfte |
| 7 | Ispanak |
| 8 | Kebap |
| 9 | Kısır |
| 10 | Kuru Fasulye |
| 11 | Lokum |
| 12 | Mantı |
| 13 | Simit |
| 14 | Yaprak Sarma |
## Dataset Splits
| Split | Number of Examples | Size (Bytes) |
|-------|---------------------|--------------|
| Train | 5,928 | ~60 MB |
| Eval | 741 | ~7.5 MB |
| Test | 742 | ~7.6 MB |
| **Total** | **7,411** | **~75 MB** |
## Directory Structure
```
TurkishFoods-15/
├── biber_dolmasi/
├── borek/
├── cig_kofte/
├── enginar/
├── hamsi/
├── hunkar_begendi/
├── icli_kofte/
├── ispanak/
├── kebap/
├── kisir/
├── kuru_fasulye/
├── lokum/
├── manti/
├── simit/
└── yaprak_sarma/
```
Each folder contains `.jpg` images corresponding to the dish name.
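As a minimal usage sketch (assuming the Hugging Face `datasets` layout and splits described above):

```python
from datasets import load_dataset

ds = load_dataset("yunusserhat/TurkishFoods-15")

example = ds["train"][0]
# Map the integer class index back to its name via the ClassLabel feature.
label_name = ds["train"].features["label"].int2str(example["label"])
print(label_name, example["image"].size)  # e.g. "kebap" and the PIL image size
```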
## Citation
If you use this dataset in your research, please cite the following publication:
```bibtex
@inproceedings{gungor2017turkish,
title={Turkish cuisine: A benchmark dataset with Turkish meals for food recognition},
author={G{\"u}ng{\"o}r, Cem and Baltac{\i}, Fatih and Erdem, Aykut and Erdem, Erkut},
booktitle={2017 25th Signal Processing and Communications Applications Conference (SIU)},
pages={1--4},
year={2017},
organization={IEEE}
}
``` |
FrancophonIA/Lexique_des_sports_paralympiques_hiver | FrancophonIA | 2025-05-03T15:19:29Z | 0 | 0 | [
"task_categories:translation",
"language:fra",
"language:eng",
"region:us"
] | [
"translation"
] | 2025-05-03T15:18:39Z | null | ---
language:
- fra
- eng
viewer: false
task_categories:
- translation
---
> [!NOTE]
> Dataset origin: https://publications.gc.ca/site/eng/9.583879/publication.html |
HPC-Boys/gemini-2.0-flash-results | HPC-Boys | 2025-05-03T15:18:08Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T15:18:01Z | null | ---
dataset_info:
features:
- name: unique_id
dtype: string
- name: problem
dtype: string
- name: is_mcq
dtype: bool
- name: choices
sequence: string
- name: choice_index_correct
dtype: int64
- name: explanation_correct
dtype: string
- name: answer_correct
dtype: string
- name: category
dtype: string
- name: response_1
dtype: string
- name: extracted_answer_1
dtype: string
- name: is_correct_1
dtype: bool
- name: response_2
dtype: string
- name: extracted_answer_2
dtype: string
- name: is_correct_2
dtype: bool
- name: response_3
dtype: string
- name: extracted_answer_3
dtype: string
- name: is_correct_3
dtype: bool
- name: response_4
dtype: string
- name: extracted_answer_4
dtype: string
- name: is_correct_4
dtype: bool
- name: response_5
dtype: string
- name: extracted_answer_5
dtype: string
- name: is_correct_5
dtype: bool
- name: total_responses
dtype: int64
- name: correct_responses
dtype: int64
- name: accuracy
dtype: float64
splits:
- name: train
num_bytes: 62171679
num_examples: 10026
- name: validation
num_bytes: 7876142
num_examples: 1253
- name: test
num_bytes: 7843240
num_examples: 1253
download_size: 37455110
dataset_size: 77891061
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
yalhessi/lemexp-raw-v2 | yalhessi | 2025-05-03T15:11:23Z | 42 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-28T02:20:54Z | null | ---
dataset_info:
- config_name: afp-thms
features:
- name: theory_file
dtype: string
- name: lemma_name
dtype: string
- name: lemma_command
dtype: string
- name: lemma_object
dtype: string
- name: template
dtype: string
- name: symbols
sequence: string
- name: types
sequence: string
- name: defs
sequence: string
splits:
- name: train
num_bytes: 313442562
num_examples: 206305
download_size: 32873700
dataset_size: 313442562
- config_name: afp-thms-2025-04-30
features:
- name: theory_file
dtype: string
- name: lemma_name
dtype: string
- name: lemma_command
dtype: string
- name: lemma_object
dtype: string
- name: template
dtype: string
- name: symbols
sequence: string
- name: types
sequence: string
- name: defs
sequence: string
splits:
- name: train
num_bytes: 313442562
num_examples: 206305
download_size: 32873700
dataset_size: 313442562
- config_name: hol-thms
features:
- name: theory_file
dtype: string
- name: lemma_name
dtype: string
- name: lemma_command
dtype: string
- name: lemma_object
dtype: string
- name: template
dtype: string
- name: symbols
sequence: string
- name: types
sequence: string
- name: defs
sequence: string
splits:
- name: train
num_bytes: 86405819
num_examples: 62863
download_size: 8897621
dataset_size: 86405819
- config_name: hol-thms-2025-04-28
features:
- name: theory_file
dtype: string
- name: lemma_name
dtype: string
- name: lemma_command
dtype: string
- name: lemma_object
dtype: string
- name: template
dtype: string
- name: symbols
sequence: string
- name: types
sequence: string
- name: defs
sequence: string
splits:
- name: train
num_bytes: 82980393
num_examples: 62863
download_size: 7912522
dataset_size: 82980393
- config_name: hol-thms-2025-05-01
features:
- name: theory_file
dtype: string
- name: lemma_name
dtype: string
- name: lemma_command
dtype: string
- name: lemma_object
dtype: string
- name: template
dtype: string
- name: symbols
sequence: string
- name: types
sequence: string
- name: defs
sequence: string
splits:
- name: train
num_bytes: 98689352
num_examples: 62863
download_size: 9254383
dataset_size: 98689352
- config_name: hol-thms-old-defs-2025-05-03
features:
- name: theory_file
dtype: string
- name: lemma_name
dtype: string
- name: lemma_command
dtype: string
- name: lemma_object
dtype: string
- name: template
dtype: string
- name: symbols
sequence: string
- name: types
sequence: string
- name: defs
sequence: string
splits:
- name: train
num_bytes: 86405819
num_examples: 62863
download_size: 8897621
dataset_size: 86405819
configs:
- config_name: afp-thms
data_files:
- split: train
path: afp-thms/train-*
- config_name: afp-thms-2025-04-30
data_files:
- split: train
path: afp-thms-2025-04-30/train-*
- config_name: hol-thms
data_files:
- split: train
path: hol-thms/train-*
- config_name: hol-thms-2025-04-28
data_files:
- split: train
path: hol-thms-2025-04-28/train-*
- config_name: hol-thms-2025-05-01
data_files:
- split: train
path: hol-thms-2025-05-01/train-*
- config_name: hol-thms-old-defs-2025-05-03
data_files:
- split: train
path: hol-thms-old-defs-2025-05-03/train-*
---
|
FrancophonIA/Vous-pouvez-le-dire-en-francais-Sante | FrancophonIA | 2025-05-03T15:07:45Z | 6 | 0 | [
"task_categories:translation",
"language:fra",
"language:eng",
"region:us"
] | [
"translation"
] | 2025-04-28T20:19:34Z | null | ---
language:
- fra
- eng
viewer: false
task_categories:
- translation
---
> [!NOTE]
> Dataset origin: https://www.culture.gouv.fr/fr/thematiques/langue-francaise-et-langues-de-france/agir-pour-les-langues/moderniser-et-enrichir-la-langue-francaise/nos-publications/Vous-pouvez-le-dire-en-francais-Sante |
FrancophonIA/Vocabulaire-de-l-education-2013 | FrancophonIA | 2025-05-03T15:00:38Z | 2 | 0 | [
"task_categories:translation",
"language:fra",
"language:eng",
"region:us"
] | [
"translation"
] | 2025-04-29T20:41:57Z | null | ---
language:
- fra
- eng
viewer: false
task_categories:
- translation
---
> [!NOTE]
> Dataset origin: https://www.culture.gouv.fr/fr/thematiques/langue-francaise-et-langues-de-france/agir-pour-les-langues/moderniser-et-enrichir-la-langue-francaise/nos-publications/Vocabulaire-de-l-education-2013 |
FrancophonIA/Vous-pouvez-le-dire-en-francais-ballon-oval-pret-pour-la-melee | FrancophonIA | 2025-05-03T14:53:56Z | 3 | 0 | [
"task_categories:translation",
"language:fra",
"language:eng",
"region:us"
] | [
"translation"
] | 2025-04-29T20:46:09Z | null | ---
language:
- fra
- eng
viewer: false
task_categories:
- translation
---
> [!NOTE]
> Dataset origin: https://www.culture.gouv.fr/fr/thematiques/langue-francaise-et-langues-de-france/agir-pour-les-langues/moderniser-et-enrichir-la-langue-francaise/nos-publications/Vous-pouvez-le-dire-en-francais-ballon-oval-pret-pour-la-melee |
FrancophonIA/Vocabulaire-de-l-economie-et-des-finances-2012 | FrancophonIA | 2025-05-03T14:53:15Z | 2 | 0 | [
"task_categories:translation",
"language:fra",
"language:eng",
"region:us"
] | [
"translation"
] | 2025-04-29T20:44:39Z | null | ---
language:
- fra
- eng
viewer: false
task_categories:
- translation
---
> [!NOTE]
> Dataset origin: https://www.culture.gouv.fr/fr/thematiques/langue-francaise-et-langues-de-france/agir-pour-les-langues/moderniser-et-enrichir-la-langue-francaise/nos-publications/Vocabulaire-de-l-economie-et-des-finances-2012 |
FrancophonIA/Vous-pouvez-le-dire-en-francais-A-la-conquete-du-ciel | FrancophonIA | 2025-05-03T14:46:20Z | 3 | 0 | [
"task_categories:translation",
"language:fra",
"language:eng",
"region:us"
] | [
"translation"
] | 2025-04-29T20:48:07Z | null | ---
language:
- fra
- eng
viewer: false
task_categories:
- translation
---
> [!NOTE]
> Dataset origin: https://www.culture.gouv.fr/fr/thematiques/langue-francaise-et-langues-de-france/agir-pour-les-langues/moderniser-et-enrichir-la-langue-francaise/nos-publications/Vous-pouvez-le-dire-en-francais-A-la-conquete-du-ciel |
jaeyong2/Math-Qwen3-06B-Ko | jaeyong2 | 2025-05-03T14:32:35Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-02T16:11:26Z | null | ---
dataset_info:
features:
- name: content
dtype: string
- name: response
sequence: string
splits:
- name: train
num_bytes: 384553697
num_examples: 2000
download_size: 124040666
dataset_size: 384553697
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
medss19/home_remedies_dataset | medss19 | 2025-05-03T14:25:04Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T14:25:02Z | null | ---
dataset_info:
features:
- name: Name of Item
dtype: string
- name: Health Issue
dtype: string
- name: Home Remedy
dtype: string
- name: Yogasan
dtype: string
splits:
- name: train
num_bytes: 45563
num_examples: 115
download_size: 27133
dataset_size: 45563
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
NilanE/ComplexMultistepImageEditing | NilanE | 2025-05-03T13:50:57Z | 102 | 4 | [
"task_categories:image-to-image",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"reasoning-datasets-competition"
] | [
"image-to-image"
] | 2025-04-28T23:37:27Z | null | ---
license: apache-2.0
task_categories:
- image-to-image
tags:
- reasoning-datasets-competition
---
# Overview
This dataset contains complex image editing reasoning chains, designed to give unified multimodal LLMs such as [Show-o](https://github.com/showlab/Show-o) and [Janus](https://github.com/deepseek-ai/Janus) the ability to reason using text and image tokens equally.
#### Dataset structure:
```
{
'source': A randomly sampled image from imagenet-1k,
'prompt': An edit prompt to be applied to the source image,
'edit_0..7': Intermediate generated images (see pipeline below for details),
'chat_log': A log of the conversation between a critique model and an image generation model (see pipeline),
'reasoning': The chat log rewritten as a self-critiquing multimodal reasoning chain
}
```
#### Motivation and uses:
This dataset aims to address a near-complete lack of interleaved unified multimodal datasets.
Most open unified multimodal LLMs are trained on image understanding and image generation tasks separately. During inference, they are unable to understand and generate interleaved content or perform any-to-any transformations, removing some of the key selling points of unified architectures.
Currently, only proprietary models such as ChatGPT's native image generation model and Gemini 2.0 flash image-generation are proficient in simultaneous understanding and generation of images and text.
And with the recent rise of reasoning models, this leads to a new question:
Is it possible to have a model reason effectively using images as well as text, and if so, what sort of benefits would such a method have?
The areas this dataset aims to address are as follows:
1. The data discrepancy of open interleaved text-image datasets.
2. Entering a new field of interleaved multimodal reasoning datasets.
3. In general, add to the research field of unified multimodal models.
This dataset is part of a larger project to create a fully-open unified image and video LLM. Current work on an extreme-compression video tokenizer can be found [here](https://github.com/NilanEkanayake/TiTok-Video).
# Creation

#### The pipeline:
1. Send a source image to gemini 2.0 flash and have it write a complex image transformation/edit to apply.
2. Send that source image and the resulting edit request to 2.0 flash image-gen, and have it generate an image that attempts to fulfil the request.
3. Send the generated image back to 2.0 flash, along with all prior inputs and responses (source image, prompt, edit request, ...) and new instructions to critique how well the generated image obeys the request given.
4. Flash image-gen takes the critique as well as the prior inputs and generations (source image, edit request, generated image, critique, ...), and tries to fulfil the edit request again, with the benefit of context/feedback.
5. Repeat steps 3 and 4, adding new critiques and generations to the models' contexts until either the conversation gets too long (fail), or 2.0 flash is satisfied that the latest generation meets its requirements (success).
6. Send the successful conversations to 2.5 flash to be turned into reasoning traces.
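In outline, steps 2-5 form a generate-critique loop. The sketch below is a simplification of that loop; `generate_edit` and `critique` are hypothetical stand-ins for the actual Gemini API calls (see `create_dataset.py` for the real implementation):

```python
from typing import Any, List, Tuple

def generate_edit(history: List[Any]) -> Any:
    """Hypothetical wrapper around the image-generation model (steps 2 and 4)."""
    raise NotImplementedError("wire up the image-generation API here")

def critique(history: List[Any], candidate: Any) -> Tuple[str, str]:
    """Hypothetical wrapper around the critique model (step 3)."""
    raise NotImplementedError("wire up the critique API here")

def edit_with_feedback(source_image: Any, edit_request: str, max_rounds: int = 8):
    """Alternate generation and critique until the critic accepts,
    or the conversation grows too long, mirroring steps 2-5 above."""
    history: List[Any] = [source_image, edit_request]
    for _ in range(max_rounds):
        candidate = generate_edit(history)
        verdict, feedback = critique(history, candidate)
        history += [candidate, feedback]
        if verdict == "accept":
            return history, True   # success: ready for reasoning formatting (step 6)
    return history, False          # fail: conversation got too long
```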
#### Making your own dataset:
##### Setup:
```bash
git clone https://huggingface.co/datasets/NilanE/ComplexMultistepImageEditing
pip install -U jsonlines datasets google-genai
```
Then add your Gemini API key to [create_dataset.py](create_dataset.py), and tweak the other variables as desired.
##### Operation:
Run the script with:
```bash
python3 create_dataset.py
```
The dataset will be saved as a JSONL file along with logged images from the conversations.
To convert the dataset to a format supported by the HF dataset viewer, use [to_hf.py](to_hf.py).
#### Notes:
- The rationale behind using 2.5 flash for reasoning formatting is that it is both aware of reasoning and can have its own reasoning disabled (no thinking budget), which avoids inflated processing times and overcomplication of the generated reasoning chains.
- The source images were sampled from imagenet-1k (refer to imagenet's license for usage restrictions). Everything else is apache-2.0.
- The code for dataset creation was not thoroughly tested after being rewritten for publication. If you run into any issues, open a discussion describing the problem.
# Limitations
- The dataset in its current iteration is too small for most use-cases. However, those needing a larger version can create their own using the included code (see above).
- The dataset has a limited scope, only covering image editing.
- Only a single interleaved image generation model (2.0 flash image-gen) was used, due to no other ones being freely available via API. This limits variety in the edited images.
- The generated image edits are not inherently progressive, and thus not fully aligned with step-by-step reasoning. However, progressive edits do appear in the dataset as the critique model tries to guide the generation model (see pipeline).
- Related to the point above, the reasoning chains in the dataset are derived from conversations between the critique and generation models. As such, they may not properly represent logical reasoning.
- The edit requests from 2.0 flash tend to be limited in thematic variety and originality.
# Citation
```
@misc{complex-multistep-image-editing-dataset,
title={Complex Multistep Image-Editing Dataset},
author={Nilan Ekanayake},
year={2025},
url={https://huggingface.co/datasets/NilanE/ComplexMultistepImageEditing}
}
``` |
KhatirS/CosmeticDataset | KhatirS | 2025-05-03T13:50:03Z | 0 | 0 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T13:33:17Z | null | ---
license: apache-2.0
---
|
jdchang/qsharp-full-mixture-1.5b-filtered-with-labels | jdchang | 2025-05-03T13:42:18Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T13:23:32Z | null | ---
dataset_info:
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: reward
sequence: bool
- name: roll_in_ids
sequence:
sequence: int32
- name: roll_outs_ids
sequence:
sequence: int32
- name: processed_answer
sequence: string
- name: labels
sequence: int32
splits:
- name: train
num_bytes: 86665084946
num_examples: 44509
download_size: 22663489948
dataset_size: 86665084946
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.6_num-company_3_dataset_0_for_gen_12 | HungVu2003 | 2025-05-03T13:41:11Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T13:41:10Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 5179935
num_examples: 12500
download_size: 1805132
dataset_size: 5179935
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
NextGenC/synapse-set-50k | NextGenC | 2025-05-03T13:36:06Z | 9 | 1 | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:en",
"language:tr",
"license:mit",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"bci",
"eeg",
"brain-computer-interface",
"neuroscience",
"signal-processing",
"nlp",
"instruction-tuning",
"synthetic-data"
] | [
"text-generation",
"text2text-generation"
] | 2025-05-01T10:19:24Z | null | ---
license: mit
task_categories:
- text-generation
- text2text-generation
language:
- en
- tr
tags:
- bci
- eeg
- brain-computer-interface
- neuroscience
- signal-processing
- nlp
- instruction-tuning
- synthetic-data
pretty_name: 'SynapseSet-50K: EEG Interpretation Dataset'
size_categories:
- 10K<n<100K
datasets:
- NextGenC/synapse-set-50k
---
## 🧠 SynapseSet-50K
**SynapseSet-50K** is a synthetic instruction-tuning dataset crafted to simulate EEG-based neurological state interpretation for natural language models. Each sample reflects brain signal metrics with contextual metadata, and an expert-style medical NLP explanation.
This dataset was generated by **7enn Labs** and aims to bridge neuroscience signal interpretation with instruction-tuned NLP systems.
> 💬 100% synthetic, non-clinical data. Intended for academic and research purposes only. The data was generated with proprietary algorithms belonging to our confidential party.
> 💬 This dataset has considerable potential, so we welcome your feedback. At 7enn Labs, we are considering developing models for data augmentation and for validating the dataset's accuracy.
> 💬 "This endeavor represents more than a mere product; it constitutes foundational infrastructure, a testament to our vision. At 7enn Labs, we regard this dataset as a significant milestone on a much broader strategic journey. Its importance lies not only in the algorithmic generation of the synthetic data itself but fundamentally in the power and continuous evolution of the proprietary data engine developed by 7enn Labs. Whether immediately recognized or widely adopted, systems of this nature are poised to shape the future. The tools we forge today are the very foundations upon which tomorrow's breakthroughs will be built."
---
## ⚠️ Disclaimer & Legal Notice (7enn Labs)
**100% synthetic, non-clinical data. Intended strictly for academic and research use.**
The datasets provided (SynapseSet series) are fully artificial and generated through proprietary simulation algorithms owned and controlled by a confidential party affiliated with 7enn Labs. These datasets **do not represent real patient data** and **must not be used** for clinical decision-making, diagnostic purposes, or any application involving human subjects in real-world scenarios.
> **7enn Labs accepts no liability or responsibility** for any outcome, misuse, or legal consequence arising from the use, distribution, or interpretation of this data or any derivative works. Full responsibility lies with the end user.
By accessing or utilizing any portion of these datasets, you **agree to waive any claim against 7enn Labs** and acknowledge that all risk and responsibility rests solely with you.
Use it smart โ own the risk.
---
## 🧬 Dataset Format
Each sample contains:
- `instruction`: Task description for the model
- `input`: EEG signal metrics with patient metadata
- `output`: Simulated clinical explanation
```json
{
"instruction": "Interpret the given EEG values for a patient and explain their mental state.",
"input": "Patient: ID#A7421 | Age: 38 | Date: 2024-10-12 | EEG: Alpha=9.8Hz, Beta=17.2Hz, Theta=4.1Hz, Delta=2.0Hz, Gamma=29.5Hz | Voltage=0.72mV",
"output": "The EEG profile is consistent with relaxed wakefulness. Alpha wave dominance (9.8Hz) suggests the patient is in a calm, eyes-closed resting state. No signs of seizure activity or abnormal slowing are present."
}
```
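Because the `input` field packs the metadata and band readings into one pipe-delimited string, downstream code may want to parse it back into numbers. A minimal sketch, assuming the exact layout shown above:

```python
import re

def parse_eeg_input(s: str) -> dict:
    """Parse a SynapseSet input string into a dict of numeric values.

    Assumes the 'Patient: ... | Age: ... | EEG: Alpha=...Hz, ... | Voltage=...mV'
    layout from the sample above.
    """
    out: dict = {}
    if (m := re.search(r"Age:\s*(\d+)", s)):
        out["age"] = int(m.group(1))
    for band, value in re.findall(r"(\w+)=([\d.]+)Hz", s):
        out[band.lower()] = float(value)   # alpha, beta, theta, delta, gamma
    if (m := re.search(r"Voltage=([\d.]+)mV", s)):
        out["voltage_mv"] = float(m.group(1))
    return out
```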
---
- **Language:** English
- **Tone:** Clinical-style, consistent with the 7enn Labs neutral register.
---
## Feature Comparison
| Feature | SynapseSet-10K | SynapseSet-50K | SynapseSet-100K |
|------------------------|----------------------------|----------------------------|----------------------------|
| Example Capacity | 10,000 | 50,000 | 100,000 |
| Language | Turkish | English | English |
| Neurological Conditions | 16 | 25+ | 50+ |
| EEG Bands | 5 basic bands | 5 basic + 6 sub-bands | 5 basic + 11 sub-bands |
| Data Formats | 4 types | 6 types | 6 types (enhanced) |
| Realism Level | Basic | Intermediate | Clinical-grade |
| Patient Modeling | Simple | Advanced | Comprehensive medical profile |
| Artifact Modeling | None | Basic | Comprehensive (12+ types) |
---
## Licensing & Ethics
- License: [MIT](https://opensource.org/license/mit/)
- You must clearly disclose use of synthetic data
- Not to be used for clinical decision-making
- Use at your own risk; no warranties provided
---
## Citation
If you use **SynapseSet-50K**, please cite:
```bibtex
@misc{7ennlabs2025synapseset,
author = {7enn Labs},
title = {SynapseSet-50K: Synthetic Instruction Dataset for EEG Interpretation},
year = {2025},
url = {https://huggingface.co/datasets/NextGenC/synapse-set-50k},
note = {100% synthetic dataset for BCI/NLP research}
}
```
---
## 🧪 Example Usage
```python
from datasets import load_dataset
dataset = load_dataset("NextGenC/synapse-set-50k")
print(dataset["train"][0])
```
---
## 🧑‍💻 Created by
**7enn Labs**
--- |
HungVu2003/opt-350m_beta_1.0_alpha_0.6_num-company_3_dataset_0_for_gen_14 | HungVu2003 | 2025-05-03T12:49:17Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T12:49:14Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 9087261
num_examples: 12500
download_size: 1921127
dataset_size: 9087261
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Kyleyee/trin_data_tldr_explicit_dataset | Kyleyee | 2025-05-03T12:23:45Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"trl"
] | [] | 2025-05-03T12:23:38Z | null | ---
tags:
- trl
---
# TL;DR Dataset for Preference Learning
## Summary
The TL;DR dataset is a processed version of Reddit posts, specifically curated to train models using the [TRL library](https://github.com/huggingface/trl) for preference learning and Reinforcement Learning from Human Feedback (RLHF) tasks. It leverages the common practice on Reddit where users append "TL;DR" (Too Long; Didn't Read) summaries to lengthy posts, providing a rich source of paired text data for training models to understand and generate concise summaries.
## Data Structure
- **Format**: [Standard](https://huggingface.co/docs/trl/main/dataset_formats#standard)
- **Type**: [Preference](https://huggingface.co/docs/trl/main/dataset_formats#preference)
Columns:
- `"prompt"`: The unabridged Reddit post.
- `"chosen"`: The concise "TL;DR" summary appended by the author.
- `"rejected"`: An alternative summary or response that was not selected.
This structure enables models to learn the relationship between detailed content and its abbreviated form, enhancing their summarization capabilities.
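A quick way to inspect one preference pair (a minimal sketch, assuming a `train` split):

```python
from datasets import load_dataset

ds = load_dataset("Kyleyee/trin_data_tldr_explicit_dataset", split="train")
row = ds[0]
print(row["prompt"][:200], "...")    # the full Reddit post (truncated here)
print("chosen:  ", row["chosen"][:120])
print("rejected:", row["rejected"][:120])
```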
## Generation script
The script used to generate this dataset can be found [here](https://github.com/huggingface/trl/blob/main/examples/datasets/tldr_preference.py).
|
DT4LM/debertav3base_mr_multi_pso_timeDecay | DT4LM | 2025-05-03T12:19:50Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T12:17:00Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 58604
num_examples: 441
download_size: 41872
dataset_size: 58604
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DIaac/m1-1k-tokenized-v3-all-formats-knowledge-enhanced-sft-0503 | DIaac | 2025-05-03T12:11:25Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T12:10:35Z | null | ---
dataset_info:
features:
- name: answer_idx
dtype: int64
- name: source
dtype: string
- name: metadata
dtype: string
- name: prompt
dtype: string
- name: answer_letter
dtype: string
- name: answer_string
dtype: string
- name: reasoning
dtype: string
- name: distilled_answer_string
dtype: string
- name: domain_code
dtype: string
- name: domain_name
dtype: string
- name: text
dtype: string
- name: raw_action_chain
dtype: string
- name: clean_action_chain_json
struct:
- name: reasoning_chain
list:
- name: action
dtype: string
- name: description
dtype: string
- name: step
dtype: int64
- name: clean_action_chain
sequence: string
- name: formatted_action_chain
dtype: string
- name: rebuilt_cot_simple_with_marker
dtype: string
- name: rebuilt_cot_simple_without_marker
dtype: string
- name: rebuilt_cot_json_with_marker
dtype: string
- name: rebuilt_cot_json_without_marker
dtype: string
- name: original_text
dtype: string
- name: action_guided_text
dtype: string
- name: json_guided_text
dtype: string
- name: simple_without_marker_text
dtype: string
- name: simple_with_marker_text
dtype: string
- name: json_without_marker_text
dtype: string
- name: json_with_marker_text
dtype: string
- name: entity_extract_response
dtype: string
- name: extracted_entities
list:
- name: id
dtype: string
- name: name
dtype: string
- name: type
dtype: string
- name: entity_links
list:
- name: links
list:
- name: name
dtype: string
- name: score
dtype: float64
- name: mention
dtype: string
- name: type
dtype: string
- name: unfiltered_triplets
sequence:
sequence: string
- name: filter_response
dtype: string
- name: clean_filtered_triplets_json
struct:
- name: relevant_triplets
list:
- name: explanation
dtype: string
- name: id
dtype: int64
- name: relevance
dtype: string
- name: filtered_triplets
sequence:
sequence: string
- name: knowledge_enhanced_rebuilt_cot_simple_with_marker
dtype: string
- name: knowledge_enhanced_rebuilt_cot_simple_without_marker
dtype: string
- name: knowledge_enhanced_rebuilt_cot_json_with_marker
dtype: string
- name: knowledge_enhanced_rebuilt_cot_json_without_marker
dtype: string
- name: knowledge_enhanced_simple_without_marker_text
dtype: string
- name: knowledge_enhanced_simple_with_marker_text
dtype: string
- name: knowledge_enhanced_json_without_marker_text
dtype: string
- name: knowledge_enhanced_json_with_marker_text
dtype: string
splits:
- name: train
num_bytes: 129433764
num_examples: 1000
download_size: 56096321
dataset_size: 129433764
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
scuti0/extension-test | scuti0 | 2025-05-03T12:10:54Z | 78 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2024-06-29T11:45:39Z | null | ---
license: apache-2.0
---
|
willnorris/cylinder-in-box-hollows | willnorris | 2025-05-03T11:36:48Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-05-03T10:20:58Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 366,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.images.cam1": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam2": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": {
"motors": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll",
"gripper"
]
}
},
"action": {
"dtype": "float32",
"shape": [
6
],
"names": {
"motors": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll",
"gripper"
]
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"next.done": {
"dtype": "bool",
"shape": [
1
]
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
Geralt-Targaryen/Children-zh | Geralt-Targaryen | 2025-05-03T11:36:21Z | 0 | 0 | [
"license:apache-2.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-02T08:13:47Z | null | ---
license: apache-2.0
---
|
dgambettaphd/D_llm2_gen3_WXS_doc1000_synt64_lr1e-04_acm_SYNLAST | dgambettaphd | 2025-05-03T11:28:02Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T11:27:58Z | null | ---
dataset_info:
features:
- name: id_doc
dtype: int64
- name: text
dtype: string
- name: dataset
dtype: string
- name: gen
dtype: int64
- name: synt
dtype: int64
- name: MPP
dtype: float64
splits:
- name: train
num_bytes: 10852981
num_examples: 19000
download_size: 6422251
dataset_size: 10852981
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SciKnowOrg/ontolearner-events | SciKnowOrg | 2025-05-03T11:23:44Z | 0 | 0 | [
"language:en",
"license:mit",
"region:us",
"OntoLearner",
"ontology-learning",
"events"
] | [] | 2025-05-03T11:23:40Z | null |
---
license: mit
language:
- en
tags:
- OntoLearner
- ontology-learning
- events
pretty_name: Agricultural
---
<div>
<img src="https://raw.githubusercontent.com/sciknoworg/OntoLearner/main/images/logo.png" alt="OntoLearner"
style="display: block; margin: 0 auto; width: 500px; height: auto;">
<h1 style="text-align: center; margin-top: 1em;">Events Domain Ontologies</h1>
</div>
## Overview
The events domain encompasses ontologies designed to systematically represent and model occurrences, temporal intervals, and scheduling constructs within a structured framework. This domain is pivotal in knowledge representation as it facilitates the semantic integration and interoperability of event-related data across diverse systems, enabling precise temporal reasoning and efficient management of calendar-based activities. By providing a formalized vocabulary and structure, it enhances the ability to query, share, and analyze event information in a coherent and meaningful manner.
## Ontologies
| Ontology ID | Full Name | Classes | Properties | Last Updated |
|-------------|-----------|---------|------------|--------------|
| Conference | Conference Ontology (Conference) | 42 | 52 | 2016/04/30|
| iCalendar | iCalendar Vocabulary (iCalendar) | 54 | 49 | 2004/04/07|
| LODE | Linking Open Descriptions of Events (LODE) | 1 | 7 | 2020-10-31|
## Dataset Files
Each ontology directory contains the following files:
1. `<ontology_id>.<format>` - The original ontology file
2. `term_typings.json` - Dataset of term to type mappings
3. `taxonomies.json` - Dataset of taxonomic relations
4. `non_taxonomic_relations.json` - Dataset of non-taxonomic relations
5. `<ontology_id>.rst` - Documentation describing the ontology
## Usage
These datasets are intended for ontology learning research and applications.
|
KBayoud/Darija-VLM-GQA-Dataset | KBayoud | 2025-05-03T11:22:27Z | 24 | 3 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-03T22:35:24Z | null | ---
license: apache-2.0
dataset_info:
features:
- name: image
dtype: image
- name: darija_qa
list:
- name: answer
dtype: string
- name: fullAnswer
dtype: string
- name: question
dtype: string
- name: english_qa
list:
- name: answer
dtype: string
- name: fullAnswer
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 152505691.5
num_examples: 3092
download_size: 143831984
dataset_size: 152505691.5
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
Original dataset : [vikhyatk/gqa-val](https://huggingface.co/datasets/vikhyatk/gqa-val) |
kothasuhas/llp-gold-37m-1.5m_T2048.0 | kothasuhas | 2025-05-03T11:12:53Z | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T11:10:34Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: p_log_probs
dtype: float32
- name: q_log_probs
dtype: float32
- name: num_tokens
dtype: float32
- name: log_weight
dtype: float64
splits:
- name: train
num_bytes: 3605804917.0
num_examples: 1500000
download_size: 189296120
dataset_size: 3605804917.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
severo/trending-repos | severo | 2025-05-03T11:04:42Z | 567 | 12 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"croissant"
] | [] | 2023-07-28T13:57:34Z | null | ---
license: apache-2.0
pretty_name: Trending repositories on Hugging Face
size_categories:
- n<1K
configs:
- config_name: models
data_files: "models.csv"
- config_name: datasets
data_files: "datasets.csv"
- config_name: spaces
data_files: "spaces.csv"
tags:
- croissant
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** Sylvain Lesage
### Dataset Summary
This dataset contains the 20 trending repositories of each type (models, datasets, and spaces) on Hugging Face, every day. Each type can be loaded from its own dataset config.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Not relevant.
## Dataset Structure
### Data Instances
The dataset contains three configurations:
- **models**: the history of trending models on Hugging Face
- **datasets**: the history of trending datasets on Hugging Face
- **spaces**: the history of trending spaces on Hugging Face
### Data Fields
- date (string): the date of the lookup to the trending repositories
- author (string): id of the repository owner. It can be null.
- id (string): id of the repository
- rank (int64): rank in the trending repositories of its kind (model, dataset, or space). Starts at 1.
- recent_likes (int64): number of likes received lately (last week)
- likes (int64): total number of likes
- month_downloads (int64): number of downloads in the last month. Null for the spaces.
### Data Splits
Each configuration only has one split: `train` that contains all the rows.
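For example, the `models` configuration can be loaded by name and filtered to the most recent snapshot (a minimal sketch, assuming ISO-formatted date strings so lexicographic `max` is chronological):

```python
from datasets import load_dataset

models = load_dataset("severo/trending-repos", "models", split="train")
latest_date = max(models["date"])                       # most recent lookup date
snapshot = models.filter(lambda r: r["date"] == latest_date).sort("rank")
for row in snapshot:
    print(row["rank"], row["id"], row["recent_likes"])
```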
## Dataset Creation
### Curation Rationale
The dataset is updated daily through a cron job that calls the `https://huggingface.co/api/trending?type=${repoType}&limit=20` endpoint for each repository type (model, dataset, space). The script runs in an [Observable](https://observablehq.com/@huggingface) notebook, and the files are uploaded using the [huggingface.js](https://github.com/huggingface/huggingface.js) library.
### Source Data
#### Initial Data Collection and Normalization
Not relevant.
#### Who are the source language producers?
Not relevant.
### Annotations
#### Annotation process
Not relevant.
#### Who are the annotators?
Not relevant.
### Personal and Sensitive Information
Only public repositories are included in the trending repositories.
## Considerations for Using the Data
### Social Impact of Dataset
Not relevant.
### Discussion of Biases
The trending repositories reflect the likes given by Hugging Face users in the last week. Any bias that applies to the users can be reflected in this dataset. As a vanity metric, some users might also be tempted to generate fake likes.
### Other Known Limitations
Not relevant.
## Additional Information
### Dataset Curators
Sylvain Lesage, Hugging Face
### Licensing Information
Apache License 2.0
### Citation Information
Not relevant.
### Contributions
Not relevant.
|
MBZUAI-IFM/chess_final | MBZUAI-IFM | 2025-05-03T10:36:48Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-02T21:21:39Z | null | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: index
dtype: int64
- name: question
dtype: string
- name: cot
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 534859
num_examples: 49
download_size: 257991
dataset_size: 534859
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|