datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card |
---|---|---|---|---|---|---|---|---|---|
parlange/dark-energy-survey-supernova | parlange | 2025-04-17T05:16:35Z | 13,843 | 1 | [
"license:cc",
"region:us"
] | [] | 2025-03-16T08:00:01Z | 0 | ---
license: cc
homepage: https://portal.nersc.gov/project/dessn/autoscan/
datasets:
- name: autoscan
---
# autoscan
## Dataset Summary
This dataset provides the training data for the autoscan algorithm described in Goldstein et al. (2015) for automated transient identification in the Dark Energy Survey Supernova program (DES-SN).
It includes both feature measurements and postage stamp images for 898,963 detections.
For more details, visit the [autoscan Project Homepage](https://portal.nersc.gov/project/dessn/autoscan/).
## Dataset Description
The dataset was organized into two primary components:
1. **Features**
- **File:** `autoscan_features.3.csv` (440MB)
- **Content:** Contains class labels and 38 features (as detailed in Table 2 of Goldstein et al. 2015) computed for each detection. The file begins with a header that describes its structure.
2. **Images**
- **Content:** Postage stamp images in both FITS and GIF formats.
- **Organization:** The images were originally divided into 11 chunks provided as tar archives (as on the autoscan website).
- **File Sizes:**
- Chunk 0: `stamps_0.tar` (5.6GB)
- Chunk 1: `stamps_1.tar` (5.6GB)
- Chunk 2: `stamps_2.tar` (5.6GB)
- Chunk 3: `stamps_3.tar` (5.6GB)
- Chunk 4: `stamps_4.tar` (5.6GB)
- Chunk 5: `stamps_5.tar` (5.6GB)
- Chunk 6: `stamps_6.tar` (5.6GB)
- Chunk 7: `stamps_7.tar` (5.6GB)
- Chunk 8: `stamps_8.tar` (5.6GB)
- Chunk 9: `stamps_9.tar` (5.6GB)
- Chunk 10: `stamps_10.tar` (254MB)
The uncompressed images are reorganized into three main directories: `template`, `search`, and `difference`. Each of these directories further contains subfolders for `bogus` and `real` detections.
## Data Format for Binary Classification
- **Features:** CSV format with headers explaining the 38 features and class labels.
- **Images:** Once uncompressed, the tar archives contain image triplets organized into `template`, `search`, and `difference` folders with `bogus` and `real` class subdirectories; a minimal loading sketch follows the directory layout below.
- `autoscan_training_data.zip`
  - `template/`
    - `bogus/`
    - `real/`
  - `search/`
    - `bogus/`
    - `real/`
  - `difference/`
    - `bogus/`
    - `real/`
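A minimal loading sketch (not part of the original card): it assumes the feature CSV sits at the repository root and that `autoscan_training_data.zip` has been downloaded and extracted locally into the layout shown above.
```python
# Sketch: fetch and read the feature table, then list a few extracted images.
import glob

import pandas as pd
from huggingface_hub import hf_hub_download

csv_path = hf_hub_download(
    repo_id="parlange/dark-energy-survey-supernova",
    filename="autoscan_features.3.csv",  # assumed location within the dataset repo
    repo_type="dataset",
)
# If the file starts with commented description lines, pass comment="#".
features = pd.read_csv(csv_path)  # class labels + 38 features per detection
print(features.shape)

# Assuming the zip was extracted to ./autoscan_training_data/
print(glob.glob("autoscan_training_data/template/real/*")[:5])
```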
<img src="autoscan.png" alt="autoscan" width="900">
## Citation
- D. A. Goldstein, et al. 2015 "Automated Transient Identification in the Dark Energy Survey" AJ (accepted). |
Thanarit/Thai-Voice-Test-Viewer-Fix | Thanarit | 2025-06-02T15:16:34Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-02T15:11:16Z | 0 | ---
dataset_info:
features:
- name: ID
dtype: string
- name: speaker_id
dtype: string
- name: Language
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcript
dtype: string
- name: length
dtype: float32
- name: dataset_name
dtype: string
- name: confidence_score
dtype: float64
splits:
- name: train
num_examples: 120
download_size: 0
dataset_size: 0
configs:
- config_name: default
data_files:
- split: train
path: data/train/*.parquet
---
# Thanarit/Thai-Voice
Combined Thai audio dataset from multiple sources
## Dataset Details
- **Total samples**: 120
- **Total duration**: 0.13 hours
- **Language**: Thai (th)
- **Audio format**: 16kHz mono WAV
- **Volume normalization**: -20dB
## Sources
Processed 1 dataset in streaming mode
## Source Datasets
1. **GigaSpeech2**: Large-scale multilingual speech corpus
## Usage
```python
from datasets import load_dataset
# Load with streaming to avoid downloading everything
dataset = load_dataset("Thanarit/Thai-Voice-Test-Viewer-Fix", streaming=True)
# Iterate through samples
for sample in dataset['train']:
    print(sample['ID'], sample['transcript'][:50])
    # Process audio: sample['audio']
    break
```
## Schema
- `ID`: Unique identifier (S1, S2, S3, ...)
- `speaker_id`: Speaker identifier (SPK_00001, SPK_00002, ...)
- `Language`: Language code (always "th" for Thai)
- `audio`: Audio data with 16kHz sampling rate
- `transcript`: Text transcript of the audio
- `length`: Duration in seconds
- `dataset_name`: Source dataset name (e.g., "GigaSpeech2", "ProcessedVoiceTH", "MozillaCommonVoice")
- `confidence_score`: Confidence score of the transcript (0.0-1.0)
- 1.0: Original transcript from source dataset
- <1.0: STT-generated transcript
- 0.0: Fallback transcript (e.g., [NO_TRANSCRIPT])
## Processing Details
This dataset was created using streaming processing to handle large-scale data without requiring full downloads.
Audio has been standardized to 16kHz mono with -20dB volume normalization.
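As an additional illustration (not part of the original card), the `confidence_score` field described above can be used to keep only transcripts that came directly from the source dataset:
```python
# Illustrative sketch: stream the dataset and keep original-transcript samples only.
from datasets import load_dataset

dataset = load_dataset("Thanarit/Thai-Voice-Test-Viewer-Fix", streaming=True)
original_only = dataset["train"].filter(lambda s: s["confidence_score"] == 1.0)

for sample in original_only:
    print(sample["ID"], sample["confidence_score"])
    break
```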
|
Toumimohameddhia/trocr1-medicaments | Toumimohameddhia | 2025-03-01T12:59:10Z | 16 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-25T09:18:21Z | 0 | ---
dataset_info:
features:
- name: image_path
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 236295.16669417397
num_examples: 5453
- name: test
num_bytes: 26259.833305826043
num_examples: 606
download_size: 115868
dataset_size: 262555.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
gigant/tib_oreo_1k | gigant | 2024-11-30T16:03:07Z | 8 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-30T16:02:59Z | 0 | ---
dataset_info:
features:
- name: doi
dtype: string
- name: title
dtype: string
- name: url
dtype: string
- name: video_url
dtype: string
- name: license
dtype: string
- name: subject
dtype: string
- name: genre
dtype: string
- name: release_year
dtype: string
- name: author
dtype: string
- name: contributors
dtype: string
- name: abstract
dtype: string
- name: transcript
dtype: string
- name: transcript_segments
sequence:
- name: id
dtype: int32
- name: seek
dtype: int32
- name: start
dtype: float32
- name: end
dtype: float32
- name: text
dtype: string
- name: tokens
sequence: int32
- name: temperature
dtype: float32
- name: avg_logprob
dtype: float32
- name: compression_ratio
dtype: float32
- name: no_speech_prob
dtype: float32
- name: keyframes
sequence:
- name: slide
dtype: string
- name: frames
sequence: int32
- name: timestamp
sequence: float32
- name: language
dtype: string
- name: split_text
sequence: string
- name: labels
sequence: float64
- name: oreo_extoracle
dtype: string
splits:
- name: train
num_bytes: 149058229
num_examples: 1000
download_size: 72700586
dataset_size: 149058229
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
iambestfeed/synthetic-wiki | iambestfeed | 2025-06-23T13:49:53Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-23T13:49:37Z | 0 | ---
dataset_info:
features:
- name: anchor
dtype: string
- name: positive
dtype: string
- name: processed_generate_query
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 181851544
num_examples: 98886
download_size: 92825566
dataset_size: 181851544
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
doxa-friend/cot-chartqa_train | doxa-friend | 2025-06-10T08:45:48Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-10T08:45:46Z | 0 | ---
dataset_info:
features:
- name: messages
list:
- name: content
list:
- name: index
dtype: int64
- name: text
dtype: string
- name: type
dtype: string
- name: role
dtype: string
- name: images
sequence: string
splits:
- name: train
num_bytes: 15313765
num_examples: 22561
download_size: 4869397
dataset_size: 15313765
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Forcewithme/mkldsajsers | Forcewithme | 2024-12-04T15:53:47Z | 19 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2024-12-04T15:28:06Z | 0 | ---
license: apache-2.0
---
|
antoine-444/ai2_arc_dataset | antoine-444 | 2025-05-30T20:44:12Z | 52 | 0 | [
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-30T20:41:03Z | 0 | ---
license: mit
dataset_info:
features:
- name: dataset
dtype: string
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
- name: rationale
dtype: 'null'
- name: answer
dtype: string
splits:
- name: train
num_bytes: 330156
num_examples: 1119
- name: test
num_bytes: 355036
num_examples: 1172
- name: validation
num_bytes: 91307
num_examples: 299
download_size: 436525
dataset_size: 776499
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
KcRiD/so100_test | KcRiD | 2025-04-30T12:26:52Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-04-30T12:26:45Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 2,
"total_frames": 1040,
"total_tasks": 1,
"total_videos": 4,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
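As a rough illustration (not part of the original card), the tabular episode data declared in the `configs` section above can be read with the `datasets` library; the camera streams referenced in `info.json` are stored separately as MP4 videos.
```python
# Minimal sketch: load the parquet episode data for this repository.
# Column names are expected to follow the feature schema in meta/info.json
# (action, observation.state, timestamp, frame_index, episode_index, ...).
from datasets import load_dataset

ds = load_dataset("KcRiD/so100_test", split="train")
print(ds.column_names)
print(ds[0]["action"])  # 6-dimensional action vector
```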
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
french-datasets/michsethowusu_french-malagasy_sentence-pairs | french-datasets | 2025-05-20T10:02:33Z | 0 | 0 | [
"language:fra",
"language:mlg",
"region:us"
] | [] | 2025-05-20T10:01:57Z | 0 | ---
language:
- fra
- mlg
viewer: false
---
This repository is empty; it was created to improve the discoverability of the dataset [michsethowusu/french-malagasy_sentence-pairs](https://huggingface.co/datasets/michsethowusu/french-malagasy_sentence-pairs). |
dirganmdcp/yfinance_Indonesia_Stock_Exchange | dirganmdcp | 2025-03-12T02:05:32Z | 51 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-03-12T02:05:32Z | 0 | ---
license: apache-2.0
---
|
haorandai/Dec30_Clean_Bicycle_UF_10samples_5constraints | haorandai | 2024-12-31T07:17:24Z | 41 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-31T07:17:22Z | 0 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1635746.0
num_examples: 15
download_size: 613972
dataset_size: 1635746.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jkazdan/gemma-2-9b-it-refusal-attack-gen3-10-HeX-PHI | jkazdan | 2025-01-06T05:22:24Z | 59 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-06T05:22:23Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 433069
num_examples: 300
download_size: 230663
dataset_size: 433069
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
fabian-w/emea_en_ru_synthetic | fabian-w | 2025-04-25T21:53:56Z | 41 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-25T21:53:51Z | 0 | ---
dataset_info:
features:
- name: en
dtype: string
- name: ru
dtype: string
splits:
- name: train
num_bytes: 19587528
num_examples: 91991
download_size: 7009273
dataset_size: 19587528
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mteb/CUADLiquidatedDamagesLegalBenchClassification | mteb | 2025-05-06T11:54:48Z | 0 | 0 | [
"task_categories:text-classification",
"annotations_creators:expert-annotated",
"multilinguality:monolingual",
"language:eng",
"license:cc-by-4.0",
"modality:text",
"arxiv:2308.11462",
"arxiv:2103.06268",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-classification"
] | 2025-05-06T11:54:44Z | 0 | ---
annotations_creators:
- expert-annotated
language:
- eng
license: cc-by-4.0
multilinguality: monolingual
task_categories:
- text-classification
task_ids: []
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 3111
num_examples: 6
- name: test
num_bytes: 80029
num_examples: 220
download_size: 47133
dataset_size: 83140
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">CUADLiquidatedDamagesLegalBenchClassification</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
This task was constructed from the CUAD dataset. It consists of determining if the clause awards either party liquidated damages for breach or a fee upon the termination of a contract (termination fee).
| | |
|---------------|---------------------------------------------|
| Task category | t2c |
| Domains | Legal, Written |
| Reference | https://huggingface.co/datasets/nguha/legalbench |
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["CUADLiquidatedDamagesLegalBenchClassification"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@misc{guha2023legalbench,
archiveprefix = {arXiv},
author = {Neel Guha and Julian Nyarko and Daniel E. Ho and Christopher Ré and Adam Chilton and Aditya Narayana and Alex Chohlas-Wood and Austin Peters and Brandon Waldon and Daniel N. Rockmore and Diego Zambrano and Dmitry Talisman and Enam Hoque and Faiz Surani and Frank Fagan and Galit Sarfaty and Gregory M. Dickinson and Haggai Porat and Jason Hegland and Jessica Wu and Joe Nudell and Joel Niklaus and John Nay and Jonathan H. Choi and Kevin Tobia and Margaret Hagan and Megan Ma and Michael Livermore and Nikon Rasumov-Rahe and Nils Holzenberger and Noam Kolt and Peter Henderson and Sean Rehaag and Sharad Goel and Shang Gao and Spencer Williams and Sunny Gandhi and Tom Zur and Varun Iyer and Zehua Li},
eprint = {2308.11462},
primaryclass = {cs.CL},
title = {LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models},
year = {2023},
}
@article{hendrycks2021cuad,
author = {Hendrycks, Dan and Burns, Collin and Chen, Anya and Ball, Spencer},
journal = {arXiv preprint arXiv:2103.06268},
title = {Cuad: An expert-annotated nlp dataset for legal contract review},
year = {2021},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022}
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("CUADLiquidatedDamagesLegalBenchClassification")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"test": {
"num_samples": 220,
"number_of_characters": 77388,
"number_texts_intersect_with_train": 0,
"min_text_length": 70,
"average_text_length": 351.76363636363635,
"max_text_length": 2526,
"unique_text": 220,
"unique_labels": 2,
"labels": {
"1": {
"count": 110
},
"0": {
"count": 110
}
}
},
"train": {
"num_samples": 6,
"number_of_characters": 3039,
"number_texts_intersect_with_train": null,
"min_text_length": 163,
"average_text_length": 506.5,
"max_text_length": 681,
"unique_text": 6,
"unique_labels": 2,
"labels": {
"1": {
"count": 3
},
"0": {
"count": 3
}
}
}
}
```
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
qfq/train_rawcot_o1_preview_noanswer | qfq | 2024-11-28T03:07:04Z | 19 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-28T02:56:46Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
- name: solution
dtype: string
- name: attempt
dtype: string
- name: cot_type
dtype: string
- name: source_type
dtype: string
- name: metadata
dtype: string
- name: cot
sequence: string
splits:
- name: train
num_bytes: 8472226
num_examples: 1146
download_size: 3811067
dataset_size: 8472226
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Yuyeong/rw_pubmed_mdlr_2_mask | Yuyeong | 2025-04-22T11:10:07Z | 15 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-22T10:40:12Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
- name: group_idx
dtype: int64
- name: node_idx
dtype: int64
splits:
- name: train_seed0
num_bytes: 2627952495.9067807
num_examples: 1577300
- name: validation_seed0
num_bytes: 3285565.4104660954
num_examples: 1972
- name: test_seed0
num_bytes: 3285565.4104660954
num_examples: 1972
- name: train_seed1
num_bytes: 2627952495.9067807
num_examples: 1577300
- name: validation_seed1
num_bytes: 3285565.4104660954
num_examples: 1972
- name: test_seed1
num_bytes: 3285565.4104660954
num_examples: 1972
- name: train_seed2
num_bytes: 2627952495.9067807
num_examples: 1577300
- name: validation_seed2
num_bytes: 3285565.4104660954
num_examples: 1972
- name: test_seed2
num_bytes: 3285565.4104660954
num_examples: 1972
- name: train_seed3
num_bytes: 2627952495.9067807
num_examples: 1577300
- name: validation_seed3
num_bytes: 3285565.4104660954
num_examples: 1972
- name: test_seed3
num_bytes: 3285565.4104660954
num_examples: 1972
- name: train_seed4
num_bytes: 2627952495.9067807
num_examples: 1577300
- name: validation_seed4
num_bytes: 3285565.4104660954
num_examples: 1972
- name: test_seed4
num_bytes: 3285565.4104660954
num_examples: 1972
- name: train_seed5
num_bytes: 2627952495.9067807
num_examples: 1577300
- name: validation_seed5
num_bytes: 3285565.4104660954
num_examples: 1972
- name: test_seed5
num_bytes: 3285565.4104660954
num_examples: 1972
- name: train_seed6
num_bytes: 2627952495.9067807
num_examples: 1577300
- name: validation_seed6
num_bytes: 3285565.4104660954
num_examples: 1972
- name: test_seed6
num_bytes: 3285565.4104660954
num_examples: 1972
- name: train_seed7
num_bytes: 2627952495.9067807
num_examples: 1577300
- name: validation_seed7
num_bytes: 3285565.4104660954
num_examples: 1972
- name: test_seed7
num_bytes: 3285565.4104660954
num_examples: 1972
- name: train_seed8
num_bytes: 2627952495.9067807
num_examples: 1577300
- name: validation_seed8
num_bytes: 3285565.4104660954
num_examples: 1972
- name: test_seed8
num_bytes: 3285565.4104660954
num_examples: 1972
- name: train_seed9
num_bytes: 2627952495.9067807
num_examples: 1577300
- name: validation_seed9
num_bytes: 3285565.4104660954
num_examples: 1972
- name: test_seed9
num_bytes: 3285565.4104660954
num_examples: 1972
download_size: 13234228718
dataset_size: 26345236267.277115
configs:
- config_name: default
data_files:
- split: train_seed0
path: data/train_seed0-*
- split: validation_seed0
path: data/validation_seed0-*
- split: test_seed0
path: data/test_seed0-*
- split: train_seed1
path: data/train_seed1-*
- split: validation_seed1
path: data/validation_seed1-*
- split: test_seed1
path: data/test_seed1-*
- split: train_seed2
path: data/train_seed2-*
- split: validation_seed2
path: data/validation_seed2-*
- split: test_seed2
path: data/test_seed2-*
- split: train_seed3
path: data/train_seed3-*
- split: validation_seed3
path: data/validation_seed3-*
- split: test_seed3
path: data/test_seed3-*
- split: train_seed4
path: data/train_seed4-*
- split: validation_seed4
path: data/validation_seed4-*
- split: test_seed4
path: data/test_seed4-*
- split: train_seed5
path: data/train_seed5-*
- split: validation_seed5
path: data/validation_seed5-*
- split: test_seed5
path: data/test_seed5-*
- split: train_seed6
path: data/train_seed6-*
- split: validation_seed6
path: data/validation_seed6-*
- split: test_seed6
path: data/test_seed6-*
- split: train_seed7
path: data/train_seed7-*
- split: validation_seed7
path: data/validation_seed7-*
- split: test_seed7
path: data/test_seed7-*
- split: train_seed8
path: data/train_seed8-*
- split: validation_seed8
path: data/validation_seed8-*
- split: test_seed8
path: data/test_seed8-*
- split: train_seed9
path: data/train_seed9-*
- split: validation_seed9
path: data/validation_seed9-*
- split: test_seed9
path: data/test_seed9-*
---
|
lt-s/so100_train_move_red_block_tray_to_red_dish_flip | lt-s | 2025-06-10T07:29:51Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"train"
] | [
"robotics"
] | 2025-06-10T02:48:28Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- LeRobot
- train
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 20,
"total_frames": 4846,
"total_tasks": 1,
"total_videos": 40,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:20"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.center_cam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.right_cam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
Thierrix/filtered_rag_docs | Thierrix | 2025-06-03T14:08:40Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T14:08:39Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: metadata
dtype: string
splits:
- name: train
num_bytes: 54813
num_examples: 100
download_size: 33947
dataset_size: 54813
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
21uyennt/bahnar | 21uyennt | 2025-05-20T10:39:00Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-20T10:38:37Z | 0 | ---
dataset_info:
- config_name: en-ba
features:
- name: translation
struct:
- name: ba
dtype: string
- name: en
dtype: string
splits:
- name: train
num_bytes: 4738409
num_examples: 21801
- name: validation
num_bytes: 518867
num_examples: 2423
- name: test
num_bytes: 187503
num_examples: 1000
download_size: 3098061
dataset_size: 5444779
- config_name: en-vi
features:
- name: translation
struct:
- name: en
dtype: string
- name: vi
dtype: string
splits:
- name: train
num_bytes: 3341942
num_examples: 18719
download_size: 2013905
dataset_size: 3341942
configs:
- config_name: en-ba
data_files:
- split: train
path: en-ba/train-*
- split: validation
path: en-ba/validation-*
- split: test
path: en-ba/test-*
- config_name: en-vi
data_files:
- split: train
path: en-vi/train-*
---
|
BurakkTalha/programming-languages-and-frameworks-alpaca | BurakkTalha | 2025-04-14T08:45:29Z | 24 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-14T08:41:19Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 34769479.45720831
num_examples: 148497
- name: test
num_bytes: 3863353.542791687
num_examples: 16500
download_size: 10024776
dataset_size: 38632833.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Charan-1007/new_dataset | Charan-1007 | 2025-02-03T10:30:53Z | 16 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-03T10:30:47Z | 0 | ---
dataset_info:
features:
- name: Question_ID
dtype: int64
- name: Question_text
dtype: string
- name: answer_text
dtype: string
- name: Module
dtype: int64
- name: Assessment_Type
dtype: string
- name: Subject
dtype: string
- name: Question_ALT
dtype: string
- name: Question_URL
dtype: string
- name: Answer_URL
dtype: string
- name: User_ID
dtype: string
splits:
- name: train
num_bytes: 1386271
num_examples: 3695
download_size: 600908
dataset_size: 1386271
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Qipei/eval_act_task_picbrick0 | Qipei | 2025-05-15T15:08:47Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-05-15T15:08:36Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"trossen_subversion": "v1.0",
"robot_type": "trossen_ai_mobile",
"total_episodes": 5,
"total_frames": 923,
"total_tasks": 1,
"total_videos": 15,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:5"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
16
],
"names": [
"linear_vel",
"angular_vel",
"left_joint_0",
"left_joint_1",
"left_joint_2",
"left_joint_3",
"left_joint_4",
"left_joint_5",
"left_joint_6",
"right_joint_0",
"right_joint_1",
"right_joint_2",
"right_joint_3",
"right_joint_4",
"right_joint_5",
"right_joint_6"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
19
],
"names": [
"odom_x",
"odom_y",
"odom_theta",
"linear_vel",
"angular_vel",
"left_joint_0",
"left_joint_1",
"left_joint_2",
"left_joint_3",
"left_joint_4",
"left_joint_5",
"left_joint_6",
"right_joint_0",
"right_joint_1",
"right_joint_2",
"right_joint_3",
"right_joint_4",
"right_joint_5",
"right_joint_6"
]
},
"observation.images.cam_high": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_left_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_right_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
HumanoidTeam/aloha_cube_binary_old_format_v1_test_2 | HumanoidTeam | 2025-02-25T22:37:17Z | 17 | 0 | [
"task_categories:robotics",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-02-25T22:37:10Z | 0 | ---
task_categories:
- robotics
tags:
- LeRobot
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
|
HappyAIUser/ATCgpt-Fixed | HappyAIUser | 2024-12-07T03:51:25Z | 21 | 0 | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:en",
"license:apache-2.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"conversational",
"instruction-tuning"
] | [
"text-generation",
"text2text-generation"
] | 2024-12-07T03:51:21Z | 0 | ---
license: apache-2.0
task_categories:
- text-generation
- text2text-generation
language:
- en
size_categories:
- 1K<n<10K
tags:
- conversational
- instruction-tuning
---
# Dataset Card for ATCgpt-Fixed
This dataset contains instruction-input-output pairs converted to ShareGPT format, designed for instruction tuning and text generation tasks.
## Dataset Description
The dataset consists of carefully curated instruction-input-output pairs, formatted for conversational AI training. Each entry contains:
- An instruction that specifies the task
- An optional input providing context
- A detailed output that addresses the instruction
## Usage
This dataset is particularly suitable for:
- Instruction tuning of language models
- Training conversational AI systems
- Fine-tuning for specific domain knowledge
|
KellinP/filtered_selfplay_skeleton | KellinP | 2025-02-04T11:11:10Z | 24 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-04T11:07:58Z | 0 | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 681488
num_examples: 296
download_size: 319020
dataset_size: 681488
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Celiadraw/text-to-mermaid | Celiadraw | 2024-06-27T14:19:12Z | 58 | 5 | [
"task_categories:text-generation",
"language:en",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation"
] | 2024-06-24T17:04:59Z | 1 | ---
task_categories:
- text-generation
language:
- en
pretty_name: text_to_mermaid
size_categories:
- 10M<n<100M
--- |
electricsheepafrica/Percentage-Of-15-Years-Old-Girls-Received-The-for-African-Countries | electricsheepafrica | 2025-06-21T13:39:27Z | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-21T13:39:25Z | 0 | ---
license: apache-2.0
---
# Percentage of 15 years old girls received the recommended doses of HPV vaccine for African Countries
## Dataset Description
This dataset contains 'Percentage of 15 years old girls received the recommended doses of HPV vaccine' data for all 54 African countries, sourced from the World Health Organization (WHO). The data is structured with years as rows and countries as columns, facilitating time-series analysis.
Missing values have been handled using linear interpolation followed by forward and backward filling to ensure a complete dataset.
## How to Use
You can load the data using pandas:
```python
import pandas as pd
df = pd.read_csv('hf://datasets/electricsheepafrica/Percentage-Of-15-Years-Old-Girls-Received-The-for-African-Countries/percentage_of_15_years_old_girls_received_the_recommended_doses_of_hpv_vaccine.csv')
print(df.head())
```
|
AdleBens/test7 | AdleBens | 2025-02-18T14:28:23Z | 40 | 0 | [
"task_categories:robotics",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"phosphobot",
"so100",
"phospho-dk1"
] | [
"robotics"
] | 2025-02-18T14:27:43Z | 0 |
---
tags:
- phosphobot
- so100
- phospho-dk1
task_categories:
- robotics
---
# test7
**This dataset was generated using a [phospho dev kit](https://robots.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
|
FrancophonIA/NTEU_French-Estonian | FrancophonIA | 2025-03-29T22:56:54Z | 10 | 0 | [
"task_categories:translation",
"language:est",
"language:fra",
"region:us"
] | [
"translation"
] | 2024-11-17T21:54:00Z | 0 | ---
language:
- est
- fra
multilinguality:
- multilingual
task_categories:
- translation
viewer: false
---
> [!NOTE]
> Dataset origin: https://live.european-language-grid.eu/catalogue/corpus/7219/
## Description
This is a compilation of parallel corpus resources used in building Machine Translation engines in the NTEU project (Action number: 2018-EU-IA-0051). The data in this resource are compiled in two TMX files, two tiers grouped by data-source reliability. Tier A -- data originating from human-edited sources, translation memories and the like. Tier B -- data created by automatically aligning parallel data from miscellaneous web and parallel document sources.
The subsequent sections list all the sources contained in this parallel corpus.
Tier A:
-------
A parallel corpus "EAC-TM" collected for reuse from "Language Technology Resources page of EU Science Hub", https://ec.europa.eu/jrc/en/language-technologies/eac-translation-memory, licensed under CC-BY-4.0 license license and in accordance with EC Legal Disclaimer https://ec.europa.eu/info/legal-notice_en.
A parallel corpus "DGT-TM" collected for reuse from "Language Technology Resources page of EU Science Hub", https://ec.europa.eu/jrc/en/language-technologies/dgt-translation-memory. The DGT-TM database is the exclusive property of the European Commission. The Commission cedes its non-exclusive rights free of charge and world-wide for the entire duration of the protection of those rights to the re-user, for all kinds of use which comply with the conditions laid down in the Commission Decision of 12 December 2011 on the re-use of Commission documents, published in Official Journal of the European Union L330 of 14 December 2011, pages 39 to 42. Any re-use of the database or of the structured elements contained in it is required to be identified by the re-user, who is under an obligation to state the source of the documents used: the website address, the date of the latest update and the fact that the European Commission retains ownership of the data.
A parallel corpus "ECDC-TM" as published under a non-standard license, Free reuse with attribution.
A parallel corpus "DCEP" as published under a non-standard license, reuse permitted, attribution required.
A glossary of terms corpus "IATE Terminology" collected for reuse from "IATE web site", as published under a non-standard license, reuse permitted, attribution required.
A parallel corpus "EU Constitution" collected for reuse from "OPUS web site", http://opus.nlpl.eu/EUconst.php as Public domain from OPUS.
A parallel corpus "JRC-Acquis" collected for reuse from "OPUS web site", http://opus.nlpl.eu/JRC-Acquis.php as Public domain from OPUS.
Tier B:
-------
A parallel corpus "OPUS - ECB" as Public domain from OPUS.
A parallel corpus "EU-Bookshop" collected for reuse from "OPUS web site", https://opus.nlpl.eu/EUbookshop.php as Public domain from OPUS.
A parallel corpus "Europarl v6" collected for reuse from "STATMT web site", https://www.statmt.org/europarl/archives.html#v6 as published as public domain (StatMT: "We are not aware of any copyright restrictions of the material").
A parallel corpus "OPUS - EMEA" collected for reuse from "OPUS web site", https://opus.nlpl.eu/ as published as public domain, no license assigned. Original data originating from "European Medicines Agency", https://www.ema.europa.eu/en.
## Citation
```
Compilation of Estonian-French parallel corpora resources used for training of NTEU Machine Translation engines. (2021). Version unspecified. [Dataset (Text corpus)]. Source: European Language Grid. https://live.european-language-grid.eu/catalogue/corpus/7219
```
|
enip2473/environmental-dialogue | enip2473 | 2024-12-26T05:13:21Z | 57 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-26T03:31:56Z | 0 | ---
dataset_info:
features:
- name: user_text
dtype: string
- name: machine_text
dtype: string
- name: user_voice
dtype: audio
- name: machine_voice
dtype: audio
splits:
- name: train
num_bytes: 56335863564.0
num_examples: 40000
download_size: 46160058432
dataset_size: 56335863564.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
GitBag/llama3-ultrafeedback-reasoning-iter_3-1731243878-armo-tokenized | GitBag | 2024-11-13T05:34:45Z | 17 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-13T05:34:25Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: augmented_prompt
dtype: string
- name: shared_thought
sequence: string
- name: chosen_ts
sequence: string
- name: chosen_ts_reward
dtype: float64
- name: reject_ts
sequence: string
- name: reject_ts_reward
dtype: float64
- name: augmented_prompt_llama
dtype: string
- name: augmented_prompt_llama_token
sequence: int64
- name: chosen_ts_llama
dtype: string
- name: chosen_ts_llama_token
sequence: int64
- name: reject_ts_llama
dtype: string
- name: reject_ts_llama_token
sequence: int64
splits:
- name: train
num_bytes: 2496893646.5009623
num_examples: 56154
- name: test
num_bytes: 44465107.49903769
num_examples: 1000
download_size: 403228395
dataset_size: 2541358754.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
test-gen/humaneval_qwen-7b-random_t0.0_n1_generated_tests_updated | test-gen | 2025-05-23T02:41:37Z | 27 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-15T02:40:25Z | 0 | ---
dataset_info:
features:
- name: task_id
dtype: string
- name: prompt
dtype: string
- name: canonical_solution
dtype: string
- name: test
dtype: string
- name: entry_point
dtype: string
- name: verification_info
struct:
- name: language
dtype: string
- name: test_cases
sequence: string
- name: new_verification_info
struct:
- name: language
dtype: string
- name: test_cases
sequence: string
splits:
- name: test
num_bytes: 239635
num_examples: 164
download_size: 100525
dataset_size: 239635
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
mlfoundations-dev/numina_filtered | mlfoundations-dev | 2025-01-23T21:22:29Z | 13 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-23T16:49:27Z | 0 | ---
dataset_info:
features:
- name: source
dtype: string
- name: problem
dtype: string
- name: solution
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 439887997.359095
num_examples: 151522
download_size: 340049108
dataset_size: 439887997.359095
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
MohamedAshraf701/query-response-dataset | MohamedAshraf701 | 2024-10-22T19:47:52Z | 17 | 2 | [
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"question&Answer"
] | [] | 2024-10-22T17:29:59Z | 0 | ---
license: apache-2.0
dataset_info:
features:
- name: Query
dtype: string
- name: ShortResponse
dtype: string
- name: DetailedResponse
dtype: string
splits:
- name: train
num_bytes: 181752
num_examples: 1426
download_size: 85206
dataset_size: 181752
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
language:
- en
tags:
- question&Answer
pretty_name: Question & Answer
size_categories:
- 1K<n<10K
---
# Query Response Dataset
## Overview
The **Query Response Dataset** is designed to provide a rich set of question-answer pairs, ideal for training AI models in natural language processing (NLP) tasks. This dataset contains structured query-response data that can be utilized for various applications, including chatbots, virtual assistants, and customer support systems.
### Dataset Details
- **Number of Entries**: 1.5K
- **Fields**:
- **Query**: The question or inquiry made by a user.
- **ShortResponse**: A concise answer to the query.
- **DetailedResponse**: An expanded explanation or answer to the query.
## Purpose
This dataset is intended for researchers and developers who are building applications that require understanding and generating human-like responses to queries. It can be used to improve the performance of conversational AI systems and enhance user interactions.
## Features
- **Diverse Questions**: The dataset covers a wide range of topics, ensuring that models trained on this data can handle various user inquiries.
- **Structured Format**: The dataset is organized in a clear, structured format, making it easy to ingest and use in machine learning workflows.
## Usage
You can load this dataset using the `datasets` library from Hugging Face:
```python
from datasets import load_dataset
dataset = load_dataset("MohamedAshraf701/query-response-dataset")
```
### Example Usage
Here is a simple example of how to access data from the dataset:
```python
# Accessing the first entry
first_entry = dataset['train'][0]
print("Query:", first_entry['Query'])
print("Short Response:", first_entry['ShortResponse'])
print("Detailed Response:", first_entry['DetailedResponse'])
```
## Contributions
Contributions to this dataset are welcome! If you have additional questions or response pairs to add, feel free to open an issue or submit a pull request.
## License
This dataset is licensed under the [MIT License](LICENSE).
## Contact
For any inquiries or support, please reach out to [[email protected]](mailto:[email protected]). |
timpal0l/swedish_reviews | timpal0l | 2024-07-16T10:57:21Z | 64 | 5 | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:sv",
"license:unknown",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 0 | ---
annotations_creators:
- found
language_creators:
- found
language:
- sv
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: Swedish Reviews
dataset_info:
config_name: plain_text
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: test
num_bytes: 6296529
num_examples: 20697
- name: validation
num_bytes: 6359215
num_examples: 20696
- name: train
num_bytes: 18842863
num_examples: 62089
download_size: 19622770
dataset_size: 31498607
configs:
- config_name: plain_text
data_files:
- split: test
path: plain_text/test-*
- split: validation
path: plain_text/validation-*
- split: train
path: plain_text/train-*
default: true
---
# Dataset Card for Swedish Reviews
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [swedish_reviews homepage](https://github.com/timpal0l/swedish-sentiment)
- **Repository:** [swedish_reviews repository](https://github.com/timpal0l/swedish-sentiment)
- **Point of Contact:** [Tim Isbister](mailto:[email protected])
### Dataset Summary
The dataset is scraped from various Swedish websites where reviews are present. The dataset consists of 103 482 samples split between `train`, `valid` and `test`. It is a sample of the full dataset, balanced to the minority class (negative). The original data dump was heavily skewed towards positive samples, with a 95/5 ratio.
### Supported Tasks and Leaderboards
This dataset can be used to evaluate sentiment classification on Swedish.
### Languages
The text in the dataset is in Swedish.
## Dataset Structure
### Data Instances
What a sample looks like:
```
{
'text': 'Jag tycker huggingface är ett grymt project!',
'label': 1,
}
```
### Data Fields
- `text`: A text where the sentiment expression is present.
- `label`: a int representing the label `0`for negative and `1`for positive.
### Data Splits
The data is split into a training, validation and test set. The final split sizes are as follows:
| Train | Valid | Test |
| ------ | ----- | ---- |
| 62089 | 20696 | 20697 |
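A minimal loading sketch (not part of the original card), using the default `plain_text` configuration:
```python
from datasets import load_dataset

# Loads the default "plain_text" configuration with train/validation/test splits.
ds = load_dataset("timpal0l/swedish_reviews")
print(ds)
print(ds["train"][0])  # {'text': ..., 'label': 0 (negative) or 1 (positive)}
```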
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
Various Swedish websites with product reviews.
#### Initial Data Collection and Normalization
#### Who are the source language producers?
Swedish
### Annotations
[More Information Needed]
#### Annotation process
Automatically annotated based on user reviews on a scale of 1-5, where 1-2 is considered `negative` and 4-5 `positive`; 3 is skipped as it tends to be more neutral.
#### Who are the annotators?
The users who have been using the products.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
[More Information Needed]
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
[More Information Needed]
### Dataset Curators
The corpus was scraped by @timpal0l
### Licensing Information
Research only.
### Citation Information
No paper exists currently.
### Contributions
Thanks to [@timpal0l](https://github.com/timpal0l) for adding this dataset. |
unai-gurbindo/neurips-weather-dataset | unai-gurbindo | 2025-05-14T00:04:49Z | 456 | 0 | [
"task_categories:object-detection",
"region:us",
"self_driving",
"data_augmentation",
"object_detection"
] | [
"object-detection"
] | 2025-05-04T16:35:29Z | 0 | ---
task_categories:
- object-detection
tags:
- self_driving
- data_augmentation
- object_detection
pretty_name: D
configs:
# === Real-World Data Framework ===
- config_name: real_eval_default
data_files: "Real-World Data Framework/Images/Evaluation Set Images/default/*"
- config_name: real_eval_fog
data_files: "Real-World Data Framework/Images/Evaluation Set Images/fog/*"
- config_name: real_eval_night
data_files: "Real-World Data Framework/Images/Evaluation Set Images/night/*"
- config_name: real_eval_rain
data_files: "Real-World Data Framework/Images/Evaluation Set Images/rain/*"
- config_name: real_eval_snow
data_files: "Real-World Data Framework/Images/Evaluation Set Images/snow/*"
- config_name: real_train_default
data_files: "Real-World Data Framework/Images/Trainable Set Images/default/*"
- config_name: real_train_fog
data_files: "Real-World Data Framework/Images/Trainable Set Images/fog/*"
- config_name: real_train_night
data_files: "Real-World Data Framework/Images/Trainable Set Images/night/*"
- config_name: real_train_rain
data_files: "Real-World Data Framework/Images/Trainable Set Images/rain/*"
- config_name: real_train_snow
data_files: "Real-World Data Framework/Images/Trainable Set Images/snow/*"
# === Simulated Framework ===
- config_name: simulated_eval_default
data_files: "Simulated Framework/Images/Evaluation Set Images/default/*"
- config_name: simulated_eval_fog
data_files: "Simulated Framework/Images/Evaluation Set Images/fog/*"
- config_name: simulated_eval_night
data_files: "Simulated Framework/Images/Evaluation Set Images/night/*"
- config_name: simulated_eval_rain
data_files: "Simulated Framework/Images/Evaluation Set Images/rain/*"
- config_name: simulated_eval_snow
data_files: "Simulated Framework/Images/Evaluation Set Images/snow/*"
- config_name: simulated_train_default
data_files: "Simulated Framework/Images/Trainable Set Images/default/*"
- config_name: simulated_train_fog
data_files: "Simulated Framework/Images/Trainable Set Images/fog/*"
- config_name: simulated_train_night
data_files: "Simulated Framework/Images/Trainable Set Images/night/*"
- config_name: simulated_train_rain
data_files: "Simulated Framework/Images/Trainable Set Images/rain/*"
- config_name: simulated_train_snow
data_files: "Simulated Framework/Images/Trainable Set Images/snow/*"
---
# NeurIPS Weather Dataset
## Dataset Description and Motivation
The **NeurIPS Weather Dataset** is a benchmark designed to develop and evaluate robust object detection models for autonomous driving under **adverse weather conditions**. Safety-critical systems like self-driving cars often struggle when a model trained in clear weather is deployed in drastically different conditions (fog, rain, snow, or night), due to **weather-induced domain shifts** that degrade detector performance. This dataset addresses that challenge by providing paired real and simulated image data across a variety of difficult weather scenarios. The goal is to facilitate research on domain adaptation and generalization, allowing models to learn invariances to weather changes and maintain high detection accuracy even in poor visibility or unusual conditions.
Key motivations and features include:
* **Robust Object Detection in Adverse Conditions:** The dataset was introduced in an IJCNN 2024 paper on all-weather object detection. It serves as a testbed for algorithms aimed at closing the performance gap between normal and harsh conditions. Researchers can quantify how much detection accuracy drops from clear weather to foggy, rainy, night-time, or snowy scenes and devise methods to mitigate this drop (e.g. data augmentation, domain adaptation, image enhancement, etc.).
* **Real-World + Simulated Data Blend:** Collecting large-scale real images for every extreme weather is often impractical or unsafe (e.g. heavy rain or snow storms are rare and hazardous to capture). Therefore, this dataset leverages both real photographs and high-fidelity simulation. Real driving scenes (sourced from the BDD100K dataset) are augmented with synthetic weather effects, and complementary simulated scenes from the CARLA simulator provide fully controllable weather scenarios. This combination offers a rich and diverse set of conditions while ensuring ground-truth annotations are available for all images.
* **Domain Shift Benchmark:** By organizing data into different weather domains, the dataset enables controlled experiments on domain shift. For example, one can train a detector on one domain (say clear weather) and test on another (like fog or night) to evaluate generalization. The provided data splits (explained below) include standard **baseline splits** to replicate such scenarios, as well as configurations for **augmentation experiments** where mixed-weather training is used to improve robustness. Overall, the dataset is meant to drive progress in making object detectors invariant to real-world weather changes.
## Dataset Structure
**Figure: Dataset directory structure** – The NeurIPS Weather Dataset is structured into two main parts (or "frameworks"): a **Real-World Data Framework** and a **Simulated Data Framework**. Each framework contains subfolders for images under specific weather conditions, and each of those is further divided into a **Trainable Set** (training images) and an **Evaluation Set** (validation/testing images). All images come with corresponding bounding box annotations for objects of interest (vehicles, pedestrians, etc.), stored separately in a *Bounding Box Information* directory. The high-level organization is outlined below:
* **Real-World Data Framework:** This portion consists of real driving images (originally from the BDD100K dataset, a large-scale driving database) that have been *augmented* to simulate various weather conditions. A Python script `bdd100k_weather_augmentation.py` is included in the dataset to document the augmentation process applied to the clear-weather source images. Five weather categories are provided in separate subfolders:
* `default` – Clear daytime images (baseline real-world conditions without added effects).
* `fog` – The same scenes with synthetic fog/haze applied (reduced visibility).
* `night` – Images adjusted to low-light/night-time settings (darkened conditions and headlights/lighting effects).
* `rain` – Images with rain effects (rain streaks, wet appearance) overlaid.
* `snow` – Images with snow effects (snowfall and possibly accumulation) added.
Every image in the real-world set has one or more annotated bounding boxes for objects such as cars, buses, trucks, pedestrians, cyclists, traffic lights, etc., following the standard BDD100K labeling schema (10 classes for common road objects). The **Trainable Set Images** and **Evaluation Set Images** directories under each weather category contain the training and test splits respectively. For instance, `Real-World Data Framework/Images/Trainable Set Images/fog/` holds training images under fog, and `.../Evaluation Set Images/fog/` holds foggy images reserved for evaluation. Similarly, all other weather subfolders are split into trainable vs. evaluation sets. This separation ensures that models can be trained and validated on disjoint sets of scenes. The exact file lists used in our experiments are provided (see **Data Splits** below), but users can also combine or resplit as needed for custom training regimes.
* **Simulated Data Framework:** This part contains fully synthetic images generated using the **CARLA** autonomous driving simulator. CARLA’s built-in weather engine was used (via the `carla_weather_augmentation.py` script) to render the same virtual environments under different weather and lighting conditions. Four weather settings are included as subfolders:
* `default` – Clear weather in the simulation (typically a daytime clear sky scenario).
* `fog` – Foggy conditions in the simulator (reduced visibility distance, haze).
* `night` – Night-time in the simulation (dark environment, possibly with street lighting or headlights).
* `rain` – Rainy weather in CARLA (rainfall and wet road effects).
*(Note: CARLA did not simulate snow in this dataset, so there is no `snow` category in the simulated branch.)* Each simulated image comes with ground-truth bounding boxes and labels for all rendered objects (e.g. vehicles, pedestrians) obtained directly from the simulator’s engine. The object classes correspond closely to the real data classes (e.g., car, truck, motorcycle, person, etc.), ensuring compatibility for cross-domain evaluation. The directory structure mirrors the real data: under **Images**, each weather folder has **Trainable Set Images** and **Evaluation Set Images** subfolders for training vs. testing images. The *Bounding Box Information* for simulated data contains the annotation files (in a similar format to the real data annotations) divided into **Trainable Set Labels** and **Evaluation Set Labels**. This simulated set provides a controlled environment to test algorithms’ ability to transfer learning from synthetic to real, or to use simulation to supplement real training data.
* **Data Splits and Experiments:** In addition to the organized image folders, the dataset includes a `Data Splits` directory with text files listing the image IDs or file names for various experimental configurations. Specifically, under `Data Splits/Baseline Experiment/` you will find `train.txt`, `val.txt`, and `test.txt` which delineate a recommended split of the data for a baseline evaluation (for example, a typical baseline might train on the `Real-World/default` images and validate on other conditions – the exact usage is described in the paper). Another subdirectory `Data Augmentation Experiment/` contains split files used when training with augmented data (e.g. mixing multiple weather conditions in training). These splits were used in the IJCNN paper to compare different training strategies:
* **Baseline experiment:** training on a narrow domain (e.g. clear-only training set) and testing on dissimilar domains (fog, rain, etc.) to quantify the domain gap.
* **Augmentation experiment:** training on an expanded training set that includes augmented weather images or combined real+simulated data, and then evaluating on held-out sets to measure robustness gains.
Researchers can use these provided splits to reproduce the paper’s results or as a starting point for their own experiments. Of course, you are free to ignore these and create custom train/test splits using the raw image folders, but the provided configurations ensure consistency with the benchmark as originally proposed.
## Using the Dataset
**Loading via Hugging Face:** The dataset is hosted on Hugging Face Hub, which makes it straightforward to load using the `datasets` library in Python. Each image sample is packaged with its annotations for convenient access. For example, you can load the dataset as follows:
```python
from datasets import load_dataset
# Load the entire NeurIPS Weather Dataset (all images and annotations)
dataset = load_dataset("neurips-weather-dataset")
```
This will download the dataset and prepare it for use. By default, the dataset may combine both real and simulated data; you can also load each subset separately if desired (depending on how the dataset is configured on the Hub). For instance:
```python
# Load only a real-world configuration (config names in this card follow <framework>_<split>_<condition>)
real_data = load_dataset("neurips-weather-dataset", name="real_train_default")
# Load only a simulated configuration, e.g. foggy evaluation images
sim_data = load_dataset("neurips-weather-dataset", name="simulated_eval_fog")
```
*(Replace the dataset identifier with the correct namespace if applicable, e.g. `"your-username/neurips-weather-dataset"` in the code above, depending on the hosting.)*
Each subset typically contains a training split and a validation/test split, accessible as `real_data['train']`, `real_data['test']`, etc. (or `sim_data['validation']`, depending on naming). You can iterate through the dataset like a regular PyTorch/TF dataset or convert it to Pandas, etc.
**Data fields:** Each data example is a dictionary with at least the following fields:
* `image`: the input image (typically as a PIL image or NumPy array, depending on `datasets` settings) of a traffic scene.
* `bboxes`: the bounding box coordinates for each object in the image (e.g., in `[x_min, y_min, x_max, y_max]` format, or as normalized coordinates if specified by the loader).
* `labels`: the class labels corresponding to each bounding box (e.g., integers or category names like "car", "pedestrian", etc.). The set of possible labels includes common road users and objects (vehicles of various types, pedestrians, traffic signs, etc., matching the BDD100K annotation classes).
* `domain` (if provided): which framework the image is from (`"real"` or `"simulated"`), or this might be inferable from context if you load them separately.
* `weather`: the weather condition category for that image (e.g., `"clear"`, `"fog"`, `"night"`, `"rain"`, `"snow"`). In the real-world data, `"snow"` appears only in augmented form; in the simulated data, `"snow"` is not present.
* Other metadata: There might be additional info like an image ID, or the original source of the image (especially for real images, an ID referencing the BDD100K source frame).
Using these fields, you can filter or group the data by condition. For example, you could take all `fog` images (across real and sim) to form a test set for a model, or use the `weather` label to apply condition-specific preprocessing in your pipeline.
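As a small illustration (assuming the field names described above; adjust if the actual schema differs), such filtering can be done with the standard `datasets` API:
```python
# Assumes `dataset` was loaded as shown earlier and exposes the `weather` and `domain` fields.
fog_examples = dataset["train"].filter(lambda ex: ex["weather"] == "fog")
real_night = dataset["train"].filter(
    lambda ex: ex["weather"] == "night" and ex["domain"] == "real"
)
print(len(fog_examples), len(real_night))
```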
**Accessing images and labels:** If using the `datasets` library, each `dataset[split]` is an iterable of examples. For instance:
```python
example = dataset['train'][0]
img = example['image']
boxes = example['bboxes']
classes = example['labels']
print(example['weather'], example['domain'])
```
This would give you the first training image, its bounding boxes and labels, and print the weather condition and domain of that image. You can then visualize the image with boxes drawn, or feed it into a model. If you prefer to manually handle the data, you can also download the archive from Hugging Face and navigate the folder structure as described above (the folder names themselves indicate the domain and condition).
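For a quick visual sanity check, the boxes can be drawn onto the image with Pillow. This is only a sketch and assumes the image decodes to a PIL image and that each box is a pixel-coordinate `[x_min, y_min, x_max, y_max]` list as described above:
```python
from PIL import ImageDraw

example = dataset["train"][0]
img = example["image"].copy()          # PIL image
draw = ImageDraw.Draw(img)

# Draw each bounding box and its label.
for (x_min, y_min, x_max, y_max), label in zip(example["bboxes"], example["labels"]):
    draw.rectangle([x_min, y_min, x_max, y_max], outline="red", width=3)
    draw.text((x_min, max(0, y_min - 12)), str(label), fill="red")

img.save("example_with_boxes.png")
```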
## Example Use Cases
This dataset unlocks a variety of research and application possibilities in the field of autonomous driving and computer vision:
* **Weather Robustness Benchmarking:** Evaluate how existing object detection models (e.g., YOLO, Faster R-CNN, SSD) trained on standard clear-weather data perform on foggy, rainy, nighttime, or snowy images. The NeurIPS Weather Dataset can be used to benchmark model robustness by reporting metrics (mAP, recall, etc.) separately on each weather condition. This helps identify failure modes; for example, one might find that a detector's performance drops significantly in fog compared to clear weather, highlighting the need for improvement.
* **Domain Adaptation and Generalization:** Use the dataset to develop and test domain adaptation techniques. For instance, train a model on the **Simulated** images and then test it on the **Real-World** images (cross-domain testing). Since the simulated data is labeled and abundant, one could apply unsupervised domain adaptation to adapt the model from the synthetic domain to the real domain (with weather shifts in both). Conversely, domain generalization methods can be evaluated by training on multiple domains (e.g. mixing real and simulated, or mixing several weather conditions) and checking if the model generalizes to a new unseen condition.
* **Data Augmentation Strategies:** The dataset facilitates experiments with data augmentation for robustness. Researchers can try augmenting clear-weather training images with various filters (defocus blur, color jitter, adding artificial rain streaks, etc.) – some of which are similar to the provided augmented set – and measure the impact on detection performance in adverse weather. The provided *augmentation experiment* split can serve as an example: by including the synthetic fog/rain/snow images in the training set, does the model become more weather-invariant? Users can test techniques like style transfer (making images look like different weather) or GAN-generated weather effects and compare with the baseline results using this dataset.
* **All-Weather Model Development:** Train new object detection models explicitly on the union of all conditions to create an **all-weather detector**. Because the dataset includes a variety of conditions, one can train a single model with images from clear, fog, rain, night (and snow in real) all together. Example use cases include training a robust perception model for an autonomous vehicle that must operate 24/7 in any weather. The real and simulated combination can also be used to expand the diversity – e.g., use real images for normal conditions and simulated images to cover rarer conditions like heavy fog or extreme rain that are not well-represented in existing real datasets.
* **Computer Vision Education and Demos:** The clear organization of this dataset makes it a good teaching tool for illustrating the effects of domain shift. Students can visually inspect images across domains – e.g., see how a scene looks in clear vs. foggy conditions – and then run a pre-trained detector to observe failure cases. This can motivate discussions on why certain weather affects the model (e.g., fog reduces contrast, night reduces visible detail) and how multi-domain training can help. Moreover, the simulated data can be used to demonstrate synthetic data generation and its benefits in a simple way.
These are just a few examples. We anticipate that the NeurIPS Weather Dataset will be useful for any project that needs diverse driving images with annotations, especially where robustness to environmental conditions is a concern. Whether you are developing improved sensor fusion (combining camera with radar/LiDAR for bad weather), or trying out the latest domain generalization algorithm, this dataset provides a solid and realistic testbed.
## License
## Contact and Acknowledgments
For any questions, feedback, or requests related to the NeurIPS Weather Dataset, you can reach out to the maintainers via the Hugging Face discussions on the dataset page or by contacting the authors directly. (You may find contact emails in the paper or the repository; alternatively, opening an Issue/Discussion on Hugging Face is a good way to get a response.)
We hope this dataset enables fruitful research and innovation. If you use it or find it helpful, consider letting the authors know — and if you discover any issues or have suggestions for improvement, please share them! Together, we can advance the state of the art in all-weather, resilient object detection for autonomous systems.
Happy experimenting, and safe driving in all conditions! |
brygotti/NLP4Education_english_single_mcq_4_choices | brygotti | 2025-05-21T21:14:16Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-21T21:14:16Z | 0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 712623
num_examples: 1962
download_size: 346616
dataset_size: 712623
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
Odog16/so100_test_2.1 | Odog16 | 2025-03-16T15:21:58Z | 35 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"custom-task"
] | [
"robotics"
] | 2025-03-16T15:19:45Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- custom-task
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 25,
"total_frames": 11300,
"total_tasks": 1,
"total_videos": 50,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:25"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.workspace": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
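Since the YAML header above exposes the episode parquet files as the `default` config, the tabular part of the dataset (actions, states, timestamps, indices) can be loaded directly with the `datasets` library; the camera streams live separately under `videos/`. A minimal sketch (the split name is assumed to be `train`):
```python
from datasets import load_dataset

# Loads the per-frame parquet data only; videos are not decoded here.
ds = load_dataset("Odog16/so100_test_2.1", split="train")

frame = ds[0]
print(frame["action"])             # 6-dim action vector
print(frame["observation.state"])  # 6-dim joint state
print(frame["episode_index"], frame["frame_index"], frame["timestamp"])
```
Alternatively, the `lerobot` library provides its own dataset loader that also handles the video observations.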
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
jkazdan/jka | jkazdan | 2024-12-30T19:27:30Z | 14 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-30T19:22:00Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 90597
num_examples: 300
download_size: 41901
dataset_size: 90597
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_d18347e7-9f8e-46e7-b364-bf610886d967 | argilla-internal-testing | 2024-12-03T13:03:09Z | 17 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-03T13:03:08Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bghira/photo-concept-bucket | bghira | 2024-04-12T02:25:31Z | 255 | 56 | [
"license:openrail++",
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-02-19T16:14:48Z | 1 | ---
license: openrail++
---
## Photo Concept Bucket
The purpose of this dataset was to distribute a high quality, free-to-use dataset containing samples that require no attribution and have an open license.
All of the images were captioned in a cluster containing:
- 38x 3090 24G
- 6x 4090 24G
- 8x A5000 24G
- 2x A100 80G
- A couple of volunteers running a 3090 or 4090.
The model was running in fp8 precision using 🤗Transformers and 🤗Accelerate for easy multi-GPU captioning.
The captioning was spread across 10 different systems, at a GPU rental cost of approx. $350 USD.
### General Information
- **Dataset Name**: Photo Concept bucket
- **Size**: 567,597 entries
- **Columns**: 18
- **Memory Usage**: Approximately 78.0 MB
- **Creator**: pseudoterminalx
### Column Descriptions
- **id**: The original Unique identifier for each photo (integer).
- This may be used to map the images back to their original, should any of the URL formats change.
- **class_label**: Classification label for the photo (string).
- These were the search term that resulted in the image being captured.
- **type**: Type of image (e.g., photo, digital art) (string).
- **slug**: A slug that points to this image. Maybe sometimes descriptive. (string).
- **description**: Author-provided description of the photo. Many values are missing, some contain spam. (string).
- **alt**: Alternative text for the photo, seemingly an auto-generated caption. Not very high quality. (string).
- **created_at**: Timestamp when the photo was uploaded. (string).
- **title**: Author-provided title of the photo (string, some missing values).
- **location**: Location of the author, does not necessarily represent the location of the photo - though, many times, it does. (string, many missing values).
- **tags**: Tags associated with the photo (string).
- These seem to contain a lot of information, but they're not very accurate.
- **main_color**: The dominant color in the photo (string).
- **colors**: List of colors identified in the photo (string).
- **width**: Width of the photo in pixels (integer).
- **height**: Height of the photo in pixels (integer).
- **aspect_ratio**: Aspect ratio of the photo (float).
- **url**: URL to the photo (string).
- **megapixels**: Megapixels of the photo (float).
- **cogvlm_caption**: A CogVLM (fp8) caption derived from the query 'Caption this image as accurately as possible, without speculation. Describe what you see.' (string)
### Statistics
- **id**: Range from 474 to 20,329,130 with an average of 13,679,720.
- **Width**: Photos range in width from 684 to 24,538 pixels, with an average width of 4,393 pixels.
- **Height**: Photos range in height from 363 to 26,220 pixels, with an average height of 4,658 pixels.
- **Aspect Ratio**: Ranges from 0.228 to 4.928, with an average aspect ratio of approximately 1.016.
- **Megapixels**: The dataset contains photos ranging from 0.54 to 536.8604 megapixels, with an average of 20.763 megapixels.
### Usage Examples
This dataset can be used for a variety of machine learning tasks, including image classification, object detection, and color analysis. Users should take note of the high variability in image dimensions and the sparsity of the `description` and `location` columns.
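For instance, a minimal sketch of loading the metadata and narrowing it to large, roughly square photos that have a CogVLM caption (column names as listed above; a single `train` split is assumed):
```python
from datasets import load_dataset

ds = load_dataset("bghira/photo-concept-bucket", split="train")

# Keep large, roughly square photos that come with a CogVLM caption.
subset = ds.filter(
    lambda ex: ex["megapixels"] >= 1.0
    and 0.9 <= ex["aspect_ratio"] <= 1.1
    and ex["cogvlm_caption"] is not None
)
print(len(subset))
print(subset[0]["url"], subset[0]["cogvlm_caption"][:80])
```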
### Known Issues
- The `description` column has a significant number of missing values, which may limit its use for tasks requiring detailed textual information about the images.
- There is variability in the presence of `title` and `location` information, with several entries missing these details.
- The `tags` column contains a lot of noise, which may degrade models that rely on it for tasks involving image classification or generation.
---
This dataset card provides an overview of the dataset's structure, content, and some basic statistics. Depending on your specific use case or research needs, you may want to expand certain sections with additional details or examples. |
PJMixers-Dev/OpenR1-Math-94k-conversational | PJMixers-Dev | 2025-04-10T20:11:13Z | 19 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-10T20:11:02Z | 0 | ---
dataset_info:
features:
- name: task
dtype: string
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 29648526.0
num_examples: 93733
download_size: 16357522
dataset_size: 29648526.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tequilajerry/indiana-chestxray-captions-new | tequilajerry | 2025-04-22T22:50:33Z | 25 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-22T21:46:38Z | 0 | ---
dataset_info:
features:
- name: image_path
dtype: string
- name: text
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 12382022550.819
num_examples: 6469
download_size: 12252590426
dataset_size: 12382022550.819
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
xchraf/close_box_3ep | xchraf | 2025-01-26T15:52:53Z | 33 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-01-26T15:52:47Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "so100",
"total_episodes": 3,
"total_frames": 1110,
"total_tasks": 1,
"total_videos": 6,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:3"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
Pi-robot/sim_arms_pick_jujubes | Pi-robot | 2025-02-18T06:14:58Z | 28 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"modality:image",
"modality:timeseries",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-02-18T05:26:41Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "mobile_aloha",
"total_episodes": 132,
"total_frames": 41635,
"total_tasks": 1,
"total_videos": 0,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 25,
"splits": {
"train": "0:132"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.state": {
"dtype": "float32",
"shape": [
14
],
"names": [
[
"right_waist",
"right_shoulder",
"right_elbow",
"right_forearm_roll",
"right_wrist_angle",
"right_wrist_rotate",
"right_gripper",
"left_waist",
"left_shoulder",
"left_elbow",
"left_forearm_roll",
"left_wrist_angle",
"left_wrist_rotate",
"left_gripper"
]
]
},
"action": {
"dtype": "float32",
"shape": [
14
],
"names": [
[
"right_waist",
"right_shoulder",
"right_elbow",
"right_forearm_roll",
"right_wrist_angle",
"right_wrist_rotate",
"right_gripper",
"left_waist",
"left_shoulder",
"left_elbow",
"left_forearm_roll",
"left_wrist_angle",
"left_wrist_rotate",
"left_gripper"
]
]
},
"observation.images.cam_high": {
"dtype": "image",
"shape": [
3,
480,
640
],
"names": [
"channels",
"height",
"width"
]
},
"observation.images.cam_left_wrist": {
"dtype": "image",
"shape": [
3,
480,
640
],
"names": [
"channels",
"height",
"width"
]
},
"observation.images.cam_right_wrist": {
"dtype": "image",
"shape": [
3,
480,
640
],
"names": [
"channels",
"height",
"width"
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
fjxdaisy/hh-rlhf-entropy-rule5-b0-84 | fjxdaisy | 2024-12-24T19:05:24Z | 58 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-24T18:28:36Z | 0 | ---
dataset_info:
features:
- name: data_id
dtype: string
- name: chosen_rule_5_yes_prob
dtype: float64
- name: chosen_rule_5_no_prob
dtype: float64
- name: rejected_rule_5_yes_prob
dtype: float64
- name: rejected_rule_5_no_prob
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 4153890
num_examples: 85000
download_size: 3457981
dataset_size: 4153890
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
amuvarma/voice-actors-13-full-audio3k-24k-notnormalised-dedup-TTS-no-names | amuvarma | 2025-03-23T08:26:08Z | 20 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-23T08:26:07Z | 0 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 8723024.0
num_examples: 2984
download_size: 4783525
dataset_size: 8723024.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
arianhosseini/code_generation_lite_not_in_128 | arianhosseini | 2025-03-05T15:48:58Z | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-05T15:47:13Z | 0 | ---
dataset_info:
features:
- name: question_title
dtype: string
- name: question_content
dtype: string
- name: platform
dtype: string
- name: question_id
dtype: string
- name: contest_id
dtype: string
- name: contest_date
dtype: string
- name: starter_code
dtype: string
- name: difficulty
dtype: string
- name: public_test_cases
dtype: string
- name: private_test_cases
dtype: string
- name: metadata
dtype: string
splits:
- name: test
num_bytes: 3723440236.605682
num_examples: 753
download_size: 3498156984
dataset_size: 3723440236.605682
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
seossine/dataset_102 | seossine | 2025-02-13T06:11:14Z | 19 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-13T06:11:08Z | 0 | ---
dataset_info:
features:
- name: Question
dtype: string
- name: Complex_CoT
dtype: string
- name: Response
dtype: string
splits:
- name: train
num_bytes: 81466
num_examples: 132
download_size: 43964
dataset_size: 81466
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jkazdan/pku-safe-30k-test-Mistral-7B-v0.1-base | jkazdan | 2024-12-05T21:11:23Z | 14 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-05T20:28:35Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 4430968
num_examples: 2816
download_size: 1452869
dataset_size: 4430968
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
zjkarina/nanoMINER_test | zjkarina | 2025-05-19T07:35:25Z | 0 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-18T19:41:26Z | 0 | ---
license: mit
dataset_info:
features:
- name: conversation
list:
- name: content
dtype: string
- name: role
dtype: string
- name: fname
dtype: string
splits:
- name: cytox
num_bytes: 879004
num_examples: 34
- name: synergy
num_bytes: 397175
num_examples: 17
- name: seltox
num_bytes: 941484
num_examples: 31
- name: magnetic
num_bytes: 1084473
num_examples: 74
- name: nanozymes
num_bytes: 1387688
num_examples: 79
download_size: 2428062
dataset_size: 4689824
configs:
- config_name: default
data_files:
- split: cytox
path: data/cytox-*
- split: synergy
path: data/synergy-*
- split: seltox
path: data/seltox-*
- split: magnetic
path: data/magnetic-*
- split: nanozymes
path: data/nanozymes-*
---
|
uzair921/LLAMA7B_GUM_LLM_RAG_25_MiniLM | uzair921 | 2025-02-11T11:23:50Z | 26 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-11T11:18:39Z | 0 | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-abstract
'2': I-abstract
'3': B-animal
'4': I-animal
'5': B-event
'6': I-event
'7': B-object
'8': I-object
'9': B-organization
'10': I-organization
'11': B-person
'12': I-person
'13': B-place
'14': I-place
'15': B-plant
'16': I-plant
'17': B-quantity
'18': I-quantity
'19': B-substance
'20': I-substance
'21': B-time
'22': I-time
splits:
- name: train
num_bytes: 277726
num_examples: 894
- name: validation
num_bytes: 213725
num_examples: 615
- name: test
num_bytes: 292655
num_examples: 807
download_size: 216715
dataset_size: 784106
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
Shwetasingh123/8_epoch | Shwetasingh123 | 2025-01-05T13:50:30Z | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-05T13:50:26Z | 0 | ---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: unique_id
dtype: string
- name: generated_chain
dtype: string
- name: generated_answer
dtype: string
- name: is_correct
dtype: bool
- name: logprobs
list:
- name: decoded_token
dtype: string
- name: logprob
dtype: float64
- name: rank
dtype: int64
- name: token_id
dtype: int64
- name: epoch
dtype: int64
- name: temperature
dtype: float64
splits:
- name: train
num_bytes: 29162345
num_examples: 662
download_size: 11957104
dataset_size: 29162345
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kaiwenw/dec9_sp1_repeat_5_pref_jdpo_all_reject_first | kaiwenw | 2024-12-10T00:55:08Z | 15 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-10T00:55:04Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: chosen_pref
dtype: string
- name: rejected_pref
dtype: string
- name: split_suffix
dtype: string
splits:
- name: train
num_bytes: 216640896.0
num_examples: 24458
- name: validation
num_bytes: 19388435
num_examples: 2280
download_size: 56501867
dataset_size: 236029331.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
tarsur909/imdb_sft-test_lm-gpt2-large-imdb-PPO-BON-25_42_250_504_1 | tarsur909 | 2025-05-15T01:44:09Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-15T01:44:07Z | 0 | ---
dataset_info:
features:
- name: model_response
dtype: string
- name: text
dtype: string
- name: label
dtype: int64
- name: query
dtype: string
- name: gen_review
dtype: string
- name: query_input_ids
sequence: int64
- name: query_attention_mask
sequence: int64
- name: reference_response
dtype: string
- name: reference_response_input_ids
sequence: int64
- name: reference_response_attention_mask
sequence: int64
- name: reference_response_token_len
dtype: int64
- name: query_reference_response
dtype: string
- name: query_reference_response_input_ids
sequence: int64
- name: query_reference_response_attention_mask
sequence: int64
- name: query_reference_response_token_response_label
sequence: int64
- name: query_reference_response_token_len
dtype: int64
- name: prompt
dtype: string
splits:
- name: test
num_bytes: 9098495.0
num_examples: 250
download_size: 1460742
dataset_size: 9098495.0
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
Nechintosh/ghibli | Nechintosh | 2025-01-04T06:16:11Z | 362 | 4 | [
"license:other",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-01-04T05:11:52Z | 0 | ---
license: other
license_name: studio-ghibli-nc-license
license_link: LICENSE
---
## Dataset Card for Studio Ghibli Characters
This dataset contains 810 images, collected from the free-to-use gallery of https://ghibli.jp, with custom captions written with BLIP2.
## Disclaimer
The images within this dataset have been downloaded from https://www.ghibli.jp/gallery, and as the website states, those images can be used for free, but within the bounds of common sense and without commercial purposes.
Note that this dataset is only for open experimentation and research purposes, and the materials are not intended to be used in any way or form that is malicious or against the principles defined in the LICENSE.
More details in the custom license created within this repository, which is based on the specifications for those images within their website.
|
serpentilec137/gita-verse-qna-dataset | serpentilec137 | 2025-05-21T09:56:38Z | 33 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-20T14:56:08Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: source_chapter
dtype: string
- name: source_verse
dtype: string
splits:
- name: train
num_bytes: 642902.5296442688
num_examples: 1138
- name: test
num_bytes: 71747.47035573123
num_examples: 127
download_size: 429550
dataset_size: 714650.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Bruno2023/my-distiset-3d6680f8 | Bruno2023 | 2025-01-29T01:55:32Z | 12 | 0 | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:text-retrieval",
"task_categories:question-answering",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | [
"text-generation",
"text2text-generation",
"text-retrieval",
"question-answering"
] | 2025-01-29T01:55:30Z | 0 | ---
size_categories: n<1K
task_categories:
- text-generation
- text2text-generation
- text-retrieval
- question-answering
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 10904
num_examples: 10
download_size: 13754
dataset_size: 10904
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-3d6680f8
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/Bruno2023/my-distiset-3d6680f8/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/Bruno2023/my-distiset-3d6680f8/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"context": "MARC-AUR\u00c8LE LIVRE \u00cf E \u2014 De: mon grand-p\u00e8re V\u00e9rus? : la bont\u00e9 coutu- mi\u00e8re, le calme inalt\u00e9rable, II. \u2014\u2014 De la r\u00e9putation et du souvenir que laissa mon p\u00e8re : la r\u00e9serve et la force virile. IL. \u2014 De ma m\u00e8re # : la pi\u00e9t\u00e9, la lib\u00e9ralit\u00e9, l\u2019habi- tude de s\u0027abstenir non seulement de mal faire, mais de s\u0027arr\u00eater encore sur une pens\u00e9e mauvaise. De plus : la simplicit\u00e9 du r\u00e9gime de vie, et l\u2019aversion pour le train d\u0027existence que m\u00e8nent les riches. IV. \u2014 De mon bisaleulf : n\u2019avoir point fr\u00e9quent\u00e9 les",
"question": "Quels sont les traits de caract\u00e8re de votre grand-p\u00e8re V\u00e9rus, de votre p\u00e8re, de votre m\u00e8re et de votre oncle qui vous ont influenc\u00e9?",
"response": "D\u0027apr\u00e8s le document, les traits de caract\u00e8re des personnages mentionn\u00e9s sont les suivants :\n\n- De votre grand-p\u00e8re V\u00e9rus : \n - La bont\u00e9\n - La bont\u00e9 extr\u00eame (coutumi\u00e8re)\n - Le calme inalt\u00e9rable\n\n- De votre p\u00e8re :\n - La r\u00e9serve\n - La force virile\n\n- De votre m\u00e8re :\n - La pi\u00e9t\u00e9\n - La lib\u00e9ralit\u00e9\n - La simplicit\u00e9 du r\u00e9gime de vie\n - L\u0027aversion pour le train d\u0027existence des riches\n - L\u0027habitude de s\u0027abstenir non seulement de mal faire, mais de s\u0027arr\u00eater sur une pens\u00e9e mauvaise\n\n- De votre oncle (bisaleulf) :\n - Ne fr\u00e9quentant pas les... (le document ne mentionne pas les qualit\u00e9s ou les d\u00e9fauts de votre oncle, mais plut\u00f4t ce qu\u0027il n\u0027a pas fait)"
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("Bruno2023/my-distiset-3d6680f8", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("Bruno2023/my-distiset-3d6680f8")
```
</details>
|
AlanYky/filtered_tweets_instruct | AlanYky | 2024-10-27T19:41:52Z | 22 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-27T19:41:11Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3867178
num_examples: 22172
download_size: 2590334
dataset_size: 3867178
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
chiyuanhsiao/text_L2-regular-ASR_spoken-web-questions | chiyuanhsiao | 2025-04-28T16:00:47Z | 20 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-28T15:55:44Z | 0 | ---
dataset_info:
features:
- name: url
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: my_prediction_text
dtype: string
splits:
- name: test
num_bytes: 1142492
num_examples: 2032
download_size: 304095
dataset_size: 1142492
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
burman-ai/german-to-burmese-translations | burman-ai | 2025-04-14T10:06:14Z | 18 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-14T10:06:10Z | 0 | ---
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- de
- en
- name: german
dtype: string
- name: burmese
dtype: string
splits:
- name: train
num_bytes: 80872
num_examples: 100
download_size: 43381
dataset_size: 80872
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
fineinstructions/real_queries | fineinstructions | 2025-01-22T12:06:55Z | 84 | 1 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-22T12:02:03Z | 0 | ---
dataset_info:
features:
- name: language
dtype: string
- name: query
dtype: string
- name: source
dtype: string
- name: metadata
dtype: string
- name: source_name
dtype: string
splits:
- name: full
num_bytes: 17772016095
num_examples: 21454204
download_size: 8985686381
dataset_size: 17772016095
configs:
- config_name: default
data_files:
- split: full
path: data/full-*
---
|
justinandrews56/my-distiset-ee6eb438 | justinandrews56 | 2025-04-03T16:49:58Z | 9 | 0 | [
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-03T16:49:57Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': middle_name
'1': phone5
'2': phone1
'3': email3
'4': email5
'5': email4
'6': phone3
'7': first_name
'8': last_name
'9': email2
'10': phone2
'11': phone4
'12': email1
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1064
dataset_size: 0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hrasto/babylm24 | hrasto | 2025-01-08T17:20:34Z | 20 | 0 | [
"task_categories:text-generation",
"language:en",
"license:mit",
"size_categories:10M<n<100M",
"region:us"
] | [
"text-generation"
] | 2025-01-07T10:27:26Z | 0 | ---
license: mit
task_categories:
- text-generation
language:
- en
size_categories:
- 10M<n<100M
--- |
Lots-of-LoRAs/task851_synthetic_multiply_evens | Lots-of-LoRAs | 2025-01-03T18:46:12Z | 24 | 0 | [
"task_categories:text-generation",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2204.07705",
"arxiv:2407.00066",
"region:us"
] | [
"text-generation"
] | 2025-01-03T18:46:10Z | 0 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- apache-2.0
task_categories:
- text-generation
pretty_name: task851_synthetic_multiply_evens
dataset_info:
config_name: plain_text
features:
- name: input
dtype: string
- name: output
dtype: string
- name: id
dtype: string
splits:
- name: train
num_examples: 5200
- name: valid
num_examples: 650
- name: test
num_examples: 650
---
# Dataset Card for Natural Instructions (https://github.com/allenai/natural-instructions) Task: task851_synthetic_multiply_evens
## Dataset Description
- **Homepage:** https://github.com/allenai/natural-instructions
- **Paper:** https://arxiv.org/abs/2204.07705
- **Paper:** https://arxiv.org/abs/2407.00066
- **Point of Contact:** [Rickard Brüel Gabrielsson](mailto:[email protected])
## Additional Information
### Citation Information
The following paper introduces the corpus in detail. If you use the corpus in published work, please cite it:
```bibtex
@misc{wang2022supernaturalinstructionsgeneralizationdeclarativeinstructions,
title={Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks},
author={Yizhong Wang and Swaroop Mishra and Pegah Alipoormolabashi and Yeganeh Kordi and Amirreza Mirzaei and Anjana Arunkumar and Arjun Ashok and Arut Selvan Dhanasekaran and Atharva Naik and David Stap and Eshaan Pathak and Giannis Karamanolakis and Haizhi Gary Lai and Ishan Purohit and Ishani Mondal and Jacob Anderson and Kirby Kuznia and Krima Doshi and Maitreya Patel and Kuntal Kumar Pal and Mehrad Moradshahi and Mihir Parmar and Mirali Purohit and Neeraj Varshney and Phani Rohitha Kaza and Pulkit Verma and Ravsehaj Singh Puri and Rushang Karia and Shailaja Keyur Sampat and Savan Doshi and Siddhartha Mishra and Sujan Reddy and Sumanta Patro and Tanay Dixit and Xudong Shen and Chitta Baral and Yejin Choi and Noah A. Smith and Hannaneh Hajishirzi and Daniel Khashabi},
year={2022},
eprint={2204.07705},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2204.07705},
}
```
More details can also be found in the following paper:
```bibtex
@misc{brüelgabrielsson2024compressserveservingthousands,
title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead},
author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon},
year={2024},
eprint={2407.00066},
archivePrefix={arXiv},
primaryClass={cs.DC},
url={https://arxiv.org/abs/2407.00066},
}
```
### Contact Information
For any comments or questions, please email [Rickard Brüel Gabrielsson](mailto:[email protected])
|
jnlpba/jnlpba | jnlpba | 2024-01-18T11:07:08Z | 231 | 9 | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:extended|other-genia-v3.02",
"language:en",
"license:unknown",
"size_categories:10K<n<100K",
"region:us"
] | [
"token-classification"
] | 2022-03-02T23:29:22Z | 0 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-genia-v3.02
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: BioNLP / JNLPBA Shared Task 2004
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-DNA
'2': I-DNA
'3': B-RNA
'4': I-RNA
'5': B-cell_line
'6': I-cell_line
'7': B-cell_type
'8': I-cell_type
'9': B-protein
'10': I-protein
config_name: jnlpba
splits:
- name: train
num_bytes: 8775707
num_examples: 18546
- name: validation
num_bytes: 1801565
num_examples: 3856
download_size: 3171072
dataset_size: 10577272
---
# Dataset Card for JNLPBA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://www.geniaproject.org/shared-tasks/bionlp-jnlpba-shared-task-2004
- **Repository:** [Needs More Information]
- **Paper:** https://www.aclweb.org/anthology/W04-1213.pdf
- **Leaderboard:** https://paperswithcode.com/sota/named-entity-recognition-ner-on-jnlpba?p=biobert-a-pre-trained-biomedical-language
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The data came from the GENIA version 3.02 corpus (Kim et al., 2003). This was formed from a controlled search on MEDLINE using the MeSH terms human, blood cells and transcription factors. From this search 2,000 abstracts were selected and hand annotated according to a small taxonomy of 48 classes based on a chemical classification. Among the classes, 36 terminal classes were used to annotate the GENIA corpus.
### Supported Tasks and Leaderboards
NER
### Languages
English
## Dataset Structure
### Data Instances
{
'id': '1',
'tokens': ['IL-2', 'gene', 'expression', 'and', 'NF-kappa', 'B', 'activation', 'through', 'CD28', 'requires', 'reactive', 'oxygen', 'production', 'by', '5-lipoxygenase', '.'],
'ner_tags': [1, 2, 0, 0, 9, 10, 0, 0, 9, 0, 0, 0, 0, 0, 9, 0],
}
### Data Fields
- `id`: Sentence identifier.
- `tokens`: Array of tokens composing a sentence.
- `ner_tags`: Array of integer tags in IOB2 format over the five GENIA entity types (`DNA`, `RNA`, `cell_line`, `cell_type`, `protein`): `0` is `O` (no bio-entity), each `B-*` id marks the first token of an entity and the matching `I-*` id marks its subsequent tokens (see the class names in the YAML header and the loading example below).
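A minimal loading sketch with the 🤗 `datasets` library (the Hub id `jnlpba/jnlpba` and the label names come from the YAML header above; older script-based versions of this dataset may additionally require `trust_remote_code=True`):
```python
from datasets import load_dataset

# Load the train/validation splits declared in the YAML header above.
ds = load_dataset("jnlpba/jnlpba")

# The Sequence-of-ClassLabel feature maps integer tag ids back to IOB2 strings.
label_names = ds["train"].features["ner_tags"].feature.names

example = ds["train"][0]
for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{label_names[tag_id]}")
```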
### Data Splits
Train samples: 37094
Validation samples: 7714
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
@inproceedings{collier-kim-2004-introduction,
title = "Introduction to the Bio-entity Recognition Task at {JNLPBA}",
author = "Collier, Nigel and
Kim, Jin-Dong",
booktitle = "Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications ({NLPBA}/{B}io{NLP})",
month = aug # " 28th and 29th",
year = "2004",
address = "Geneva, Switzerland",
publisher = "COLING",
url = "https://aclanthology.org/W04-1213",
pages = "73--78",
}
### Contributions
Thanks to [@edugp](https://github.com/edugp) for adding this dataset. |
FlippyDora/math500_Qwen2-7B-Instruct_n8 | FlippyDora | 2025-02-09T18:21:00Z | 61 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-09T18:20:59Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: outputs
list:
- name: label
dtype: int64
- name: output
dtype: string
- name: result
dtype: string
splits:
- name: train
num_bytes: 6336574
num_examples: 500
download_size: 1928927
dataset_size: 6336574
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
supergoose/flan_combined_task758_msr_sqa_question_answer_generation | supergoose | 2025-03-03T00:49:39Z | 17 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-03T00:49:37Z | 0 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: _template_idx
dtype: int64
- name: _task_source
dtype: string
- name: _task_name
dtype: string
- name: _template_type
dtype: string
splits:
- name: train
num_bytes: 4198756
num_examples: 1198
download_size: 1675035
dataset_size: 4198756
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
MoonKih/final2 | MoonKih | 2024-11-26T10:18:55Z | 14 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-26T10:18:53Z | 0 | ---
dataset_info:
features:
- name: image
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1346616
num_examples: 1190
download_size: 172567
dataset_size: 1346616
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
csbhlim222/partial-UI-generation-viewports | csbhlim222 | 2025-05-14T12:14:14Z | 0 | 0 | [
"license:cc-by-sa-3.0",
"region:us"
] | [] | 2025-05-14T12:12:02Z | 0 | ---
license: cc-by-sa-3.0
---
This viewports data is retrieved and modified upon https://gs.statcounter.com/screen-resolution-stats, for the use of the project Beyond code: A Comprehensive Study on Website Builders, Their Limitations, and Opportunities for Innovation. |
panneerselvam1010/af-guidelines-v3 | panneerselvam1010 | 2025-04-16T09:32:39Z | 13 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-16T09:32:20Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 1700930
num_examples: 2785
- name: test
num_bytes: 1117239
num_examples: 1853
download_size: 692427
dataset_size: 2818169
---
# Dataset Card for "af-guidelines-v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MMM-J/real-vs-fake-cats | MMM-J | 2024-12-04T19:52:19Z | 65 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-04T14:17:08Z | 0 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': fake
'1': real
splits:
- name: train
num_bytes: 19338218.0
num_examples: 900
- name: test
num_bytes: 1615036.0
num_examples: 86
download_size: 20932426
dataset_size: 20953254.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Lithium73fr/TEST7 | Lithium73fr | 2025-06-08T19:53:22Z | 434 | 0 | [
"task_categories:robotics",
"size_categories:n<1K",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us",
"phosphobot",
"so100",
"phospho-dk"
] | [
"robotics"
] | 2025-06-03T08:02:03Z | 0 |
---
tags:
- phosphobot
- so100
- phospho-dk
task_categories:
- robotics
---
# TEST7
**This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
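As a rough sketch of the LeRobot compatibility mentioned above (this assumes the `lerobot` package; the import path and API surface can differ between lerobot versions, so treat it as illustrative):
```python
# Sketch only: assumes the `lerobot` package is installed; the exact import path
# and constructor arguments may differ between lerobot versions.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

ds = LeRobotDataset("Lithium73fr/TEST7")  # fetches the episodes from the Hub

print(len(ds))     # number of frames across all episodes
frame = ds[0]      # dict of observations / actions for one timestep
print(frame.keys())
```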
|
jimc86/alpaca-lama-3.1-8B-Instruct | jimc86 | 2025-03-04T08:48:33Z | 17 | 1 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif"
] | [] | 2025-03-04T08:48:15Z | 0 | ---
size_categories: n<1K
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: generation
dtype: string
- name: distilabel_metadata
struct:
- name: raw_input_text_generation_0
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_output_text_generation_0
dtype: string
- name: statistics_text_generation_0
struct:
- name: input_tokens
dtype: int64
- name: output_tokens
dtype: int64
- name: model_name
dtype: string
splits:
- name: train
num_bytes: 247062697
num_examples: 51760
download_size: 128083857
dataset_size: 247062697
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for alpaca-lama-3.1-8B-Instruct
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/jimc86/alpaca-lama-3.1-8B-Instruct/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/jimc86/alpaca-lama-3.1-8B-Instruct/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"distilabel_metadata": {
"raw_input_text_generation_0": [
{
"content": "You will be given a problem. Please reason step by step:\n### Instruction:\nGive three tips for staying healthy.\n\n### Input:\n\n\n### Response:",
"role": "user"
}
],
"raw_output_text_generation_0": "Here are three tips for staying healthy:\n\n1. **Stay Hydrated**: Drinking plenty of water is essential for maintaining overall health. Dehydration can lead to fatigue, headaches, and decreased productivity. Aim to drink at least eight glasses of water per day, and consider increasing your intake if you\u0027re physically active or live in a hot climate.\n\n2. **Eat a Balanced Diet**: A healthy diet provides your body with the nutrients it needs to function properly. Focus on consuming a variety of whole, unprocessed foods such as fruits, vegetables, whole grains, lean proteins, and healthy fats. Aim to limit your intake of sugary drinks, fast food, and processed snacks.\n\n3. **Exercise Regularly**: Regular physical activity can help maintain a healthy weight, improve cardiovascular health, and boost mood. Aim for at least 150 minutes of moderate-intensity exercise per week, or 75 minutes of vigorous-intensity exercise per week. You can also incorporate strength training and flexibility exercises to improve overall fitness and reduce the risk of injury.",
"statistics_text_generation_0": {
"input_tokens": 65,
"output_tokens": 209
}
},
"generation": "Here are three tips for staying healthy:\n\n1. **Stay Hydrated**: Drinking plenty of water is essential for maintaining overall health. Dehydration can lead to fatigue, headaches, and decreased productivity. Aim to drink at least eight glasses of water per day, and consider increasing your intake if you\u0027re physically active or live in a hot climate.\n\n2. **Eat a Balanced Diet**: A healthy diet provides your body with the nutrients it needs to function properly. Focus on consuming a variety of whole, unprocessed foods such as fruits, vegetables, whole grains, lean proteins, and healthy fats. Aim to limit your intake of sugary drinks, fast food, and processed snacks.\n\n3. **Exercise Regularly**: Regular physical activity can help maintain a healthy weight, improve cardiovascular health, and boost mood. Aim for at least 150 minutes of moderate-intensity exercise per week, or 75 minutes of vigorous-intensity exercise per week. You can also incorporate strength training and flexibility exercises to improve overall fitness and reduce the risk of injury.",
"input": "",
"instruction": "Give three tips for staying healthy.",
"model_name": "meta-llama/Llama-3.1-8B-Instruct",
"output": "1. Eat a balanced and nutritious diet: Make sure your meals are inclusive of a variety of fruits and vegetables, lean protein, whole grains, and healthy fats. This helps to provide your body with the essential nutrients to function at its best and can help prevent chronic diseases.\n\n2. Engage in regular physical activity: Exercise is crucial for maintaining strong bones, muscles, and cardiovascular health. Aim for at least 150 minutes of moderate aerobic exercise or 75 minutes of vigorous exercise each week.\n\n3. Get enough sleep: Getting enough quality sleep is crucial for physical and mental well-being. It helps to regulate mood, improve cognitive function, and supports healthy growth and immune function. Aim for 7-9 hours of sleep each night."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("jimc86/alpaca-lama-3.1-8B-Instruct", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("jimc86/alpaca-lama-3.1-8B-Instruct")
```
</details>
|
math-extraction-comp/Open-Orca__Mistral-7B-OpenOrca | math-extraction-comp | 2025-01-25T23:06:43Z | 9 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-08T17:45:07Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
- name: gold
dtype: string
- name: target
dtype: string
- name: prediction
dtype: string
- name: subset
dtype: string
- name: lighteval-4cfbbf17_extracted_answer
dtype: string
- name: lighteval-4cfbbf17_score
dtype: float64
- name: lighteval-6e869ab5_extracted_answer
dtype: string
- name: lighteval-c24870ea_score
dtype: float64
- name: qwen_extracted_answer
dtype: string
- name: lighteval-0f21c935_extracted_answer
dtype: string
- name: lighteval-6e869ab5_score
dtype: float64
- name: harness_score
dtype: float64
- name: qwen_score
dtype: float64
- name: lighteval-c24870ea_extracted_answer
dtype: string
- name: lighteval-0f21c935_score
dtype: float64
- name: harness_extracted_answer
dtype: string
splits:
- name: train
num_bytes: 2889648
num_examples: 1324
download_size: 1265333
dataset_size: 2889648
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
akatsuki1125/JMultiPL-E-rb | akatsuki1125 | 2025-02-25T23:49:29Z | 14 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-25T23:31:44Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: language
dtype: string
- name: original
dtype: string
- name: prompt_terminology
dtype: string
- name: name
dtype: string
- name: stop_tokens
sequence: string
- name: doctests
dtype: string
- name: tests
dtype: string
splits:
- name: test
num_bytes: 199640
num_examples: 161
download_size: 76155
dataset_size: 199640
---
# Dataset Card for "JMultiPL-E-rb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gtsaidata/Dahlias_Flower_Variety_Dataset | gtsaidata | 2025-06-18T07:36:33Z | 0 | 0 | [
"task_categories:image-classification",
"language:en",
"region:us",
"Dahlias Flower Variety Dataset",
"botanical research",
"horticulture projects"
] | [
"image-classification"
] | 2025-06-18T07:20:15Z | 0 | ---
task_categories:
- image-classification
language:
- en
tags:
- Dahlias Flower Variety Dataset
- botanical research
- horticulture projects
---
Description:
<a href="https://gts.ai/dataset-download/dahlias-flower-variety-dataset/" target="_blank">👉 Download the dataset here</a>
The Dahlias Flower Variety Dataset is an extensive compilation of high-resolution images and detailed metadata for a diverse range of dahlia flower varieties. Known for their vibrant colors and varied forms, dahlias are a popular subject among gardeners and photographers alike.
Dataset Contents:
Images: This dataset includes a wide array of images featuring various dahlia flower varieties. The photos are taken under different lighting conditions, settings, and angles, providing a rich resource for researchers, florists, and hobbyists.
Metadata: Each image comes with comprehensive metadata, including the flower variety name, color, bloom size, and other pertinent details. This metadata is invaluable for research, classification, and identification purposes.
Use Cases:
Botanical Research: Researchers and botanists can utilize this dataset to study and classify dahlia flower varieties, enhancing our knowledge of the species.
Horticulture: Gardening enthusiasts and professionals can reference this dataset to identify different varieties, plan landscaping projects, and design vibrant flower arrangements.
Computer Vision and Machine Learning: This dataset is ideal for data scientists and machine learning practitioners to train and test models for flower recognition, classification, and segmentation.
This dataset is sourced from Kaggle. |
ziyu3141/rf_newtrain_1_58 | ziyu3141 | 2025-02-07T04:02:44Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-07T04:02:42Z | 0 | ---
dataset_info:
features:
- name: Filename
dtype: string
- name: Aesthetics score
dtype: float64
- name: Artifact score
dtype: float64
- name: Misalignment score
dtype: float64
- name: Overall score
dtype: float64
- name: Artifact heatmap
sequence:
sequence:
sequence: int64
- name: Misalignment heatmap
sequence:
sequence:
sequence: int64
- name: Misalignment token label
dtype: string
- name: is_uneven
dtype: bool
- name: preferred_image
dtype: binary
- name: unpreferred_image
dtype: binary
- name: revised_image
dtype: binary
- name: unrevised_id
dtype: string
- name: is_preferred
dtype: bool
splits:
- name: train
num_bytes: 134637432
num_examples: 20
download_size: 9118997
dataset_size: 134637432
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
uzair921/QWEN_GUM_LLM_CONTEXT_25 | uzair921 | 2025-02-10T20:29:04Z | 10 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-10T20:28:56Z | 0 | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-abstract
'2': I-abstract
'3': B-animal
'4': I-animal
'5': B-event
'6': I-event
'7': B-object
'8': I-object
'9': B-organization
'10': I-organization
'11': B-person
'12': I-person
'13': B-place
'14': I-place
'15': B-plant
'16': I-plant
'17': B-quantity
'18': I-quantity
'19': B-substance
'20': I-substance
'21': B-time
'22': I-time
splits:
- name: train
num_bytes: 346014
num_examples: 1002
- name: validation
num_bytes: 213725
num_examples: 615
- name: test
num_bytes: 292655
num_examples: 807
download_size: 229218
dataset_size: 852394
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
1231czx/ver2_rebuttal_eaf_rm_bon8_05 | 1231czx | 2024-11-21T15:39:45Z | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-21T15:39:44Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: generator
dtype: string
splits:
- name: train
num_bytes: 1345628
num_examples: 805
download_size: 811028
dataset_size: 1345628
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
alibaba-pai/DistilQwen_1M | alibaba-pai | 2025-05-24T09:42:34Z | 36 | 0 | [
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2504.15027",
"region:us"
] | [] | 2025-05-22T03:29:45Z | 0 | ---
license: apache-2.0
dataset_info:
features:
- name: instruct
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 5352504933
num_examples: 2311632
download_size: 2773269443
dataset_size: 5352504933
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# DistilQwen-1M: High-Quality Instruction-Tuning Dataset
## Overview
To empower community developers in enhancing the **instruction-following capabilities** of large language models (LLMs), we open-source **`DistilQwen-1M`**, a distilled subset of the training data used for the **DistilQwen model series**. Alongside its smaller counterpart (`DistilQwen-100K`), this dataset provides diverse, high-quality samples to improve model performance in key areas.
## Dataset Features
- **Scale**: **1 million** meticulously distilled entries.
- **Coverage**: Balanced mix of:
- **Mathematics**
- **Code generation & understanding**
- **Knowledge-based QA**
- **Instruction following**
- **Creative generation**
- **Purpose**: Optimized for **instruction tuning**, helping models retain generalization while adapting to downstream tasks.
## Use Cases
- **Fine-tuning LLMs**: Mitigate *catastrophic forgetting* by combining with custom datasets (see the loading sketch after this list).
- **Multi-task learning**: Improve coherence in mathematical reasoning, coding, and creative tasks.
- **Research**: Study distillation techniques or instruction-tuning efficacy.
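For the fine-tuning use case above, a minimal loading-and-mixing sketch with the 🤗 `datasets` library; `your_org/your_sft_data` is a hypothetical placeholder for your own instruction data carrying the same `instruct`/`output` fields declared in the YAML header:
```python
from datasets import load_dataset, concatenate_datasets

# General-purpose distilled data (fields: "instruct", "output").
distil = load_dataset("alibaba-pai/DistilQwen_1M", split="train")

# Hypothetical downstream dataset with the same two fields
# (concatenate_datasets requires identical features).
custom = load_dataset("your_org/your_sft_data", split="train")

# Mix a slice of DistilQwen-1M into the custom data to reduce catastrophic forgetting.
mixed = concatenate_datasets([custom, distil.shuffle(seed=42).select(range(100_000))])
print(mixed)
```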
## Reference
For more detailed information about the dataset construction process, we encourage you to refer to our paper:
- **DistilQwen2.5: Industrial Practices of Training Distilled Open Lightweight Language Models**
Chengyu Wang, Junbing Yan, Yuanhao Yue, Jun Huang
[arXiv:2504.15027](https://arxiv.org/abs/2504.15027)
You can cite the paper using the following citation format:
```bibtex
@misc{wang2025distilqwen25industrialpracticestraining,
title={DistilQwen2.5: Industrial Practices of Training Distilled Open Lightweight Language Models},
author={Chengyu Wang and Junbing Yan and Yuanhao Yue and Jun Huang},
year={2025},
eprint={2504.15027},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.15027}
}
``` |
allday-technology/pickup-yellowball-canister | allday-technology | 2025-05-23T22:42:52Z | 212 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-05-21T00:16:05Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"trossen_subversion": "v1.0",
"robot_type": "trossen_ai_stationary",
"total_episodes": 1,
"total_frames": 299,
"total_tasks": 1,
"total_videos": 4,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
14
],
"names": [
"left_joint_0",
"left_joint_1",
"left_joint_2",
"left_joint_3",
"left_joint_4",
"left_joint_5",
"left_joint_6",
"right_joint_0",
"right_joint_1",
"right_joint_2",
"right_joint_3",
"right_joint_4",
"right_joint_5",
"right_joint_6"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
14
],
"names": [
"left_joint_0",
"left_joint_1",
"left_joint_2",
"left_joint_3",
"left_joint_4",
"left_joint_5",
"left_joint_6",
"right_joint_0",
"right_joint_1",
"right_joint_2",
"right_joint_3",
"right_joint_4",
"right_joint_5",
"right_joint_6"
]
},
"observation.images.cam_high": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_low": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_left_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_right_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
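Since the `configs` block above points at plain parquet files, the tabular part of the dataset (states, actions, timestamps; the videos live in separate MP4 files) can be inspected directly with the 🤗 `datasets` library. A sketch:
```python
from datasets import load_dataset

# Loads the parquet files matched by data/*/*.parquet in the configs block above.
ds = load_dataset("allday-technology/pickup-yellowball-canister", split="train")

frame = ds[0]
print(frame["action"])             # 14-dim joint targets, per the features block
print(frame["observation.state"])  # 14-dim joint states
print(frame["timestamp"], frame["episode_index"])
```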
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
cobordism/mixed_pa-le-an-15k | cobordism | 2024-11-05T13:25:15Z | 22 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-05T13:24:55Z | 0 | ---
dataset_info:
features:
- name: image
dtype: image
- name: conversations
sequence:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 608771492.0
num_examples: 15000
download_size: 590275560
dataset_size: 608771492.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/cfa_extracted_exercise_sup_sample_from_policy_dpo_binarized | ZixuanKe | 2024-11-11T05:22:14Z | 65 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-11T02:45:56Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 6225.333333333333
num_examples: 4
- name: validation
num_bytes: 20573.0
num_examples: 11
download_size: 32211
dataset_size: 26798.333333333332
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
AadyaM/GPT_4o_mini_Fine_tune | AadyaM | 2025-01-17T07:04:36Z | 14 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | [] | 2025-01-17T07:04:09Z | 0 | ---
license: apache-2.0
---
|
uzair921/QWEN32B_R1_CONLL2003_LLM_CONTEXT_75 | uzair921 | 2025-05-02T15:43:28Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-02T15:43:24Z | 0 | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 2361383
num_examples: 9568
- name: validation
num_bytes: 866541
num_examples: 3250
- name: test
num_bytes: 784956
num_examples: 3453
download_size: 1007434
dataset_size: 4012880
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_a8387e6c-26d7-448c-9c25-df50b26ef0be | argilla-internal-testing | 2024-11-29T12:43:12Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-29T12:43:11Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SecureFinAI-Lab/Regulations_NER | SecureFinAI-Lab | 2025-06-24T22:43:29Z | 36 | 0 | [
"license:cdla-permissive-2.0",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-20T22:23:09Z | 0 | ---
license: cdla-permissive-2.0
---
# Overview
This question set is created to evaluate LLMs' ability to perform named entity recognition (NER) in financial regulatory texts. It was developed for a task at [Regulations Challenge @ COLING 2025](https://coling2025regulations.thefin.ai/home). The objective is to accurately identify and classify entities, including organizations, legislation, dates, monetary values, and statistics.
Financial regulations often require supervising and reporting on specific entities, such as organizations, financial products, and transactions, and cite corresponding legal provisions. NER helps to recognize and extract such entities from large amounts of text, thereby improving the efficiency of compliance processes and ensuring more accurate reporting. We evaluate LLMs' NER ability on texts about the European OTC derivatives market, regulated under EMIR.
# Statistics
| Category | Count | Authority |
|-----------------------------|------:|----------------|
| EMIR | 49 | ESMA |
# Metrics
The F1 score is used.
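A minimal sketch of how an entity-level micro-F1 could be computed, assuming gold and predicted entities are collected as `(entity_type, surface_text)` pairs per question (illustrative only, not the challenge's official scorer):
```python
def entity_f1(gold: list[set], pred: list[set]) -> float:
    """Micro-averaged F1 over per-question sets of (entity_type, surface_text) pairs."""
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        tp += len(g & p)   # entities predicted and present in the gold set
        fp += len(p - g)   # spurious predictions
        fn += len(g - p)   # missed gold entities
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy example with hypothetical entities for one question.
gold = [{("ORG", "ESMA"), ("LAW", "EMIR")}]
pred = [{("ORG", "ESMA"), ("DATE", "2012")}]
print(entity_f1(gold, pred))  # 0.5
```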
# License
The question set is licensed under [CDLA-Permissive-2.0](https://cdla.dev/permissive-2-0/). It is a permissive open data license. It allows anyone to freely use, modify, and redistribute the dataset, including for commercial purposes, provided that the license text is included with any redistributed version. There are no restrictions on the use or licensing of any outputs, models, or results derived from the data.
# Related tasks
Regulations Challenge at COLING 2025: https://coling2025regulations.thefin.ai/home
|
SuryaKrishna02/therapy-instruct | SuryaKrishna02 | 2024-12-17T23:11:17Z | 68 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-17T23:11:14Z | 0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 2466092
num_examples: 1329
- name: test
num_bytes: 283034
num_examples: 173
download_size: 495432
dataset_size: 2749126
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
DKYoon/qwq-nonambigqa-slope | DKYoon | 2025-04-24T12:52:09Z | 10 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-24T12:52:06Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answers
dtype: string
- name: index
dtype: string
- name: prompt
dtype: string
- name: prompt_length
dtype: int64
- name: prompt_pct
dtype: int64
splits:
- name: validation
num_bytes: 21282384
num_examples: 11000
download_size: 3193036
dataset_size: 21282384
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
|
tom-010/enwiki-answerability-2411-v2 | tom-010 | 2024-11-09T08:30:03Z | 17 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-09T08:30:00Z | 0 | ---
dataset_info:
features:
- name: article
dtype: string
- name: article_sha1
dtype: string
- name: section_idx
dtype: int32
- name: section_title
dtype: string
- name: content
dtype: string
- name: questions
struct:
- name: questions
sequence: string
- name: adversarial
sequence: string
splits:
- name: train
num_bytes: 81779688
num_examples: 25745
download_size: 36484067
dataset_size: 81779688
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rhinopithecus/so101_pickandplace_whitecube_redbox_20250619_1009 | rhinopithecus | 2025-06-19T08:47:34Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-06-19T08:41:54Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101_follower",
"total_episodes": 29,
"total_frames": 19850,
"total_tasks": 1,
"total_videos": 29,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:29"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.images.front": {
"dtype": "video",
"shape": [
1080,
1920,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 1080,
"video.width": 1920,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
jchandru08/so100_press_red_button | jchandru08 | 2025-04-13T00:50:31Z | 71 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-04-13T00:05:19Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 16,
"total_frames": 9458,
"total_tasks": 1,
"total_videos": 64,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:16"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
12
],
"names": [
"left_shoulder_pan",
"left_shoulder_lift",
"left_elbow_flex",
"left_wrist_flex",
"left_wrist_roll",
"left_gripper",
"right_shoulder_pan",
"right_shoulder_lift",
"right_elbow_flex",
"right_wrist_flex",
"right_wrist_roll",
"right_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
12
],
"names": [
"left_shoulder_pan",
"left_shoulder_lift",
"left_elbow_flex",
"left_wrist_flex",
"left_wrist_roll",
"left_gripper",
"right_shoulder_pan",
"right_shoulder_lift",
"right_elbow_flex",
"right_wrist_flex",
"right_wrist_roll",
"right_gripper"
]
},
"observation.images.back": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.top": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.left": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.right": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
CohenQu/CoRA-eval_AIME2025-hint5 | CohenQu | 2025-05-14T02:46:12Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-14T02:46:06Z | 0 | ---
dataset_info:
- config_name: Qwen3-1.7B_AIME2025-hint5_16384
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: responses
sequence: string
- name: rewards
sequence: int64
- name: mean_reward
dtype: float64
splits:
- name: test
num_bytes: 11059377
num_examples: 30
download_size: 3890760
dataset_size: 11059377
- config_name: SolGen_baseline-easy-8k-med16k_Qwen3-1.7B_AIME2025-hint5_16384
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: responses
sequence: string
- name: rewards
sequence: int64
- name: mean_reward
dtype: float64
splits:
- name: test
num_bytes: 10827865
num_examples: 30
download_size: 4345108
dataset_size: 10827865
- config_name: SolGen_easy-mix-zerorew_Qwen3-1.7B_v4_AIME2025-hint5_16384
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: responses
sequence: string
- name: rewards
sequence: int64
- name: mean_reward
dtype: float64
splits:
- name: test
num_bytes: 10460023
num_examples: 30
download_size: 3921149
dataset_size: 10460023
configs:
- config_name: Qwen3-1.7B_AIME2025-hint5_16384
data_files:
- split: test
path: Qwen3-1.7B_AIME2025-hint5_16384/test-*
- config_name: SolGen_baseline-easy-8k-med16k_Qwen3-1.7B_AIME2025-hint5_16384
data_files:
- split: test
path: SolGen_baseline-easy-8k-med16k_Qwen3-1.7B_AIME2025-hint5_16384/test-*
- config_name: SolGen_easy-mix-zerorew_Qwen3-1.7B_v4_AIME2025-hint5_16384
data_files:
- split: test
path: SolGen_easy-mix-zerorew_Qwen3-1.7B_v4_AIME2025-hint5_16384/test-*
---
|
JunHill/imdb_description | JunHill | 2024-10-13T11:36:59Z | 27 | 0 | [
"language:en",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-11T17:33:33Z | 0 | ---
language:
- en
dataset_info:
features:
- name: movie_id
dtype: string
- name: label
dtype: int64
- name: text
dtype: string
- name: originalTitle
dtype: string
- name: primaryTitle
dtype: string
- name: titleType
dtype: string
- name: genres
dtype: string
- name: endYear
dtype: string
- name: token_length
dtype: int64
splits:
- name: train
num_bytes: 34096120
num_examples: 23796
- name: test
num_bytes: 33037937
num_examples: 23506
download_size: 41324633
dataset_size: 67134057
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
KilicMehmet/sagli_ds | KilicMehmet | 2025-04-12T15:16:56Z | 18 | 0 | [
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-12T15:16:54Z | 0 | ---
dataset_info:
features:
- name: Soru;cevap
dtype: string
- name: review_length
dtype: int64
splits:
- name: train
num_bytes: 0
num_examples: 0
- name: validation
num_bytes: 0
num_examples: 0
download_size: 1600
dataset_size: 0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
MLexperiments/prompt-injection-verizon | MLexperiments | 2025-05-06T12:11:06Z | 34 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-05T07:55:53Z | 0 | ---
dataset_info:
features:
- name: label
dtype: int64
- name: text
dtype: string
splits:
- name: test
num_bytes: 108347
num_examples: 1664
- name: train
num_bytes: 357588
num_examples: 4465
download_size: 336973
dataset_size: 465935
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
ZHUZHUXIADDD/so100_test113 | ZHUZHUXIADDD | 2025-03-31T11:08:56Z | 30 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-03-31T11:04:41Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 50,
"total_frames": 15734,
"total_tasks": 1,
"total_videos": 100,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:50"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
TribeAlpha/finetuning_demo5 | TribeAlpha | 2025-03-03T05:45:41Z | 17 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-03T05:45:38Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 3349
num_examples: 10
download_size: 2493
dataset_size: 3349
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|