Datasets:
datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card
---|---|---|---|---|---|---|---|---|---|
ntnu-smil/unseen_1964_qa_keyBERT_normalized | ntnu-smil | 2025-05-05T19:01:25Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T18:59:08Z | 0 | ---
dataset_info:
features:
- name: speaker_id
dtype: int64
- name: grade
dtype: float64
- name: classification
dtype: int64
- name: asr_transcription
dtype: string
- name: wav_path
dtype: string
- name: prompt
dtype: string
- name: example
dtype: string
- name: confidence_score
dtype: float64
- name: acoustic_score
dtype: float64
- name: lm_score
dtype: float64
- name: pitch
dtype: float64
- name: intensity
dtype: float64
- name: pause(sec)
dtype: float64
- name: silence(sec)
dtype: float64
- name: duration(sec)
dtype: float64
- name: wpm
dtype: float64
- name: total_num_word
dtype: int64
- name: level_1
dtype: int64
- name: level_2
dtype: float64
- name: level_3
dtype: float64
- name: seq_len
dtype: int64
- name: key1
dtype: float64
- name: key2
dtype: float64
- name: key3
dtype: float64
- name: key4
dtype: float64
- name: avg
dtype: float64
- name: all_avg_score
dtype: float64
- name: Threshold_Count
dtype: int64
- name: mean_pitch
dtype: float64
- name: mean_intensity
dtype: float64
- name: duration
dtype: float64
- name: localJitter
dtype: float64
- name: localShimmer
dtype: float64
- name: rapJitter
dtype: float64
- name: long_silence
dtype: float64
- name: silence
dtype: float64
- name: long_silence_num
dtype: int64
- name: silence_num
dtype: int64
- name: std_energy
dtype: float64
- name: avg_spectral
dtype: float64
- name: avg_energy_entropy
dtype: float64
- name: zero_cross_num
dtype: int64
- name: v_to_uv_ratio
dtype: float64
- name: voice_count
dtype: int64
- name: unvoice_count
dtype: int64
- name: mean_long_silence
dtype: float64
- name: mean_silence
dtype: float64
- name: more3word
dtype: int64
- name: num_word
dtype: int64
- name: whisperX_transcription
dtype: string
- name: delivery_vec
dtype: string
- name: num_sentence
dtype: int64
- name: uh_count
dtype: int64
- name: num_silence
dtype: int64
- name: num_long_silence
dtype: int64
- name: serrant
sequence: int64
- name: level_counts
sequence: int64
- name: total_words
dtype: int64
- name: example1
dtype: string
- name: example2
dtype: string
- name: example3
dtype: string
- name: example4
dtype: string
- name: example1_score
dtype: float64
- name: example2_score
dtype: float64
- name: example3_score
dtype: float64
- name: example4_score
dtype: float64
- name: form_id
dtype: string
- name: asr
dtype: string
- name: response1
dtype: string
- name: response2
dtype: string
- name: response3
dtype: string
- name: response4
dtype: string
- name: response1_score
dtype: float64
- name: response2_score
dtype: float64
- name: response3_score
dtype: float64
- name: response4_score
dtype: float64
- name: similarity1
dtype: float64
- name: similarity2
dtype: float64
- name: similarity3
dtype: float64
- name: similarity4
dtype: float64
splits:
- name: train
num_bytes: 24527711
num_examples: 719
- name: validation
num_bytes: 3282551
num_examples: 90
- name: test
num_bytes: 3109862
num_examples: 90
- name: fulltest
num_bytes: 10473727
num_examples: 300
download_size: 24780892
dataset_size: 41393851
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
- split: fulltest
path: data/fulltest-*
---
|
thenewsupercell/fixed-fakeavceleb | thenewsupercell | 2025-01-29T02:17:54Z | 15 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-29T02:17:29Z | 0 | ---
dataset_info:
features:
- name: video
dtype: video
- name: label
dtype:
class_label:
names:
'0': FakeVideo-FakeAudio
'1': FakeVideo-RealAudio
'2': RealVideo-FakeAudio
'3': RealVideo-RealAudio
- name: source
dtype: string
- name: target1
dtype: string
- name: target2
dtype: string
- name: method
dtype: string
- name: category
dtype: string
- name: type
dtype: string
- name: race
dtype: string
- name: gender
dtype: string
- name: 'Unnamed: 9'
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4821185360.872
num_examples: 15999
- name: validation
num_bytes: 634302985.581
num_examples: 2111
- name: test
num_bytes: 648856339.776
num_examples: 2124
download_size: 10934493
dataset_size: 6104344686.229
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
barma7/oai-t2maps-epgfit | barma7 | 2025-01-15T20:58:40Z | 45 | 1 | [
"language:en",
"license:mit",
"region:us",
"medical",
"biology",
"cartilage",
"t2mapping",
"OAI"
] | [] | 2024-10-04T05:16:18Z | 0 | ---
license: mit
language:
- en
tags:
- medical
- biology
- cartilage
- t2mapping
- OAI
pretty_name: OAI T2 maps with EPG fitting
---
# Osteoarthritis Initiative (OAI) T2 Maps – EPG Fit Dataset
This dataset repository contains T2 maps derived from the Multi-Echo Spin-Echo (MESE) MRI data in the Osteoarthritis Initiative (OAI).
The maps were generated specifically for cartilage regions using the Extended Phase Graph (EPG) formalism, which improves the accuracy and reproducibility of cartilage T2 mapping, as detailed in the work of Marco Barbieri, Anthony A. Gatti, and Feliks Kogan (2024) [https://doi.org/10.1002/jmri.29646](https://doi.org/10.1002/jmri.29646).
The graphical abstract of the work is shown below, illustrating that EPG modeling improved the reproducibility of cartilage T2 in a cohort of healthy subjects from the OAI dataset.

## Dataset Structure
### Files and Folders
The dataset is organized by acquisition timepoints. Each main folder represents a timepoint in the OAI dataset and contains subfolders for individual subjects.
- **Timepoints**: `00m`, `12m`, `24m`, `36m`, `48m`, `72m`, `96m`.
- **Subject folders**: Each folder name is the unique OAI subject ID (e.g., `9000099`).
Within each subject folder:
- **`t2.nii.gz`**: The T2 map computed using the EPG dictionary fitting method, specific to cartilage tissue.
- **`r2.nii.gz`**: The r-squared value of the fit (goodness of fit).
### MESE Data Location Files
For each acquisition timepoint (e.g., `00_month_mese_locations.csv`, `12_month_mese_locations.csv`, etc.), a CSV file provides a mapping to the original MESE data within the OAI dataset. Each CSV file includes the following columns:
- **subject_id**: The unique identifier for each OAI subject.
- **visit**: The month corresponding to the acquisition timepoint (e.g., 36 for `36m`).
- **laterality**: Indicates whether the MESE data is from the **RIGHT** or **LEFT** knee.
- **dicom_mese_path**: The relative path to the original DICOM MESE data within the OAI dataset.
- **t2map_nifti_path**: The relative path to the computed T2 map for that subject, located in this dataset.
These CSV files will help locate the original MESE DICOM data within the OAI dataset.
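A minimal sketch of combining a locations CSV with the per-subject NIfTI maps is shown below. It assumes the CSV files sit at the root of this dataset repository and that each `r2.nii.gz` lives next to its `t2.nii.gz`; the r-squared threshold is an arbitrary illustrative value, not a recommendation:
```python
import nibabel as nib
import pandas as pd
from huggingface_hub import hf_hub_download

REPO = "barma7/oai-t2maps-epgfit"

# Download one locations CSV (assumed to be stored at the repository root).
csv_path = hf_hub_download(REPO, "00_month_mese_locations.csv", repo_type="dataset")
locations = pd.read_csv(csv_path)
row = locations.iloc[0]

# Resolve the T2 map; the matching r2 map is assumed to sit in the same subject folder.
t2_file = hf_hub_download(REPO, row["t2map_nifti_path"], repo_type="dataset")
r2_file = hf_hub_download(REPO, row["t2map_nifti_path"].replace("t2.nii.gz", "r2.nii.gz"),
                          repo_type="dataset")

t2 = nib.load(t2_file).get_fdata()
r2 = nib.load(r2_file).get_fdata()

# Keep only cartilage voxels with a non-zero T2 and a good fit (0.7 is an arbitrary cutoff).
good_t2 = t2[(t2 > 0) & (r2 > 0.7)]
print(row["subject_id"], row["laterality"], "mean cartilage T2:", float(good_t2.mean()))
```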
### Features
- **Subject ID** (str): Unique identifier for each subject in the OAI study.
- **T2 Map (`t2.nii.gz`)**: Computed T2 map for cartilage using the EPG fitting method.
- **R-Squared Map (`r2.nii.gz`)**: Fit accuracy metric for the T2 computation.
## Cartilage-Specific T2 Mapping
The T2 map in this dataset is provided **only for cartilage regions**, as the EPG model used in the computation is specifically designed for cartilage MR properties. To speed up computation, we used segmented cartilage masks covering the femoral, tibial, and patellar regions. The complete mapping process is as follows:
1. **Cartilage Segmentation**: For each subject, the femoral, tibial, and patellar cartilage were segmented from the corresponding Double Echo Steady State (DESS) image using the [ShapeMedKneeModel](https://huggingface.co/aagatti/ShapeMedKnee).
2. **Registration to MESE Images**: The segmented cartilage masks were then registered to the MESE images using [Elastix](https://github.com/SuperElastix/elastix), ensuring anatomical alignment across sequences.
3. **Dilated Mask for T2 Mapping**: A dilated version of the cartilage mask was used during the T2 mapping process to allow researchers the flexibility to apply their segmentations if desired. This ensures that cartilage boundaries are fully captured while also accounting for anatomical variations.
The cartilage segmentations used for the OAI dataset are available in the public repository [ShapeMedKnee](https://huggingface.co/aagatti/ShapeMedKnee) and will be regularly maintained and updated there.
## Dataset Creation
The T2 maps in this dataset were generated from the MESE data in the OAI dataset using the Extended Phase Graph (EPG) fitting method as described in the work by [Barbieri, Gatti, and Kogan, published in *Journal of Magnetic Resonance Imaging* (2024)](https://doi.org/10.1002/jmri.29646).
The code used to perform this fitting is open-source and accessible on GitHub at [EPGfit_for_cartilage_T2_mapping](https://github.com/barma7/EPGfit_for_cartilage_T2_mapping).
## Getting Started
### Installation
You can install and access the dataset using the `datasets` library:
```bash
pip install datasets
```
### Usage
Load and interact with the dataset in Python:
```python
from datasets import load_dataset
dataset = load_dataset("barma7/oai-t2maps-epgfit")
# Accessing a specific timepoint and subject data
print(dataset["00m"]["9000099"]["t2"])
print(dataset["00m"]["9000099"]["r2"])
```
## Dataset Details
- **File Size**: Each T2 map file (`t2.nii.gz`) and r-squared file (`r2.nii.gz`) is stored in compressed `.nii.gz` format, with sizes varying per subject and timepoint.
- **Number of Samples**: Covers subjects across seven OAI acquisition timepoints for which MESE was available.
- **File Format**: `.nii.gz` files.
## License
This dataset is licensed under the MIT License, which allows for free use, modification, and distribution with attribution. For full license details, please see the LICENSE file in this repository.
---
## Acknowledgments
This dataset was created based on the Osteoarthritis Initiative (OAI) dataset. The authors of this repository acknowledge the original OAI study and the contributions of all OAI collaborators.
## Citation
If you use this dataset in your research, please cite:
Barbieri, M., Gatti, A.A. and Kogan, F. (2024), Improving Accuracy and Reproducibility of Cartilage T2 Mapping in the OAI Dataset Through Extended Phase Graph Modeling. J Magn Reson Imaging. https://doi.org/10.1002/jmri.29646
|
Aadharsh/ACD-Audios-text-tags-v1 | Aadharsh | 2024-12-04T09:44:05Z | 15 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-04T09:21:49Z | 0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
- name: __index_level_0__
dtype: int64
- name: utterance_pitch_mean
dtype: float32
- name: utterance_pitch_std
dtype: float32
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: float64
- name: phonemes
dtype: string
- name: stoi
dtype: float64
- name: si-sdr
dtype: float64
- name: pesq
dtype: float64
- name: speaker_id
dtype: int64
- name: gender
dtype: string
splits:
- name: train
num_bytes: 1312570
num_examples: 3324
download_size: 764817
dataset_size: 1312570
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
google/cvss | google | 2024-02-10T04:34:53Z | 119 | 14 | [
"language:en",
"language:ar",
"language:ca",
"language:cy",
"language:de",
"language:es",
"language:et",
"language:fa",
"language:fr",
"language:id",
"language:it",
"language:ja",
"language:lv",
"language:mn",
"language:nl",
"language:pt",
"language:ru",
"language:sl",
"language:sv",
"language:ta",
"language:tr",
"language:zh",
"license:cc-by-4.0",
"arxiv:2201.03713",
"region:us"
] | [] | 2022-08-11T00:54:54Z | 1 | ---
license: cc-by-4.0
language:
- en
- ar
- ca
- cy
- de
- es
- et
- fa
- fr
- id
- it
- ja
- lv
- mn
- nl
- pt
- ru
- sl
- sv
- ta
- tr
- zh
---
# CVSS: A Massively Multilingual Speech-to-Speech Translation Corpus
*CVSS* is a massively multilingual-to-English speech-to-speech translation corpus, covering sentence-level parallel speech-to-speech translation pairs from 21 languages into English. CVSS is derived from the [Common Voice](https://commonvoice.mozilla.org/) speech corpus and the [CoVoST 2](https://github.com/facebookresearch/covost) speech-to-text translation corpus. The translation speech in CVSS is synthesized with two state-of-the-art TTS models trained on the [LibriTTS](http://www.openslr.org/60/) corpus.
CVSS includes two versions of spoken translation for all the 21 x-en language pairs from CoVoST 2, with each version providing unique values:
- *CVSS-C*: All the translation speeches are in a single canonical speaker's voice. Despite being synthetic, these speeches are of very high naturalness and cleanness, as well as having a consistent speaking style. These properties ease the modeling of the target speech and enable models to produce high quality translation speech suitable for user-facing applications.
- *CVSS-T*: The translation speeches are in voices transferred from the corresponding source speeches. Each translation pair has similar voices on the two sides despite being in different languages, making this dataset suitable for building models that preserve speakers' voices when translating speech into different languages.
Together with the source speeches originating from Common Voice, they make two multilingual speech-to-speech translation datasets, each with about 1,900 hours of speech.
In addition to translation speech, CVSS also provides normalized translation text matching the pronunciation in the translation speech (e.g. on numbers, currencies, acronyms, etc.), which can be used both for model training and for standardizing evaluation.
Please check out [our paper](https://arxiv.org/abs/2201.03713) for the detailed description of this corpus, as well as the baseline models we trained on both datasets.
# Load the data
The following example loads the translation speech (i.e. target speech) and the normalized translation text (i.e. target text) released in CVSS corpus. You'll need to load the source speech and optionally the source text from [Common Voice v4.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_4_0) separately, and join them by the file names.
```py
from datasets import load_dataset
# Load only ar-en and ja-en language pairs. Omitting the `languages` argument
# would load all the language pairs.
cvss_c = load_dataset('google/cvss', 'cvss_c', languages=['ar', 'ja'])
# Print the structure of the dataset.
print(cvss_c)
```
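A rough sketch of that join is below. It is illustrative only: the split name and the column names (`"file"` for CVSS, `"path"` for Common Voice) are hypothetical placeholders, so check the actual features of both datasets before relying on it:
```py
from datasets import load_dataset

# Target side: CVSS-C Arabic-to-English translation speech/text.
cvss_c_ar = load_dataset('google/cvss', 'cvss_c', languages=['ar'], split='train')
# Source side: Common Voice v4.0 Arabic (loaded separately, as noted above).
cv_ar = load_dataset('mozilla-foundation/common_voice_4_0', 'ar', split='train')

# Hypothetical column names: adjust 'file' / 'path' to whatever the two datasets expose.
targets_by_name = {ex['file']: ex for ex in cvss_c_ar}

pairs = []
for ex in cv_ar:
    name = ex['path'].split('/')[-1]
    if name in targets_by_name:
        pairs.append((ex, targets_by_name[name]))

print(f'matched {len(pairs)} source/target pairs')
```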
# License
CVSS is released under the very permissive [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) license.
## Citation
Please cite this paper when referencing the CVSS corpus:
```
@inproceedings{jia2022cvss,
title={{CVSS} Corpus and Massively Multilingual Speech-to-Speech Translation},
author={Jia, Ye and Tadmor Ramanovich, Michelle and Wang, Quan and Zen, Heiga},
booktitle={Proceedings of Language Resources and Evaluation Conference (LREC)},
pages={6691--6703},
year={2022}
}
```
|
zcamz/ai-vs-human-meta-llama-Llama-3.2-1B-Instruct | zcamz | 2024-12-08T10:53:54Z | 120 | 0 | [
"task_categories:text-classification",
"task_categories:text-generation",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"text-generation"
] | 2024-12-08T10:53:52Z | 0 | ---
license: mit
task_categories:
- text-classification
- text-generation
language:
- en
pretty_name: AI vs Human CNN Daily News
size_categories:
- 1K<n<10K
---
# AI vs Human dataset on the [CNN Daily mails](https://huggingface.co/datasets/abisee/cnn_dailymail)
## Dataset Description
This dataset showcases pairs of truncated articles and their respective completions, crafted either by humans or an AI language model.
Each article was randomly truncated between 25% and 50% of its length.
The language model was then tasked with generating a completion that mirrored the character count of the original human-written continuation.
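As an illustration of that setup (not the original generation script), the sketch below reproduces only the truncation step; the article text is a placeholder and the call to the language model is left as a comment:
```python
import random

def truncate_article(article: str, rng: random.Random) -> tuple[str, str]:
    # Cut the article at a random point between 25% and 50% of its length
    # (one reading of the description above).
    cut = int(len(article) * rng.uniform(0.25, 0.50))
    return article[:cut], article[cut:]

rng = random.Random(0)
article = "Some CNN/DailyMail article text ..."  # placeholder text
prompt, human_continuation = truncate_article(article, rng)

# The AI completion would then be generated from `prompt`, with its length
# capped at roughly len(human_continuation) characters.
target_chars = len(human_continuation)
```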
## Data Fields
- 'human': The original human-authored continuation of the truncated article, preserved in its entirety.
- 'ai': The AI-generated continuation of the truncated article, designed to match the original in length and coherence.
## Model and Sampling Parameters
The model used to generate the AI completions was meta-llama/Llama-3.2-1B-Instruct.
The sampling parameters used were:
`{'frequency_penalty': 0.2, 'max_tokens': 1000, 'presence_penalty': 0.5, 'temperature': 0.5}`
## License
MIT License
|
saheedniyi/ccsmall | saheedniyi | 2025-05-11T13:17:09Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-11T13:17:00Z | 0 | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: index
dtype: int64
- name: Type
dtype: string
- name: FileID
dtype: string
- name: Channel
dtype: int64
- name: Start
dtype: float64
- name: Duration
dtype: float64
- name: Speaker
dtype: string
- name: audio
dtype: audio
- name: text
dtype: string
splits:
- name: train
num_bytes: 23918320.0
num_examples: 292
download_size: 23895011
dataset_size: 23918320.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_39f674da-ba84-4e66-a620-c3417d8f7d11 | argilla-internal-testing | 2024-10-03T08:27:12Z | 18 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-03T08:27:11Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1454
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bngomez/my-distiset-624b87dd | bngomez | 2025-02-15T21:28:37Z | 10 | 0 | [
"task_categories:text-classification",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | [
"text-classification"
] | 2025-02-15T21:28:35Z | 0 | ---
size_categories: n<1K
task_categories:
- text-classification
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': case-dismissal
'1': tenant-protection
'2': court-decision
'3': landlord-protection
'4': statute-violation
splits:
- name: train
num_bytes: 4788
num_examples: 10
download_size: 5979
dataset_size: 4788
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-624b87dd
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/bngomez/my-distiset-624b87dd/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/bngomez/my-distiset-624b87dd/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"labels": [
3,
2,
4
],
"text": "A Kansas landlord is required to provide a written notice to a tenant before entering the rental property. The notice must be given at least 24 hours prior to the intended entry date and time. Failure to comply with this requirement may result in a tenant\u0027s right to withhold rent or seek damages."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("bngomez/my-distiset-624b87dd", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("bngomez/my-distiset-624b87dd")
```
</details>
|
AJNG/w2v-bert-2.0-nepali-transliterator | AJNG | 2025-02-19T15:55:12Z | 17 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-19T15:54:14Z | 0 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
splits:
- name: train
num_bytes: 618159450.9302325
num_examples: 1320
- name: validation
num_bytes: 155008165.3468992
num_examples: 331
- name: test
num_bytes: 193408979.7228682
num_examples: 413
download_size: 935860990
dataset_size: 966576596.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
valcarese/RAGtry | valcarese | 2025-02-04T19:23:37Z | 23 | 0 | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:text-retrieval",
"task_categories:question-answering",
"task_categories:sentence-similarity",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | [
"text-generation",
"text2text-generation",
"text-retrieval",
"question-answering",
"sentence-similarity"
] | 2025-02-04T19:23:36Z | 0 | ---
size_categories: n<1K
task_categories:
- text-generation
- text2text-generation
- text-retrieval
- question-answering
- sentence-similarity
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: response
dtype: string
- name: positive_retrieval
dtype: string
- name: negative_retrieval
dtype: string
splits:
- name: train
num_bytes: 39052
num_examples: 12
download_size: 33970
dataset_size: 39052
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for RAGtry
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/valcarese/RAGtry/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/valcarese/RAGtry/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"context": "**Demographic Information**\n\n1. Family Structure: \n - Number of parents/guardians\n - Ages and relationships of all household members\n - Information on any foster care or previous placements\n\n2. Family Dynamics:\n - History of domestic violence or substance abuse\n - Quality of parent-child relationships and interactions\n - Presence of conflict between family members\n\n**Risk Indicators**\n\n1. Environmental Risks\n - Living conditions: overcrowding, cleanliness, and access to basic necessities\n - Neighborhood safety and community resources\n - Availability of stable food and housing arrangements\n\n2. Child Developmental Risks\n - Academic performance and attendance\n - Behavioral issues and emotional distress\n - Physical health and immunization status\n\n**Assessment of Parental Capabilities**\n\n1. Parental Supervision and Support\n - Frequency and quality of parent-child interactions\n - Parental involvement in child\u0027s education and activities\n\n2. Parental Well-being\n - History of mental health issues or substance abuse\n - Parental education level and employment status\n\n**Intervention Strategies**\n\n1. Home visits and family counseling to address relationship dynamics and parenting skills\n2. Referral to community resources for food, housing, and job training\n3. Psychological evaluations and therapy for children and parents\n4. Regular check-ins with social workers and healthcare professionals to monitor progress and adjust interventions as needed\n\n**Child Safety Plan**\n\n1. Immediate safety plan for removing child from home if necessary\n2. Placement options: foster care, kinship care, or adoption\n3. Development of a long-term plan for reunification or alternative care arrangement\n\n**Prevention and Support Services**\n\n1. Case management and coordination with community agencies\n2. Parenting education and resource connection\n3. Monthly home visits and regular check-ins with family\n\n**Behavioral and Psychological Assessments**\n\n1. Conduct regular home visits to assess family dynamics and environment\n2. Use standardized tools to evaluate child and parent functioning and well-being\n3. Psychological evaluations for children and parents to identify areas for support and intervention",
"negative_retrieval": "What demographic information should be collected for this family to determine their eligibility for a government assistance program?",
"positive_retrieval": "What demographic information should be collected for this family to develop an effective intervention plan?",
"question": "What demographic information should be collected for this family to develop an effective intervention plan?",
"response": "Based on the provided document, the following demographic information should be collected to develop an effective intervention plan:\n\n1. Number of parents/guardians\n2. Ages and relationships of all household members\n3. Information on any foster care or previous placements\n4. Family structure (i.e., whether the family is a single-parent household or multi-parent household)"
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("valcarese/RAGtry", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("valcarese/RAGtry")
```
</details>
|
tranthanhnguyenai1/NvdiaOpenInstructCode_8 | tranthanhnguyenai1 | 2025-06-19T02:47:04Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-19T02:20:44Z | 0 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: llm_judgement
dtype: string
- name: unit_tests
dtype: string
- name: tests_execution_status
dtype: string
- name: average_test_score
dtype: string
splits:
- name: train
num_bytes: 410999212.875504
num_examples: 109716
download_size: 131846473
dataset_size: 410999212.875504
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HHS-Official/weekly-united-states-covid-19-hospitalization-metr | HHS-Official | 2025-05-07T20:44:47Z | 0 | 0 | [
"language:en",
"size_categories:10K<n<100K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"hhs",
"cdc",
"admissions",
"coronavirus",
"covid-19",
"hospitalizations"
] | [] | 2025-05-07T20:44:45Z | 0 | ---
language:
- en
pretty_name: Weekly United States COVID-19 Hospitalization Metrics by Jurisdiction
– ARCHIVED
tags:
- hhs
- cdc
- admissions
- coronavirus
- covid-19
- hospitalizations
---
# Weekly United States COVID-19 Hospitalization Metrics by Jurisdiction – ARCHIVED
## Description
**Note:** After May 3, 2024, this dataset will no longer be updated because hospitals are no longer required to report data on COVID-19 hospital admissions, hospital capacity, or occupancy data to HHS through CDC’s National Healthcare Safety Network (NHSN). The related CDC COVID Data Tracker site was revised or retired on May 10, 2023.
This dataset represents weekly COVID-19 hospitalization data and metrics aggregated to national, state/territory, and regional levels. COVID-19 hospitalization data are reported to CDC’s National Healthcare Safety Network, which monitors national and local trends in healthcare system stress, capacity, and community disease levels for approximately 6,000 hospitals in the United States. Data reported by hospitals to NHSN and included in this dataset represent aggregated counts and include metrics capturing information specific to COVID-19 hospital admissions, and inpatient and ICU bed capacity occupancy.
**Reporting information:**
- As of December 15, 2022, COVID-19 hospital data are required to be reported to NHSN, which monitors national and local trends in healthcare system stress, capacity, and community disease levels for approximately 6,000 hospitals in the United States. Data reported by hospitals to NHSN represent aggregated counts and include metrics capturing information specific to hospital capacity, occupancy, hospitalizations, and admissions. Prior to December 15, 2022, hospitals reported data directly to the U.S. Department of Health and Human Services (HHS) or via a state submission for collection in the HHS Unified Hospital Data Surveillance System (UHDSS).
- While CDC reviews these data for errors and corrects those found, some reporting errors might still exist within the data. To minimize errors and inconsistencies in data reported, CDC removes outliers before calculating the metrics. CDC and partners work with reporters to correct these errors and update the data in subsequent weeks.
- Many hospital subtypes, including acute care and critical access hospitals, as well as Veterans Administration, Defense Health Agency, and Indian Health Service hospitals, are included in the metric calculations provided in this report. Psychiatric, rehabilitation, and religious non-medical hospital types are excluded from calculations.
- Data are aggregated and displayed for hospitals with the same Centers for Medicare and Medicaid Services (CMS) Certification Number (CCN), which are assigned by CMS to counties based on the CMS Provider of Services files.
- Full details on COVID-19 hospital data reporting guidance can be found here: https://www.hhs.gov/sites/default/files/covid-19-faqs-hospitals-hospital-laboratory-acute-care-facility-data-reporting.pdf

**Metric details:**
- **Time Period:** Timeseries data will update weekly on Mondays as soon as they are reviewed and verified, usually before 8 pm ET. Updates will occur the following day when reporting coincides with a federal holiday. Note: Weekly updates might be delayed due to delays in reporting. All data are provisional. Because these provisional counts are subject to change, including updates to data reported previously, adjustments can occur. Data may be updated since original publication due to delays in reporting (to account for data received after a given Thursday publication) or data quality corrections.
- **New COVID-19 Hospital Admissions (count):** Number of new admissions of patients with laboratory-confirmed COVID-19 in the previous week (including both adult and pediatric admissions) in the entire jurisdiction.
- **New COVID-19 Hospital Admissions (7-Day Average):** 7-day average of new admissions of patients with laboratory-confirmed COVID-19 in the previous week (including both adult and pediatric admissions) in the entire jurisdiction.
- **Cumulative COVID-19 Hospital Admissions:** Cumulative total number of admissions of patients with laboratory-confirmed COVID-19.
## Dataset Details
- **Publisher**: Centers for Disease Control and Prevention
- **Temporal Coverage**: 8/1/2020 - 5/3/2024
- **Geographic Coverage**: US
- **Last Modified**: 2025-02-23
- **Contact**: CDC-INFO ([email protected])
## Source
Original data can be found at: https://data.cdc.gov/d/7dk4-g6vg
## Usage
You can load this dataset using:
```python
from datasets import load_dataset
dataset = load_dataset('HHS-Official/weekly-united-states-covid-19-hospitalization-metr')
```
## License
This dataset is licensed under https://www.usa.gov/government-works
|
malaysia-ai/DBP-Dialect | malaysia-ai | 2025-05-23T04:16:06Z | 0 | 0 | [
"language:ms",
"region:us"
] | [] | 2025-05-23T03:44:51Z | 0 | ---
language:
- ms
---
# DBP Dialect
Manually extracted from https://prpm.dbp.gov.my/CarianBestari?mode=ext&keyword=
## Source code
Source code at https://github.com/mesolitica/malaysian-dataset/tree/master/dictionary/dialect-dbp |
mlfoundations-dev/nemo_nano_science_30k | mlfoundations-dev | 2025-05-06T05:31:52Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T05:31:03Z | 0 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: category
dtype: string
- name: license
dtype: string
- name: reasoning
dtype: string
- name: generator
dtype: string
- name: used_in_training
dtype: string
- name: version
dtype: string
- name: system_prompt
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 518805882.7546126
num_examples: 31600
download_size: 248396987
dataset_size: 518805882.7546126
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.2_num-company_3_dataset_1_for_gen_9_v2 | HungVu2003 | 2025-05-06T14:29:44Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-06T14:29:42Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 7255012
num_examples: 14998
download_size: 3337042
dataset_size: 7255012
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Tippawan/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 | Tippawan | 2025-03-18T10:07:29Z | 29 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-18T10:07:27Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2153419
num_examples: 19
download_size: 395842
dataset_size: 2153419
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rubenroy/WikiMed-200k | rubenroy | 2025-04-12T00:43:27Z | 15 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-04-12T00:42:35Z | 0 | ---
license: apache-2.0
---
|
WangBiao/R1-Track-5k | WangBiao | 2025-04-03T08:48:04Z | 20 | 1 | [
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-02T13:50:00Z | 0 | ---
license: mit
language:
- en
--- |
evageon/IADD | evageon | 2022-01-29T11:16:17Z | 57 | 0 | [
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2022-03-02T23:29:22Z | 0 | ---
license: cc-by-4.0
---
# IADD
IADD is an Integrated Dataset for Arabic Dialect iDentification. It contains 136,317 texts representing 5 regions (Maghrebi (MGH), Levantine (LEV), Egypt (EGY), Iraq (IRQ) and Gulf (GLF)) and 9 countries (Algeria, Morocco, Tunisia, Palestine, Jordan, Syria, Lebanon, Egypt and Iraq).
IADD is created from the combination of subsets of five corpora: DART, SHAMI, TSAC, PADIC and AOC. The Dialectal ARabic Tweets dataset (DART) [1] has about 25,000 tweets that are annotated via crowdsourcing while the SHAMI dataset [2] consists of 117,805 sentences and covers levantine dialects spoken in Palestine, Jordan, Lebanon and Syria. TSAC [3] is a Tunisian dialect corpus of 17,000 comments collected mainly from Tunisian Facebook pages. Parallel Arabic Dialect Corpus (PADIC) [4] is made of sentences transcribed from recordings or translated from MSA. Finally, the Arabic Online Commentary (AOC) dataset [5] is based on reader commentary from the online versions of three Arabic newspapers, and it consists of 1.4M comments.
IADD is stored in a JSON-like format with the following keys:
- Sentence: contains the sentence/ text;
- Region: stores the corresponding dialectal region (MGH, LEV, EGY, IRQ, GLF or general);
- Country: specifies the corresponding country, if available (MAR, TUN, DZ, EGY, IRQ, SYR, JOR, PSE, LBN);
- DataSource: indicates the source of the data (PADIC, DART, AOC, SHAMI or TSAC).
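A minimal loading sketch is shown below; it assumes the data can be loaded with the `datasets` library into a single `train` split (an assumption, since the split layout is not documented here) and uses the field names listed above:
```python
from datasets import load_dataset

# Split name "train" is an assumption; adjust if the repository exposes a different layout.
ds = load_dataset("evageon/IADD", split="train")

# Keep only Maghrebi texts labelled with Morocco as the country.
mgh_mar = ds.filter(lambda ex: ex["Region"] == "MGH" and ex["Country"] == "MAR")
print(mgh_mar.num_rows)
print(mgh_mar[0]["Sentence"], mgh_mar[0]["DataSource"])
```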
[1] Alsarsour, I., Mohamed, E., Suwaileh, R., & Elsayed, T. (2018, May). Dart: A large dataset of dialectal arabic tweets. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
[2] Abu Kwaik, K., Saad, M. K., Chatzikyriakidis, S., & Dobnik, S. (2018). Shami: A corpus of levantine arabic dialects. In Proceedings of the eleventh international conference on language resources and evaluation (LREC 2018).
[3] Mdhaffar, S., Bougares, F., Esteve, Y., & Hadrich-Belguith, L. (2017, April). Sentiment analysis of tunisian dialects: Linguistic ressources and experiments. In Third Arabic Natural Language Processing Workshop (WANLP) (pp. 55-61).
[4] Meftouh, K., Harrat, S., Jamoussi, S., Abbas, M., & Smaili, K. (2015, October). Machine translation experiments on PADIC: A parallel Arabic dialect corpus. In The 29th Pacific Asia conference on language, information and computation.
[5] Zaidan, O., & Callison-Burch, C. (2011, June). The arabic online commentary dataset: an annotated dataset of informal arabic with high dialectal content. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (pp. 37-41).
|
huggingartists/joji | huggingartists | 2022-10-25T09:32:26Z | 28 | 0 | [
"language:en",
"size_categories:n<1K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us",
"huggingartists",
"lyrics"
] | [] | 2022-03-02T23:29:22Z | 0 | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/joji"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.211227 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/d20ee1f900287060716f7594ccba7ea3.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/joji">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Joji</div>
<a href="https://genius.com/artists/joji">
<div style="text-align: center; font-size: 14px;">@joji</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/joji).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/joji")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|159| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/joji")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk}
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
Rajarshi-Roy-research/all-temp-fpo-train-data | Rajarshi-Roy-research | 2025-01-06T05:34:51Z | 20 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-05T18:36:32Z | 0 | ---
dataset_info:
features:
- name: abstract
dtype: string
- name: web_url
dtype: string
- name: lead_paragraph
dtype: string
- name: Human_story_fetched
dtype: string
- name: web_retrival
dtype: string
- name: rag_context
dtype: string
- name: accepted
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 527308380
num_examples: 32172
download_size: 185192040
dataset_size: 527308380
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ssktora/nfcorpus-train-bm25-pyserini-20-train | ssktora | 2025-04-08T07:58:17Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-08T07:58:11Z | 0 | ---
dataset_info:
features:
- name: query_id
dtype: string
- name: query
dtype: string
- name: positive_passages
list:
- name: docid
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: negative_passages
list:
- name: docid
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 46832136
num_examples: 415
download_size: 24244994
dataset_size: 46832136
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jpata/so100_pushcube_sim | jpata | 2025-01-05T16:19:50Z | 93 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"simulation"
] | [
"robotics"
] | 2024-12-31T18:54:38Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- simulation
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": null,
"total_episodes": 20,
"total_frames": 4216,
"total_tasks": 1,
"total_videos": 0,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 25,
"splits": {
"train": "0:20"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"next.reward": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"next.success": {
"dtype": "bool",
"shape": [
1
],
"names": null
},
"seed": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"observation.state": {
"dtype": "float32",
"names": null,
"shape": [
6
]
},
"observation.environment_state": {
"dtype": "float32",
"names": null,
"shape": [
6
]
},
"action": {
"dtype": "float32",
"shape": [
6
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
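As a rough usage sketch (not part of the original card), the parquet episodes referenced by the `data_files` pattern above can be loaded with the `datasets` library; feature names follow `meta/info.json`, and the single `train` split is assumed:
```python
from datasets import load_dataset

# The default config points at data/*/*.parquet (see the card metadata above).
ds = load_dataset("jpata/so100_pushcube_sim", split="train")

frame = ds[0]
print(frame["episode_index"], frame["frame_index"])
print("state:", frame["observation.state"])    # 6-dim joint state
print("action:", frame["action"])              # 6-dim action
print("reward:", frame["next.reward"], "success:", frame["next.success"])
```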
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
mthandazo/bnil_code_gen_instruct | mthandazo | 2024-10-25T11:43:12Z | 22 | 1 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-25T11:42:45Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 745712423.2
num_examples: 108024
- name: test
num_bytes: 186428105.8
num_examples: 27006
download_size: 330152511
dataset_size: 932140529.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
tturing/so100_02_00 | tturing | 2025-02-13T04:29:40Z | 33 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-02-13T04:29:25Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "so100",
"total_episodes": 2,
"total_frames": 1192,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.rgb": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
deokoon/fine-tuning-tutorial | deokoon | 2025-06-17T05:52:12Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-17T05:46:18Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 7274
num_examples: 32
download_size: 4138
dataset_size: 7274
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gswamy/pythia-1.4B-tldr-two-words-gpt-4o-reference-val | gswamy | 2025-02-24T22:48:19Z | 14 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-24T22:44:22Z | 0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: subreddit
dtype: string
- name: title
dtype: string
- name: post
dtype: string
- name: summary
dtype: string
- name: query_token
sequence: int64
- name: query
dtype: string
- name: reference_response
dtype: string
- name: reference_response_token
sequence: int64
- name: reference_response_token_len
dtype: int64
- name: query_reference_response
dtype: string
- name: query_reference_response_token
sequence: int64
- name: query_reference_response_token_len
dtype: int64
splits:
- name: train
num_bytes: 87589169
num_examples: 6447
download_size: 26908174
dataset_size: 87589169
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
qihoo360/fgclip-grit-12m | qihoo360 | 2025-05-12T11:25:53Z | 0 | 0 | [
"language:en",
"license:apache-2.0",
"size_categories:10M<n<100M",
"modality:image",
"arxiv:2505.05071",
"region:us",
"clip"
] | [] | 2025-05-12T03:31:18Z | 0 | ---
tags:
- clip
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: zero-shot-image-classification
size_categories:
- 10M<n<100M
---
# FG-CLIP: Fine-Grained Visual and Textual Alignment
**[FG-CLIP: Fine-Grained Visual and Textual Alignment](https://arxiv.org/abs/2505.05071)**
</br>
Chunyu Xie*, Bin Wang*, Fanjing Kong, Jincheng Li, Dawei Liang, Gengshen Zhang, Dawei Leng†, Yuhui Yin(*Equal Contribution, ✝Corresponding Author)
</br>
[](https://arxiv.org/abs/2505.05071)
[](https://icml.cc/Conferences/2025)
<p align="center">
<img src="https://huggingface.co/qihoo360/fg-clip-large/resolve/main/radar_chart_methods.png" width="500" height="440"/>
</p>
## Model Framework
FG-CLIP’s training proceeds in two stages: the first stage leverages
global-level caption-image pairs to achieve initial fine-grained alignment, while the second stage supplements these with additional
region-level captions, including detailed region captions and positive/negative region descriptions to further refine the alignment.
<p align="center">
<img src="https://huggingface.co/qihoo360/fg-clip-large/resolve/main/fgclip_strc.png" width=80%/>
</p>
# Data Preparation
To run the training code for FG-CLIP, please follow the steps below.
### Step 1: Download the model
Download the FG-CLIP model from this link: [🤗Vit-L@336px](https://huggingface.co/qihoo360/fg-clip-large), or
download the OpenAI CLIP model from this link: [🤗Vit-L@336px](https://huggingface.co/openai/clip-vit-large-patch14-336).
### Step 2: Prepare fgclip-grit-12m Dataset
First, pull the dataset from the following link.
[🤗fgclip-grit-12m](https://huggingface.co/datasets/qihoo360/fgclip-grit-12m). After downloading, you will obtain the following file structure:
```none
fgclip-grit-12m
├── url2key.json
├── jsonfiles
| ├── 2024-12-06_18-32-53_results_10_218_126_44_1025.json
│ ├── 2024-12-06_18-33-17_results_llama70b-shcdt-h100-4gpus-no-2.json
│ ├──...
├── coyo_image_0
| ├── 00000.parquet
│ ├── 00001.parquet
│ ├── ...
│ ├── 00099.parquet
├── coyo_image_1
| ├── 00000.parquet
│ ├── 00001.parquet
│ ├── ...
│ ├── 00099.parquet
├── ...
├── coyo_image_19
| ├── 00000.parquet
│ ├── 00001.parquet
│ ├── ...
│ ├── 00099.parquet
├── ...
```
Subsequently, you need to install the `img2dataset` package. You can do this by running the following command:
```bash
pip install img2dataset
```
Set the `file_in` parameter in the script (`data/get_data.sh`) according to the download path of the data, and also set the directories where you expect to save the files (`pre_dir`, `dir_save`). Subsequently, execute the following command.
```bash
bash data/get_data.sh
```
Due to the randomness in downloading, the image names corresponding to the URLs do not match the names of the images we are using. Therefore, a conversion is needed. This step requires using the `url2key.json` file included in the fgclip-grit-12m dataset.
```bash
python -m data.convert_image_name \
--url2key_json fgclip-grit-12m/url2key.json \
--down_file_root data/down-grit-12m/ \
--num_parent_folders 20 \
--num_subfolders_per_parent 100 \
--resave_file_root data/grit-12m/
rm -r data/down-grit-12m/
```
```none
FG-CLIP
├── ...
├── fgclip-grit-12m
| ├── jsonfiles
| | ├── 2024-12-06_18-32-53_results_10_218_126_44_1025.json
| | ├── 2024-12-06_18-33-17_results_llama70b-shcdt-h100-4gpus-no-2.json
| | ├──...
| ├── url2key.json
| ├── ...
├── data
| ├── grit-12m
| | ├── coyo_image_0
| | | ├──00000
| | | ├──00001
| | | ├──...
| | | ├──00099
| | ├── coyo_image_1
| | | ├──00000
| | | ├──00001
| | | ├──...
| | | ├──00099
| | ├── ...
| | ├── coyo_image_19
| | | ├──00000
| | | ├──00001
| | | ├──...
| | | ├──00099
├── ...
``` |
jbourcier/fgsc23 | jbourcier | 2025-01-24T11:33:28Z | 73 | 0 | [
"license:unknown",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-01-23T17:01:20Z | 0 | ---
license: unknown
---
# FGSC-23
The FGSC-23 dataset was proposed in Zhang, Xiaohan, et al. "A new benchmark and an attribute-guided multilevel feature representation network for fine-grained ship classification in optical remote sensing images." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 13 (2020): 1271-1285. https://doi.org/10.1109/JSTARS.2020.2981686
No modifications have been made to this dataset, except the archive format (from 7z to zip). It is rehosted from the original [Baidu Netdisk](https://pan.baidu.com/share/init?surl=h_F7c-btLqhOxLT20XHWBg) location.
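To fetch and unpack the rehosted archive programmatically, a minimal sketch using `huggingface_hub` (the output directory is an assumption; the archive filename is discovered at runtime rather than hard-coded):
```python
import glob
import zipfile
from huggingface_hub import snapshot_download

# Download the dataset repo (contains the zipped FGSC-23 images)
local_dir = snapshot_download(repo_id="jbourcier/fgsc23", repo_type="dataset")

# Extract every zip archive found in the repo into a local folder
for archive in glob.glob(f"{local_dir}/**/*.zip", recursive=True):
    with zipfile.ZipFile(archive) as zf:
        zf.extractall("fgsc23_extracted")  # assumed output directory
```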
| Property | Value |
|---|---|
|__Type__| Compressed archive (Zip) |
| __Size (zipped)__ | 95 MiB |
| __Size (unzipped)__ | 103 MiB |
|
supergoose/flan_combined_task905_hate_speech_offensive_classification | supergoose | 2025-03-10T14:30:38Z | 15 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-10T14:30:37Z | 0 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: _template_idx
dtype: int64
- name: _task_source
dtype: string
- name: _task_name
dtype: string
- name: _template_type
dtype: string
splits:
- name: train
num_bytes: 12670275
num_examples: 19452
download_size: 3187453
dataset_size: 12670275
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ShikharLLM/0 | ShikharLLM | 2025-06-11T12:33:44Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-11T12:32:43Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: url
dtype: string
- name: embedding
sequence: float64
splits:
- name: train
num_bytes: 1798701036
num_examples: 457894
download_size: 1428536275
dataset_size: 1798701036
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Zaynoid/HB_alpaca | Zaynoid | 2025-06-09T13:56:08Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-09T13:56:05Z | 0 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 31366740
num_examples: 5709
download_size: 15360963
dataset_size: 31366740
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "HB_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nafisN/my-distiset-poridhi | nafisN | 2025-04-23T22:45:32Z | 25 | 0 | [
"task_categories:text-classification",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | [
"text-classification"
] | 2025-04-23T22:43:17Z | 0 | ---
size_categories: n<1K
task_categories:
- text-classification
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': very-relevant
'1': irrelevant
'2': somewhat-relevant
splits:
- name: train
num_bytes: 2353
num_examples: 10
download_size: 3838
dataset_size: 2353
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-poridhi
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/nafisN/my-distiset-poridhi/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/nafisN/my-distiset-poridhi/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"label": 0,
"text": "The Sony A7R IV Mirrorless Camera features a 61.4 megapixel full-frame Exmor R CMOS sensor, allowing for high-resolution images with excellent dynamic range and low noise."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("nafisN/my-distiset-poridhi", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("nafisN/my-distiset-poridhi")
```
</details>
|
neelabh17/new_news_exploded_prompt_n_20_d_perc_0_num_gen_10_Qwen2.5-14B-Instruct_no_mcq | neelabh17 | 2025-05-17T15:39:13Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-17T15:39:12Z | 0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: name
dtype: string
- name: topic
dtype: string
- name: news
dtype: string
- name: category
dtype: string
- name: question
dtype: string
- name: option
sequence: string
- name: prompt
dtype: string
- name: response_0
dtype: string
- name: answer_0
dtype: string
- name: correct_0
dtype: int64
- name: response_1
dtype: string
- name: answer_1
dtype: string
- name: correct_1
dtype: int64
- name: response_2
dtype: string
- name: answer_2
dtype: string
- name: correct_2
dtype: int64
- name: response_3
dtype: string
- name: answer_3
dtype: string
- name: correct_3
dtype: int64
- name: response_4
dtype: string
- name: answer_4
dtype: string
- name: correct_4
dtype: int64
- name: response_5
dtype: string
- name: answer_5
dtype: string
- name: correct_5
dtype: int64
- name: response_6
dtype: string
- name: answer_6
dtype: string
- name: correct_6
dtype: int64
- name: response_7
dtype: string
- name: answer_7
dtype: string
- name: correct_7
dtype: int64
- name: response_8
dtype: string
- name: answer_8
dtype: string
- name: correct_8
dtype: int64
- name: response_9
dtype: string
- name: answer_9
dtype: string
- name: correct_9
dtype: int64
splits:
- name: train
num_bytes: 4492184
num_examples: 375
download_size: 1531678
dataset_size: 4492184
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Jarbas/yes_no_answers | Jarbas | 2024-11-07T12:47:19Z | 39 | 0 | [
"task_categories:text-classification",
"language:en",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | 2024-11-07T12:24:36Z | 0 | ---
task_categories:
- text-classification
language:
- en
size_categories:
- n<1K
---
This dataset contains a collection of user utterances designed to evaluate models that classify responses to yes/no questions. The dataset includes utterances with clear "yes" or "no" answers, as well as ambiguous, neutral, and conditional responses that may not fit neatly into the binary classification of yes/no.
## Dataset Overview
- **Total samples:** 400+
- **Categories:**
  - **Yes:** Clear affirmative responses (e.g., "yes", "that's right", "I agree").
  - **No:** Clear negative responses (e.g., "no", "I disagree", "not at all").
  - **None:** Ambiguous or neutral responses that cannot be classified as a clear "yes" or "no" (e.g., "I’m not sure", "It’s complicated", "Let’s wait and see"). |
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_3facf2e1-a7ea-4bf7-a4c5-92ac401214b3 | argilla-internal-testing | 2024-10-30T11:47:12Z | 18 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-30T11:47:11Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1454
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Asap7772/open-web-math-backtrack-processed-v2 | Asap7772 | 2025-02-12T04:53:16Z | 7 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-12T04:52:44Z | 0 | ---
dataset_info:
features:
- name: url
dtype: string
- name: text
dtype: string
- name: date
dtype: string
- name: metadata
dtype: string
- name: backtracking_raw
dtype: string
- name: is_solution_raw
dtype: string
- name: verification_raw
dtype: string
- name: subgoal_setting_raw
dtype: string
- name: backward_chaining_raw
dtype: string
- name: is_backtrack
dtype: string
- name: backtrack_count
dtype: string
- name: backtrack_rationale
dtype: string
- name: is_backchain
dtype: string
- name: backchain_count
dtype: string
- name: backchain_rationale
dtype: string
- name: is_verification
dtype: string
- name: verification_count
dtype: string
- name: verification_rationale
dtype: string
- name: contain_problem
dtype: string
- name: contain_solution
dtype: string
- name: domain_broad
dtype: string
- name: domain_specific
dtype: string
- name: solution_rationale
dtype: string
splits:
- name: train
num_bytes: 625179123.1020231
num_examples: 46467
download_size: 265151641
dataset_size: 625179123.1020231
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
openbmb/RLPR-Evaluation | openbmb | 2025-06-24T06:24:41Z | 31 | 2 | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2506.18254",
"arxiv:2110.14168",
"arxiv:2206.14858",
"arxiv:2406.01574",
"arxiv:2311.12022",
"arxiv:2305.12524",
"arxiv:2505.14652",
"region:us"
] | [
"question-answering",
"text-generation"
] | 2025-06-22T11:03:59Z | 0 | ---
license: apache-2.0
task_categories:
- question-answering
- text-generation
language:
- en
pretty_name: RLPR-Evaluation
size_categories:
- Varies by component benchmark
---
# Dataset Card for RLPR-Evaluation
[GitHub](https://github.com/openbmb/RLPR) | [Paper](https://arxiv.org/abs/2506.18254)
## News:
* **[2025.06.23]** 📃 Our paper detailing the RLPR framework and its comprehensive evaluation using this suite is available [here](https://github.com/OpenBMB/RLPR/blob/main/RLPR_paper.pdf)!
## Dataset Summary
We include the following seven benchmarks for evaluation of RLPR:
**Mathematical Reasoning Benchmarks:**
* **MATH-500 ([Cobbe et al., 2021](https://arxiv.org/abs/2110.14168))**
* **Minerva ([Lewkowycz et al., 2022](https://arxiv.org/abs/2206.14858))**
* **AIME24**
**General Domain Reasoning Benchmarks:**
* **MMLU-Pro ([Wang et al., 2024](https://arxiv.org/abs/2406.01574)):** A multitask language understanding benchmark with reasoning-intensive questions. We randomly sample 1000 prompts for a balance of efficiency and variance.
* **GPQA ([Rein et al., 2023](https://arxiv.org/abs/2311.12022)):** Graduate-level questions across disciplines. We use the highest-quality **GPQA-diamond** subset.
* **TheoremQA ([Chen et al., 2023](https://arxiv.org/abs/2305.12524)):** Assesses the ability to apply theorems to solve complex science problems (Math, Physics, etc.). We use 800 high-quality questions, removing 53 multimodal instructions.
* **WebInstruct (Validation Split) ([Ma et al., 2025](https://arxiv.org/abs/2505.14652)):** A held-out validation split from WebInstruct, designed as an accessible benchmark for medium-sized models. We uniformly sample 1k prompts and apply 10-gram deduplication, resulting in **638 distinct questions**.
This multi-faceted suite allows for a thorough evaluation of reasoning capabilities across diverse domains and difficulty levels.
## Usage
```python
from datasets import load_dataset
data = load_dataset("openbmb/RLPR-Evaluation")
```
## Data Fields
The dataset contains the following fields for each sample:
| # | Key | Description |
| --- | -------------- | ----------------------------------------------------------------------------------------------- |
| 0 | `data_source` | Identifier for the specific benchmark or split. |
| 1 | `prompt` | The input question or problem statement, potentially with context or instructions. |
| 2 | `ability` | The domain or category of the task. |
| 3 | `reward_model` | Dictionary containing the `ground_truth` answer, essential for scoring. |
| 4 | `extra_info` | Benchmark-specific metadata, such as `answer_type`, `category`, `difficulty`, `id`, or `split`. |
| 5 | `uid` | The unique identifier for each item in the dataset. |
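As a quick illustration of these fields, a minimal sketch that inspects one sample and reads the ground-truth answer (field names are taken from the table above; split names may vary by release):
```python
from datasets import load_dataset

data = load_dataset("openbmb/RLPR-Evaluation")

# Take the first sample from the first available split
split = next(iter(data.values()))
sample = split[0]

print(sample["data_source"])                   # which benchmark the prompt comes from
print(sample["prompt"])                        # the question posed to the model
print(sample["reward_model"]["ground_truth"])  # reference answer used for scoring
```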
## Citation
If you use the RLPR framework or refer to our evaluation methodology using this suite, please cite our paper. Additionally, please cite the original papers for any component benchmarks you use:
```bibtex
@misc{yu2025rlprextrapolatingrlvrgeneral,
title={RLPR: Extrapolating RLVR to General Domains without Verifiers},
author={Tianyu Yu and Bo Ji and Shouli Wang and Shu Yao and Zefan Wang and Ganqu Cui and Lifan Yuan and Ning Ding and Yuan Yao and Zhiyuan Liu and Maosong Sun and Tat-Seng Chua},
year={2025},
eprint={2506.18254},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2506.18254},
}
``` |
mlfoundations-dev/b2_train_fasttext_science | mlfoundations-dev | 2025-04-21T23:50:57Z | 67 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-21T23:33:25Z | 0 | ---
dataset_info:
features:
- name: TRAIN_FASTTEXT_OP_PATH
dtype: 'null'
- name: TRAIN_FASTTEXT_OP_HF_REPO_ID
dtype: string
- name: TRAIN_FASTTEXT_OP_TEXT_COLUMN
dtype: string
- name: TRAIN_FASTTEXT_OP_EPOCH
dtype: int64
- name: TRAIN_FASTTEXT_OP_LR
dtype: float64
- name: TRAIN_FASTTEXT_OP_WORD_NGRAMS
dtype: int64
- name: TRAIN_FASTTEXT_OP_MIN_COUNT
dtype: int64
- name: TRAIN_FASTTEXT_OP_DIM
dtype: int64
splits:
- name: train
num_bytes: 169
num_examples: 1
download_size: 5767
dataset_size: 169
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
lschoen/germeval21_detox | lschoen | 2024-11-23T08:42:20Z | 14 | 0 | [
"task_categories:text-classification",
"language:de",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | 2024-11-23T07:54:33Z | 0 | ---
dataset_info:
features:
- name: comment_id
dtype: int64
- name: comment_text
dtype: string
- name: Sub1_Toxic
dtype: int64
- name: Sub2_Engaging
dtype: int64
- name: Sub3_FactClaiming
dtype: int64
splits:
- name: train
num_bytes: 733617
num_examples: 3244
- name: test
num_bytes: 229587
num_examples: 944
download_size: 564666
dataset_size: 963204
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
license: mit
task_categories:
- text-classification
language:
- de
pretty_name: 'DeTox GermEval 2021: Fine grained Comment Classification'
size_categories:
- 100K<n<1M
---
# Dataset for DeTox at GermEval 2021: Fine grained Comment Classification
The dataset has a train/test split and three labels for each comment: Sub1_Toxic, Sub2_Engaging, and Sub3_FactClaiming.
```
DatasetDict({
train: Dataset({
features: ['comment_id', 'comment_text', 'Sub1_Toxic', 'Sub2_Engaging', 'Sub3_FactClaiming'],
num_rows: 3244
})
test: Dataset({
features: ['comment_id', 'comment_text', 'Sub1_Toxic', 'Sub2_Engaging', 'Sub3_FactClaiming'],
num_rows: 944
})
})
```
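To reproduce the structure shown above, a minimal loading sketch (a straightforward use of the `datasets` library; no special configuration is assumed):
```python
from datasets import load_dataset

ds = load_dataset("lschoen/germeval21_detox")
print(ds)  # should mirror the DatasetDict structure shown above

# Example: count toxic comments in the training split (Sub1_Toxic is 0/1)
n_toxic = sum(ds["train"]["Sub1_Toxic"])
print(f"Toxic training comments: {n_toxic}")
```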
# Citation information
Based on the work by Schütz et al.
```bibtex
@inproceedings{schutz-etal-2021-detox,
title = {{{DeTox}} at {{GermEval}} 2021: {{Toxic}} Comment Classification},
booktitle = {Proceedings of the {{GermEval}} 2021 Shared Task on the Identification of Toxic, Engaging, and Fact-Claiming Comments},
author = {Schütz, Mina and Demus, Christoph and Pitz, Jonas and Probol, Nadine and Siegel, Melanie and Labudde, Dirk},
editor = {Risch, Julian and Stoll, Anke and Wilms, Lena and Wiegand, Michael},
date = {2021-09},
pages = {54--61},
publisher = {Association for Computational Linguistics},
location = {Duesseldorf, Germany},
url = {https://aclanthology.org/2021.germeval-1.8},
abstract = {In this work, we present our approaches on the toxic comment classification task (subtask 1) of the GermEval 2021 Shared Task. For this binary task, we propose three models: a German BERT transformer model; a multilayer perceptron, which was first trained in parallel on textual input and 14 additional linguistic features and then concatenated in an additional layer; and a multilayer perceptron with both feature types as input. We enhanced our pre-trained transformer model by re-training it with over 1 million tweets and fine-tuned it on two additional German datasets of similar tasks. The embeddings of the final fine-tuned German BERT were taken as the textual input features for our neural networks. Our best models on the validation data were both neural networks, however our enhanced German BERT gained with a F1-score = 0.5895 a higher prediction on the test data.},
}
``` |
yan-wang88/github-issues | yan-wang88 | 2025-03-03T04:14:41Z | 9 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-03T04:14:40Z | 0 | ---
dataset_info:
features:
- name: url
dtype: string
- name: repository_url
dtype: string
- name: labels_url
dtype: string
- name: comments_url
dtype: string
- name: events_url
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: user
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: user_view_type
dtype: string
- name: site_admin
dtype: bool
- name: labels
list:
- name: id
dtype: int64
- name: node_id
dtype: string
- name: url
dtype: string
- name: name
dtype: string
- name: color
dtype: string
- name: default
dtype: bool
- name: description
dtype: string
- name: state
dtype: string
- name: locked
dtype: bool
- name: assignee
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: user_view_type
dtype: string
- name: site_admin
dtype: bool
- name: assignees
list:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: user_view_type
dtype: string
- name: site_admin
dtype: bool
- name: milestone
dtype: 'null'
- name: comments
sequence: string
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: closed_at
dtype: timestamp[s]
- name: author_association
dtype: string
- name: sub_issues_summary
struct:
- name: total
dtype: int64
- name: completed
dtype: int64
- name: percent_completed
dtype: int64
- name: active_lock_reason
dtype: 'null'
- name: draft
dtype: bool
- name: pull_request
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: diff_url
dtype: string
- name: patch_url
dtype: string
- name: merged_at
dtype: timestamp[s]
- name: body
dtype: string
- name: closed_by
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: user_view_type
dtype: string
- name: site_admin
dtype: bool
- name: reactions
struct:
- name: url
dtype: string
- name: total_count
dtype: int64
- name: '+1'
dtype: int64
- name: '-1'
dtype: int64
- name: laugh
dtype: int64
- name: hooray
dtype: int64
- name: confused
dtype: int64
- name: heart
dtype: int64
- name: rocket
dtype: int64
- name: eyes
dtype: int64
- name: timeline_url
dtype: string
- name: performed_via_github_app
dtype: 'null'
- name: state_reason
dtype: string
- name: is_pull_request
dtype: bool
splits:
- name: train
num_bytes: 263157
num_examples: 59
download_size: 156091
dataset_size: 263157
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
russwang/MMK12 | russwang | 2025-06-17T18:25:12Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-17T18:07:15Z | 0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: subject
dtype: string
- name: image
list:
- name: path
dtype: string
splits:
- name: train
num_bytes: 6222610
num_examples: 15616
download_size: 3253166
dataset_size: 6222610
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
triton7777/eval_so100_test_pi0_mix2 | triton7777 | 2025-02-26T14:12:16Z | 64 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-02-26T13:59:51Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 7056,
"total_tasks": 1,
"total_videos": 4,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.s_left": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.s_right": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.gripper": {
"dtype": "video",
"shape": [
360,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 360,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.top": {
"dtype": "video",
"shape": [
360,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 360,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
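Because the card declares a default parquet config (`data/*/*.parquet`), the frame-level data can likely be loaded directly with the `datasets` library (camera streams are stored separately as MP4 videos); a minimal sketch:
```python
from datasets import load_dataset

# Loads the parquet frames (actions, states, timestamps); videos live under videos/ as MP4
ds = load_dataset("triton7777/eval_so100_test_pi0_mix2", split="train")
print(ds.features)
print(ds[0]["action"])  # 6-dim action vector, per the feature schema above
```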
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
jkazdan/pku-safe-llama-3.1-8B-Instruct-Completions | jkazdan | 2024-10-30T23:56:06Z | 18 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-30T23:43:43Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 14352
num_examples: 32
download_size: 11154
dataset_size: 14352
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
RylanSchaeffer/collapse_gemma-2-27b_hs2_replace_iter3_sftsd2_temp1_max_seq_len512 | RylanSchaeffer | 2025-01-18T10:37:02Z | 16 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-18T10:37:00Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 18465208
num_examples: 12531
download_size: 11262758
dataset_size: 18465208
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kfkas/service-tipping-reddit-data-filtered | kfkas | 2025-05-10T13:15:47Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-10T13:15:43Z | 0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: subreddit
dtype: string
- name: title
dtype: string
- name: text_content
dtype: string
- name: url
dtype: string
- name: score
dtype: int64
- name: num_comments
dtype: int64
- name: created_utc
dtype: float64
- name: relevance_score
dtype: int64
- name: search_keyword
dtype: string
- name: sort_method
dtype: string
- name: tip_percentage
dtype: float64
- name: tip_amount
dtype: float64
- name: situation_caption
dtype: string
- name: outlier_detection
dtype: string
splits:
- name: train
num_bytes: 1324200
num_examples: 533
download_size: 803071
dataset_size: 1324200
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ivanleomk/timescale-ecommerce | ivanleomk | 2024-12-23T01:37:33Z | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-18T13:15:48Z | 0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: title
dtype: string
- name: image
dtype: image
- name: description
dtype: string
- name: brand
dtype: string
- name: category
dtype: string
- name: subcategory
dtype: string
- name: price
dtype: float64
- name: quantity
dtype: int64
splits:
- name: train
num_bytes: 10943509.0
num_examples: 191
download_size: 8492456
dataset_size: 10943509.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChavyvAkvar/synthetic-trades-XRP-batch-18 | ChavyvAkvar | 2025-06-03T18:28:24Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-03T18:27:23Z | 0 | ---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923448051
num_examples: 1000
download_size: 924486740
dataset_size: 923448051
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
MaulikMadhavi/lapel-mic-speech-asr-dataset | MaulikMadhavi | 2025-05-21T17:58:54Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-21T17:55:52Z | 0 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 36056875.0
num_examples: 100
download_size: 36058956
dataset_size: 36056875.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CianKim/kor_eng_tiny_ED_OB | CianKim | 2025-05-14T06:06:32Z | 99 | 0 | [
"size_categories:n<1K",
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-23T01:39:19Z | 0 | ---
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 58589008
num_examples: 61
- name: test
num_bytes: 6723192
num_examples: 7
- name: valid
num_bytes: 12486456
num_examples: 13
download_size: 16056737
dataset_size: 77798656
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
---
|
SayantanJoker/Shrutilipi_Hindi_resampled_44100_chunk_12 | SayantanJoker | 2025-04-17T03:56:44Z | 19 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-17T03:53:15Z | 0 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
- name: file_name
dtype: string
splits:
- name: train
num_bytes: 5973439094.0
num_examples: 10000
download_size: 5952648180
dataset_size: 5973439094.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gydou/mssbench_sft | gydou | 2025-04-18T06:21:42Z | 23 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-18T05:49:18Z | 0 | ---
dataset_info:
features:
- name: images
dtype: image
- name: messages
dtype: string
splits:
- name: train
num_bytes: 127578241.0
num_examples: 420
- name: test
num_bytes: 53651667.0
num_examples: 180
download_size: 178114455
dataset_size: 181229908.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
thenewsupercell/masked-mouth-df-image-dataset | thenewsupercell | 2025-04-14T01:50:39Z | 21 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-14T01:47:31Z | 0 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: string
- name: original_file_name
dtype: string
splits:
- name: train
num_bytes: 4587699186.25
num_examples: 86030
- name: validation
num_bytes: 604447838.75
num_examples: 10970
- name: test
num_bytes: 563759745.0
num_examples: 10720
download_size: 5749775478
dataset_size: 5755906770.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
kornwtp/hatespeech-fil-classification | kornwtp | 2024-12-23T05:14:47Z | 17 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-17T15:39:27Z | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: texts
dtype: string
- name: labels
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 995915
num_examples: 10000
- name: test
num_bytes: 422427
num_examples: 4232
- name: validation
num_bytes: 424361
num_examples: 4232
download_size: 1253625
dataset_size: 1842703
---
# Dataset Card for "hatespeech-filipino"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ref: https://huggingface.co/datasets/legacy-datasets/hate_speech_filipino |
mappingUniverse/geospatial_data_coordinates | mappingUniverse | 2024-11-03T08:43:13Z | 24 | 0 | [
"license:mit",
"region:us"
] | [] | 2024-11-03T06:46:41Z | 0 | ---
configs:
- config_name: geojson
data_files:
- split: IT
path: "IT/GeoJSON/*.GeoJSON"
- split: US
path: "US/GeoJSON/*.GeoJSON"
license: mit
--- |
lookas/astra_grab_floor_toys | lookas | 2025-03-03T11:22:19Z | 43 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"astra"
] | [
"robotics"
] | 2025-03-03T11:19:44Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- astra
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": null,
"total_episodes": 50,
"total_frames": 73694,
"total_tasks": 1,
"total_videos": 150,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:50"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
18
],
"names": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17
]
},
"observation.state": {
"dtype": "float32",
"shape": [
18
],
"names": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17
]
},
"action.arm_l": {
"dtype": "float32",
"shape": [
6
],
"names": [
0,
1,
2,
3,
4,
5
]
},
"action.gripper_l": {
"dtype": "float32",
"shape": [
1
],
"names": [
0
]
},
"action.arm_r": {
"dtype": "float32",
"shape": [
6
],
"names": [
0,
1,
2,
3,
4,
5
]
},
"action.gripper_r": {
"dtype": "float32",
"shape": [
1
],
"names": [
0
]
},
"action.base": {
"dtype": "float32",
"shape": [
2
],
"names": [
0,
1,
2,
3,
4,
5
]
},
"action.eef_l": {
"dtype": "float32",
"shape": [
7
],
"names": [
0,
1,
2,
3,
4,
5,
6
]
},
"action.eef_r": {
"dtype": "float32",
"shape": [
7
],
"names": [
0,
1,
2,
3,
4,
5,
6
]
},
"action.head": {
"dtype": "float32",
"shape": [
2
],
"names": [
0,
1
]
},
"observation.state.arm_l": {
"dtype": "float32",
"shape": [
6
],
"names": [
0,
1,
2,
3,
4,
5
]
},
"observation.state.gripper_l": {
"dtype": "float32",
"shape": [
1
],
"names": [
0
]
},
"observation.state.arm_r": {
"dtype": "float32",
"shape": [
6
],
"names": [
0,
1,
2,
3,
4,
5
]
},
"observation.state.gripper_r": {
"dtype": "float32",
"shape": [
1
],
"names": [
0
]
},
"observation.state.base": {
"dtype": "float32",
"shape": [
2
],
"names": [
0,
1,
2,
3,
4,
5
]
},
"observation.state.eef_l": {
"dtype": "float32",
"shape": [
7
],
"names": [
0,
1,
2,
3,
4,
5,
6
]
},
"observation.state.eef_r": {
"dtype": "float32",
"shape": [
7
],
"names": [
0,
1,
2,
3,
4,
5,
6
]
},
"observation.state.odom": {
"dtype": "float32",
"shape": [
7
],
"names": [
0,
1,
2,
3,
4,
5,
6
]
},
"observation.state.head": {
"dtype": "float32",
"shape": [
2
],
"names": [
0,
1
]
},
"observation.images.head": {
"dtype": "video",
"shape": [
360,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 360,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.wrist_left": {
"dtype": "video",
"shape": [
360,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 360,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.wrist_right": {
"dtype": "video",
"shape": [
360,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 360,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
juliadollis/trad_ai_medical_chatbot_16200 | juliadollis | 2025-02-21T19:47:00Z | 16 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-21T13:56:08Z | 0 | ---
dataset_info:
features:
- name: Description
dtype: string
- name: Patient
dtype: string
- name: Doctor
dtype: string
- name: Translated_Description
dtype: string
- name: Translated_Patient
dtype: string
- name: Translated_Doctor
dtype: string
splits:
- name: train
num_bytes: 6920107
num_examples: 2700
download_size: 2884750
dataset_size: 6920107
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
harpreetsahota/medium-blogs-example | harpreetsahota | 2025-01-25T18:41:45Z | 28 | 1 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-25T18:41:44Z | 0 | ---
dataset_info:
features:
- name: title
dtype: string
- name: subtitle
dtype: string
- name: content
dtype: string
- name: claps
dtype: int64
- name: voters
dtype: int64
- name: wordcount
dtype: int64
- name: topics
sequence: string
- name: responses
dtype: int64
- name: URL
dtype: string
- name: published_at
dtype: timestamp[us]
- name: author_name
dtype: string
splits:
- name: train
num_bytes: 124247
num_examples: 13
download_size: 69529
dataset_size: 124247
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tayyibsupercool/resource_allocation_telecom_energy_efficiency_rician_k_2_instruct_10k | tayyibsupercool | 2024-10-24T16:43:19Z | 18 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-13T20:48:04Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: sample_index
dtype: string
splits:
- name: train
num_bytes: 2639296
num_examples: 10000
- name: validation
num_bytes: 659764
num_examples: 2500
download_size: 315985
dataset_size: 3299060
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
Francis2003/fake_news_data | Francis2003 | 2025-06-24T14:42:40Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-24T14:11:29Z | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 20149126
num_examples: 8000
- name: dev
num_bytes: 2470435
num_examples: 1000
- name: test
num_bytes: 2428893
num_examples: 1000
download_size: 0
dataset_size: 25048454
---
# Dataset Card for "fake_news_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DS4H-ICTU/english-FULFULDE-ADAMAOUA | DS4H-ICTU | 2025-05-11T02:59:58Z | 126 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-02T20:04:51Z | 0 | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 18496040
num_examples: 104448
download_size: 10025313
dataset_size: 18496040
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
thenewsupercell/new_emotion_avg_pooling_DF_Audio_Embeddings | thenewsupercell | 2025-03-07T03:00:44Z | 16 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-07T02:58:26Z | 0 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: label
dtype: string
- name: original_file_name
dtype: string
- name: audio_embeddings
sequence: float32
splits:
- name: train
num_bytes: 3023882524.25
num_examples: 17206
- name: validation
num_bytes: 373249770.75
num_examples: 2194
- name: test
num_bytes: 369753644.0
num_examples: 2144
download_size: 2808039729
dataset_size: 3766885939.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
lwrf42/financial-sentiment-dataset | lwrf42 | 2025-04-18T13:06:09Z | 28 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-18T13:05:45Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 21769783
num_examples: 85698
- name: validation
num_bytes: 2415791
num_examples: 9522
download_size: 8057476
dataset_size: 24185574
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
mimir-project/mimir-core | mimir-project | 2025-03-13T15:56:13Z | 106 | 1 | [
"language:no",
"language:nb",
"language:nn",
"language:da",
"language:sv",
"language:is",
"language:en",
"size_categories:10M<n<100M",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | [] | 2024-12-13T14:36:16Z | 0 | ---
language:
- 'no'
- nb
- nn
- da
- sv
- is
- en
size_categories:
- 10B<n<100B
configs:
- config_name: default
data_files:
- split: train
path: "data/train-*.json"
- split: validation
path: "data/validation-*.json"
- config_name: bad
data_files:
- split: train
path: "data/train-bad-*.json"
- split: validation
path: "data/validation-bad-*.json"
- config_name: medium
data_files:
- split: train
path: "data/train-medium-*.json"
- split: validation
path: "data/validation-medium-*.json"
- config_name: good
data_files:
- split: train
path: "data/train-good-*.json"
- split: validation
path: "data/validation-good-*.json"
--- |
mlfoundations-dev/d1_science_load_in_qwq_together | mlfoundations-dev | 2025-05-05T00:03:32Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T00:01:03Z | 0 | ---
dataset_info:
features:
- name: instruction_seed
dtype: string
- name: _source
dtype: string
- name: gpt41_mini_response
dtype: string
- name: __original_row_idx
dtype: int64
- name: length
dtype: int64
- name: domain
dtype: string
- name: r1_response
dtype: string
- name: r1_reasoning_content
dtype: string
- name: extract_solution
dtype: string
- name: url
dtype: string
- name: filename
dtype: string
- name: success
dtype: bool
- name: page_count
dtype: int64
- name: page_number
dtype: int64
- name: question_choices_solutions
dtype: string
- name: extracted_question
dtype: string
- name: extracted_answer_choices
sequence: string
- name: matched_solution
dtype: string
- name: qa_validation_outputs
dtype: bool
- name: classifier_reasoning
dtype: string
- name: is_organic_chemistry
dtype: bool
- name: ms_id
dtype: int64
- name: qwq_thinking_trajectory
dtype: string
- name: qwq_attempt
dtype: string
- name: qwq_response
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 6382195675
num_examples: 63022
download_size: 2778892392
dataset_size: 6382195675
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
abhinav302019/olympiad_data_10101 | abhinav302019 | 2025-03-05T22:28:47Z | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-05T22:28:45Z | 0 | ---
dataset_info:
features:
- name: problem
dtype: string
- name: Known_Solution
dtype: string
- name: Known_Answer
dtype: string
- name: Generated_Solution
dtype: string
- name: Generated_Answer
dtype: string
- name: Judge_Evaluation
dtype: string
- name: Judge_Rating
dtype: string
- name: Judge_Justification
dtype: string
splits:
- name: train
num_bytes: 33842
num_examples: 6
download_size: 39892
dataset_size: 33842
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
supergoose/flan_source_wiki_qa_found_on_google_133 | supergoose | 2025-02-25T19:30:09Z | 19 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-25T19:30:06Z | 0 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: _template_idx
dtype: int64
- name: _task_source
dtype: string
- name: _task_name
dtype: string
- name: _template_type
dtype: string
splits:
- name: train
num_bytes: 73264465
num_examples: 85478
download_size: 32643786
dataset_size: 73264465
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
alea-institute/kl3m-filter-data-dotgov-stats.bls.gov | alea-institute | 2025-02-03T21:00:27Z | 49 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-03T20:58:24Z | 0 | ---
dataset_info:
features:
- name: identifier
dtype: string
- name: dataset
dtype: string
- name: mime_type
dtype: string
- name: score
dtype: float64
- name: tokens
sequence: int64
splits:
- name: train
num_bytes: 3774782380
num_examples: 26806
download_size: 526809127
dataset_size: 3774782380
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
selfcorrexp2/llama3_openmath_1m_ep1_math_scaling_temp07 | selfcorrexp2 | 2025-01-09T02:31:00Z | 20 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-09T02:17:13Z | 0 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
- name: preds
sequence: string
- name: rewards
sequence: bool
splits:
- name: train
num_bytes: 1089062156
num_examples: 395000
download_size: 386449913
dataset_size: 1089062156
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Vikhrmodels/musiccaps_quantized-wav-unify | Vikhrmodels | 2024-12-24T02:24:48Z | 36 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-23T01:07:21Z | 0 | ---
dataset_info:
features:
- name: ytid
dtype: string
- name: start_s
dtype: int64
- name: end_s
dtype: int64
- name: audioset_positive_labels
dtype: string
- name: aspect_list
dtype: string
- name: text
dtype: string
- name: author_id
dtype: int64
- name: is_balanced_subset
dtype: bool
- name: is_audioset_eval
dtype: bool
- name: download_status
dtype: bool
- name: audio_tokens
sequence:
sequence: int64
splits:
- name: train
num_bytes: 18135979
num_examples: 4829
- name: validation
num_bytes: 2019429
num_examples: 537
download_size: 4640117
dataset_size: 20155408
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
answerdotai/triviaqa_entailment | answerdotai | 2024-11-21T02:13:51Z | 30 | 1 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-15T16:51:27Z | 0 | ---
dataset_info:
features:
- name: question_id
dtype: string
- name: context
dtype: string
- name: hypothesis
dtype: string
- name: labels
dtype: bool
- name: difficulty
dtype: string
- name: v1
dtype: string
- name: critique
dtype: string
- name: original_question
dtype: string
- name: original_answer
dtype: string
splits:
- name: train
num_bytes: 383332428
num_examples: 23874
- name: validation
num_bytes: 95814663
num_examples: 5952
download_size: 53293582
dataset_size: 479147091
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
Biamterdex/LeRobot-worldwide-hackathon | Biamterdex | 2025-06-14T13:39:23Z | 52 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-06-14T13:36:30Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101_follower",
"total_episodes": 65,
"total_frames": 94807,
"total_tasks": 1,
"total_videos": 195,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:65"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.images.left": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.right": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.gripper_cam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
datasaur-dev/PubMedQA-fine-tuned-llama-3-1-8B-labels | datasaur-dev | 2024-10-18T10:14:01Z | 18 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-18T10:13:40Z | 0 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: input
dtype: string
- name: final_decision
dtype: string
- name: long_answer
dtype: string
splits:
- name: train
num_bytes: 2703339
num_examples: 1000
download_size: 1071939
dataset_size: 2703339
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_8059fabe-414c-4c0b-8830-29ed7a8c231f | argilla-internal-testing | 2024-10-08T07:19:32Z | 19 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-08T07:19:31Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1454
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
liuhaotian/LLaVA-Instruct-150K | liuhaotian | 2024-01-03T01:59:20Z | 3,486 | 503 | [
"task_categories:visual-question-answering",
"task_categories:question-answering",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"region:us"
] | [
"visual-question-answering",
"question-answering"
] | 2023-04-17T23:47:27Z | 0 | ---
license: cc-by-4.0
task_categories:
- visual-question-answering
- question-answering
language:
- en
pretty_name: LLaVA Visual Instruct 150K
size_categories:
- 100K<n<1M
---
# LLaVA Visual Instruct 150K Dataset Card
## Dataset details
**Dataset type:**
LLaVA Visual Instruct 150K is a set of GPT-generated multimodal instruction-following data.
It is constructed for visual instruction tuning and for building large multimodal models towards GPT-4 vision/language capability.
**Dataset date:**
LLaVA Visual Instruct 150K was collected in April 2023, by prompting GPT-4-0314 API.
**Paper or resources for more information:**
https://llava-vl.github.io/
**License:**
Creative Commons Attribution 4.0 International; it should also abide by the policy of OpenAI: https://openai.com/policies/terms-of-use
**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence. |
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_61a9519c-c935-485e-aa05-0bb290aa61bd | argilla-internal-testing | 2024-11-21T08:28:40Z | 17 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-21T08:28:39Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
umiyuki/Ani-Bench-JP | umiyuki | 2025-04-02T06:37:14Z | 47 | 3 | [
"language:ja",
"license:mit",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-28T05:19:50Z | 0 | ---
dataset_info:
features:
- name: 問題
dtype: string
- name: 答え
dtype: string
- name: 番組名
dtype: string
splits:
- name: test
num_bytes: 14789
num_examples: 100
download_size: 9376
dataset_size: 14789
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
license: mit
language:
- ja
---
# Ani-Bench-JP
## Dataset Overview
`Ani-Bench-JP` is a benchmark dataset for measuring knowledge of popular Japanese anime. It consists of 100 quiz-style questions in total, 20 each from five anime series: 『魔法少女まどか☆マギカ』, 『ぼっち・ざ・ろっく!』, 『機動戦士ガンダム』, 『HUNTER×HUNTER』, and 『新世紀エヴァンゲリオン』.
It is intended for evaluating an LLM's understanding of anime in Japanese.
## Data Structure
The data is provided in CSV format and is uploaded as the `test` split. The file contains the following three columns:
- **問題**: a quiz-style question about the anime
- **答え**: the correct answer to that question
- **番組名**: the name of the anime the question relates to
### Column Details
| Column | Description | Example |
|--------|----------------------------|-----------------------------------------|
| 問題 | The quiz question text | 主人公の名前は何ですか? |
| 答え | The correct answer to the question | 鹿目まどか |
| 番組名 | The title of the related anime | 魔法少女まどか☆マギカ |
## Usage
This dataset can be loaded easily with the Hugging Face `datasets` library. Here is a Python example:
```python
from datasets import load_dataset
dataset = load_dataset("umiyuki/Ani-Bench-JP", split="test")
print(dataset[0])
```
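A simple exact-match evaluation loop over this split might then look like the sketch below; `answer_fn` is a placeholder for whatever model is being tested, and the column names follow the table above.

```python
def evaluate(dataset, answer_fn):
    """Exact-match accuracy of answer_fn over the benchmark."""
    correct = 0
    for row in dataset:
        prediction = answer_fn(row["問題"])                    # model answers the quiz question
        correct += prediction.strip() == row["答え"].strip()  # compare with the gold answer
    return correct / len(dataset)
```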
## Included Anime
- **魔法少女まどか☆マギカ**
- **ぼっち・ざ・ろっく!**
- **機動戦士ガンダム**
- **HUNTER×HUNTER**
- **新世紀エヴァンゲリオン**
Each series contributes 20 questions, for a total of 100.
## Purpose
- Evaluating the comprehension and knowledge of LLMs (particularly in Japanese)
## Credits
This dataset was created by `umiyuki`.
im-wali/korea_wild_animal_mix | im-wali | 2025-01-25T03:45:36Z | 17 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-01-25T03:45:36Z | 0 | ---
license: apache-2.0
---
|
Yotofu/so100_sweeper_shoes | Yotofu | 2025-03-28T04:21:15Z | 101 | 1 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100"
] | [
"robotics"
] | 2025-03-27T13:22:11Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "so100",
"total_episodes": 774,
"total_frames": 2145169,
"total_tasks": 1,
"total_videos": 3096,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 29,
"splits": {
"train": "0:774"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/{video_key}_episode_{episode_index:06d}.mp4",
"features": {
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": {
"motors": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll",
"gripper"
]
}
},
"observation.images.front_rgb": {
"dtype": "video",
"shape": [
512,
1024,
3
],
"names": [
"height",
"width",
"channels"
],
"video_info": {
"video.fps": 29.0,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.front_depth": {
"dtype": "video",
"shape": [
512,
1024,
1
],
"names": [
"height",
"width",
"channels"
],
"video_info": {
"video.fps": 29.0,
"video.codec": "hevc",
"video.pix_fmt": "gray",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.end_rgb": {
"dtype": "video",
"shape": [
512,
1024,
3
],
"names": [
"height",
"width",
"channels"
],
"video_info": {
"video.fps": 29.0,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.end_depth": {
"dtype": "video",
"shape": [
512,
1024,
1
],
"names": [
"height",
"width",
"channels"
],
"video_info": {
"video.fps": 29.0,
"video.codec": "hevc",
"video.pix_fmt": "gray",
"video.is_depth_map": false,
"has_audio": false
}
},
"action": {
"dtype": "float32",
"shape": [
6
],
"names": {
"motors": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll",
"gripper"
]
}
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"next.done": {
"dtype": "bool",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
zwang2/mgv_virus_host_pair | zwang2 | 2024-11-12T21:05:16Z | 14 | 0 | [
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-12T20:56:07Z | 0 | ---
dataset_info:
features:
- name: virus_id
dtype: 'null'
- name: host_id
dtype: 'null'
- name: virus_sequence
dtype: 'null'
- name: host_sequence
dtype: 'null'
- name: label
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1376
dataset_size: 0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
catslashbin/scp-foundation-items-with-summaries | catslashbin | 2024-12-17T01:57:34Z | 21 | 0 | [
"task_categories:text-generation",
"task_categories:summarization",
"language:en",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation",
"summarization"
] | 2024-12-17T01:46:34Z | 0 | ---
language:
- en
task_categories:
- text-generation
- summarization
size_categories:
- 1K<n<10K
---
This dataset contains 2,000 randomly selected items from the [scp1to7](https://www.kaggle.com/datasets/czzzzzzz/scp1to7) dataset.
Each SCP item is summarized using the following prompt with OpenAI `gpt-4o-mini-2024-07-18`:
```
<text>
{{text}}
</text>
Summarize the SCP item in the provided text with in 30 words.
Always start the summary with 'SCP-xxx is ...'.
Use simple, basic words that a 10-year-old child can easily understand.
Avoid jargon and keep the focus on the story of the SCP item.
```
This dataset can be leveraged to train models for:
- Generating SCP articles from short descriptions.
- Condensing SCP articles into concise, easy-to-understand summaries.
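A minimal loading sketch with the `datasets` library (split names are not declared in the card, so the snippet just iterates over whatever splits the repository defines):

```python
from datasets import load_dataset

# Load the CSV-backed dataset; splits come from whatever the repository defines.
ds = load_dataset("catslashbin/scp-foundation-items-with-summaries")

for split_name, split in ds.items():
    print(split_name, split.num_rows)
    print(split[0])  # one SCP item together with its generated summary
    break
```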
|
yzembodied/libero_10_image_task_1_2_3 | yzembodied | 2025-04-02T10:53:54Z | 21 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-04-02T10:53:01Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": null,
"total_episodes": 150,
"total_frames": 38753,
"total_tasks": 3,
"total_videos": 0,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 20,
"splits": {
"train": "0:150"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": null,
"features": {
"observation.images.image": {
"dtype": "image",
"shape": [
256,
256,
3
],
"names": [
"height",
"width",
"channel"
]
},
"observation.images.wrist_image": {
"dtype": "image",
"shape": [
256,
256,
3
],
"names": [
"height",
"width",
"channel"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
8
],
"names": {
"motors": [
"x",
"y",
"z",
"roll",
"pitch",
"yaw",
"gripper",
"gripper"
]
}
},
"observation.state.joint": {
"dtype": "float32",
"shape": [
7
],
"names": {
"motors": [
"joint_1",
"joint_2",
"joint_3",
"joint_4",
"joint_5",
"joint_6",
"joint_7"
]
}
},
"action": {
"dtype": "float32",
"shape": [
7
],
"names": {
"motors": [
"x",
"y",
"z",
"roll",
"pitch",
"yaw",
"gripper"
]
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
afraamn/deepscaler_filtered_8238 | afraamn | 2025-05-14T19:20:19Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-14T19:20:16Z | 0 | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: dataset
sequence: string
- name: ground_truth
sequence: string
- name: quality
dtype: int64
- name: pass_at_k
dtype: float64
splits:
- name: train
num_bytes: 3213659
num_examples: 8238
download_size: 1397381
dataset_size: 3213659
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
aoutir/hackai | aoutir | 2025-05-23T17:28:40Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-23T17:12:06Z | 0 | ---
dataset_info:
features:
- name: comments
dtype: string
splits:
- name: train
num_bytes: 10246
num_examples: 24
download_size: 7558
dataset_size: 10246
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "hackai"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DataoceanAI/137712_hours_Multilingual_Corpus_for_Dolphin_ASR_Model | DataoceanAI | 2025-04-03T06:28:40Z | 10 | 3 | [
"language:zh",
"language:ja",
"language:th",
"language:ru",
"language:ko",
"language:id",
"language:vi",
"language:hi",
"language:ur",
"language:ms",
"language:uz",
"language:ar",
"language:fa",
"language:bn",
"language:ta",
"language:te",
"language:ug",
"language:gu",
"language:my",
"language:tl",
"language:kk",
"language:or",
"language:ne",
"language:mn",
"language:km",
"language:jv",
"language:lo",
"language:si",
"language:pa",
"language:ba",
"language:ks",
"language:tg",
"language:su",
"language:mr",
"language:az",
"region:us"
] | [] | 2025-04-03T06:03:37Z | 0 | ---
language:
- zh
- ja
- th
- ru
- ko
- id
- vi
- hi
- ur
- ms
- uz
- ar
- fa
- bn
- ta
- te
- ug
- gu
- my
- tl
- kk
- or
- ne
- mn
- km
- jv
- lo
- si
- pa
- ba
- ks
- tg
- su
- mr
- az
---
## Duration
137,712 hours
## Languages
38 Eastern languages + 22 Chinese dialects
## Description
This dataset is an integration of our vast, high-quality commercial dataset collections, encompassing a total of 137,712 hours of audio across 38 Eastern languages. Additionally, it includes 22 Chinese dialects.
The dataset is carefully annotated and covers a wide variety of languages, scenarios, and contexts, ensuring diversity and richness in the data.
This broad coverage allows for comprehensive model training, with a focus on Eastern languages.



|
arjunashok/climate-1day-zeroshot-without_context | arjunashok | 2025-01-07T17:00:11Z | 9 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-07T17:00:09Z | 0 | ---
dataset_info:
features:
- name: input_text
dtype: string
- name: output_text
dtype: string
- name: input_text_time
dtype: string
- name: output_text_time
dtype: string
- name: output_time
dtype: string
- name: input_num
sequence:
sequence: float64
- name: output_num
sequence:
sequence: float64
- name: instruction-1
dtype: string
- name: instruction-2
dtype: string
- name: instruction-3
dtype: string
- name: instruction-4
dtype: string
- name: pred_output_case1
dtype: string
- name: pred_output_case2
dtype: string
- name: pred_output_case3
dtype: string
- name: pred_output_case4
dtype: string
splits:
- name: train
num_bytes: 16134298
num_examples: 2896
- name: valid
num_bytes: 2181810
num_examples: 362
- name: test
num_bytes: 2722663
num_examples: 363
download_size: 6761146
dataset_size: 21038771
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
---
|
anirudhb11/R1-1.5b-Par-Temp-0.7-Ans-40-16384-s-42-deg-64-path-3-n-16000-s-1200-e-1300 | anirudhb11 | 2025-06-08T03:36:25Z | 0 | 0 | [
"region:us"
] | [] | 2025-06-08T03:36:23Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: gold_answer
dtype: string
- name: raw_answer_0
dtype: string
- name: extracted_answer_0
dtype: string
- name: num_boxed_0
dtype: int64
- name: grade_0
dtype: bool
- name: ans_token_len_0
dtype: int64
- name: finished_0
dtype: bool
- name: raw_answer_1
dtype: string
- name: extracted_answer_1
dtype: string
- name: num_boxed_1
dtype: int64
- name: grade_1
dtype: bool
- name: ans_token_len_1
dtype: int64
- name: finished_1
dtype: bool
- name: raw_answer_2
dtype: string
- name: extracted_answer_2
dtype: string
- name: num_boxed_2
dtype: int64
- name: grade_2
dtype: bool
- name: ans_token_len_2
dtype: int64
- name: finished_2
dtype: bool
- name: raw_answer_3
dtype: string
- name: extracted_answer_3
dtype: string
- name: num_boxed_3
dtype: int64
- name: grade_3
dtype: bool
- name: ans_token_len_3
dtype: int64
- name: finished_3
dtype: bool
- name: raw_answer_4
dtype: string
- name: extracted_answer_4
dtype: string
- name: num_boxed_4
dtype: int64
- name: grade_4
dtype: bool
- name: ans_token_len_4
dtype: int64
- name: finished_4
dtype: bool
- name: raw_answer_5
dtype: string
- name: extracted_answer_5
dtype: string
- name: num_boxed_5
dtype: int64
- name: grade_5
dtype: bool
- name: ans_token_len_5
dtype: int64
- name: finished_5
dtype: bool
- name: raw_answer_6
dtype: string
- name: extracted_answer_6
dtype: string
- name: num_boxed_6
dtype: int64
- name: grade_6
dtype: bool
- name: ans_token_len_6
dtype: int64
- name: finished_6
dtype: bool
- name: raw_answer_7
dtype: string
- name: extracted_answer_7
dtype: string
- name: num_boxed_7
dtype: int64
- name: grade_7
dtype: bool
- name: ans_token_len_7
dtype: int64
- name: finished_7
dtype: bool
- name: raw_answer_8
dtype: string
- name: extracted_answer_8
dtype: string
- name: num_boxed_8
dtype: int64
- name: grade_8
dtype: bool
- name: ans_token_len_8
dtype: int64
- name: finished_8
dtype: bool
- name: raw_answer_9
dtype: string
- name: extracted_answer_9
dtype: string
- name: num_boxed_9
dtype: int64
- name: grade_9
dtype: bool
- name: ans_token_len_9
dtype: int64
- name: finished_9
dtype: bool
- name: raw_answer_10
dtype: string
- name: extracted_answer_10
dtype: string
- name: num_boxed_10
dtype: int64
- name: grade_10
dtype: bool
- name: ans_token_len_10
dtype: int64
- name: finished_10
dtype: bool
- name: raw_answer_11
dtype: string
- name: extracted_answer_11
dtype: string
- name: num_boxed_11
dtype: int64
- name: grade_11
dtype: bool
- name: ans_token_len_11
dtype: int64
- name: finished_11
dtype: bool
- name: raw_answer_12
dtype: string
- name: extracted_answer_12
dtype: string
- name: num_boxed_12
dtype: int64
- name: grade_12
dtype: bool
- name: ans_token_len_12
dtype: int64
- name: finished_12
dtype: bool
- name: raw_answer_13
dtype: string
- name: extracted_answer_13
dtype: string
- name: num_boxed_13
dtype: int64
- name: grade_13
dtype: bool
- name: ans_token_len_13
dtype: int64
- name: finished_13
dtype: bool
- name: raw_answer_14
dtype: string
- name: extracted_answer_14
dtype: string
- name: num_boxed_14
dtype: int64
- name: grade_14
dtype: bool
- name: ans_token_len_14
dtype: int64
- name: finished_14
dtype: bool
- name: raw_answer_15
dtype: string
- name: extracted_answer_15
dtype: string
- name: num_boxed_15
dtype: int64
- name: grade_15
dtype: bool
- name: ans_token_len_15
dtype: int64
- name: finished_15
dtype: bool
- name: raw_answer_16
dtype: string
- name: extracted_answer_16
dtype: string
- name: num_boxed_16
dtype: int64
- name: grade_16
dtype: bool
- name: ans_token_len_16
dtype: int64
- name: finished_16
dtype: bool
- name: raw_answer_17
dtype: string
- name: extracted_answer_17
dtype: string
- name: num_boxed_17
dtype: int64
- name: grade_17
dtype: bool
- name: ans_token_len_17
dtype: int64
- name: finished_17
dtype: bool
- name: raw_answer_18
dtype: string
- name: extracted_answer_18
dtype: string
- name: num_boxed_18
dtype: int64
- name: grade_18
dtype: bool
- name: ans_token_len_18
dtype: int64
- name: finished_18
dtype: bool
- name: raw_answer_19
dtype: string
- name: extracted_answer_19
dtype: string
- name: num_boxed_19
dtype: int64
- name: grade_19
dtype: bool
- name: ans_token_len_19
dtype: int64
- name: finished_19
dtype: bool
- name: raw_answer_20
dtype: string
- name: extracted_answer_20
dtype: string
- name: num_boxed_20
dtype: int64
- name: grade_20
dtype: bool
- name: ans_token_len_20
dtype: int64
- name: finished_20
dtype: bool
- name: raw_answer_21
dtype: string
- name: extracted_answer_21
dtype: string
- name: num_boxed_21
dtype: int64
- name: grade_21
dtype: bool
- name: ans_token_len_21
dtype: int64
- name: finished_21
dtype: bool
- name: raw_answer_22
dtype: string
- name: extracted_answer_22
dtype: string
- name: num_boxed_22
dtype: int64
- name: grade_22
dtype: bool
- name: ans_token_len_22
dtype: int64
- name: finished_22
dtype: bool
- name: raw_answer_23
dtype: string
- name: extracted_answer_23
dtype: string
- name: num_boxed_23
dtype: int64
- name: grade_23
dtype: bool
- name: ans_token_len_23
dtype: int64
- name: finished_23
dtype: bool
- name: raw_answer_24
dtype: string
- name: extracted_answer_24
dtype: string
- name: num_boxed_24
dtype: int64
- name: grade_24
dtype: bool
- name: ans_token_len_24
dtype: int64
- name: finished_24
dtype: bool
- name: raw_answer_25
dtype: string
- name: extracted_answer_25
dtype: string
- name: num_boxed_25
dtype: int64
- name: grade_25
dtype: bool
- name: ans_token_len_25
dtype: int64
- name: finished_25
dtype: bool
- name: raw_answer_26
dtype: string
- name: extracted_answer_26
dtype: string
- name: num_boxed_26
dtype: int64
- name: grade_26
dtype: bool
- name: ans_token_len_26
dtype: int64
- name: finished_26
dtype: bool
- name: raw_answer_27
dtype: string
- name: extracted_answer_27
dtype: string
- name: num_boxed_27
dtype: int64
- name: grade_27
dtype: bool
- name: ans_token_len_27
dtype: int64
- name: finished_27
dtype: bool
- name: raw_answer_28
dtype: string
- name: extracted_answer_28
dtype: string
- name: num_boxed_28
dtype: int64
- name: grade_28
dtype: bool
- name: ans_token_len_28
dtype: int64
- name: finished_28
dtype: bool
- name: raw_answer_29
dtype: string
- name: extracted_answer_29
dtype: string
- name: num_boxed_29
dtype: int64
- name: grade_29
dtype: bool
- name: ans_token_len_29
dtype: int64
- name: finished_29
dtype: bool
- name: raw_answer_30
dtype: string
- name: extracted_answer_30
dtype: string
- name: num_boxed_30
dtype: int64
- name: grade_30
dtype: bool
- name: ans_token_len_30
dtype: int64
- name: finished_30
dtype: bool
- name: raw_answer_31
dtype: string
- name: extracted_answer_31
dtype: string
- name: num_boxed_31
dtype: int64
- name: grade_31
dtype: bool
- name: ans_token_len_31
dtype: int64
- name: finished_31
dtype: bool
- name: raw_answer_32
dtype: string
- name: extracted_answer_32
dtype: string
- name: num_boxed_32
dtype: int64
- name: grade_32
dtype: bool
- name: ans_token_len_32
dtype: int64
- name: finished_32
dtype: bool
- name: raw_answer_33
dtype: string
- name: extracted_answer_33
dtype: string
- name: num_boxed_33
dtype: int64
- name: grade_33
dtype: bool
- name: ans_token_len_33
dtype: int64
- name: finished_33
dtype: bool
- name: raw_answer_34
dtype: string
- name: extracted_answer_34
dtype: string
- name: num_boxed_34
dtype: int64
- name: grade_34
dtype: bool
- name: ans_token_len_34
dtype: int64
- name: finished_34
dtype: bool
- name: raw_answer_35
dtype: string
- name: extracted_answer_35
dtype: string
- name: num_boxed_35
dtype: int64
- name: grade_35
dtype: bool
- name: ans_token_len_35
dtype: int64
- name: finished_35
dtype: bool
- name: raw_answer_36
dtype: string
- name: extracted_answer_36
dtype: string
- name: num_boxed_36
dtype: int64
- name: grade_36
dtype: bool
- name: ans_token_len_36
dtype: int64
- name: finished_36
dtype: bool
- name: raw_answer_37
dtype: string
- name: extracted_answer_37
dtype: string
- name: num_boxed_37
dtype: int64
- name: grade_37
dtype: bool
- name: ans_token_len_37
dtype: int64
- name: finished_37
dtype: bool
- name: raw_answer_38
dtype: string
- name: extracted_answer_38
dtype: string
- name: num_boxed_38
dtype: int64
- name: grade_38
dtype: bool
- name: ans_token_len_38
dtype: int64
- name: finished_38
dtype: bool
- name: raw_answer_39
dtype: string
- name: extracted_answer_39
dtype: string
- name: num_boxed_39
dtype: int64
- name: grade_39
dtype: bool
- name: ans_token_len_39
dtype: int64
- name: finished_39
dtype: bool
splits:
- name: train
num_bytes: 79416092
num_examples: 100
download_size: 17818412
dataset_size: 79416092
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Lansechen/details_Qwen__Qwen2.5-7B | Lansechen | 2025-03-28T03:31:14Z | 7 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-28T03:13:02Z | 0 | ---
pretty_name: Evaluation run of Qwen/Qwen2.5-7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B).\n\nThe dataset is composed\
\ of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe\
\ dataset has been created from 3 run(s). Each run can be found as a specific split\
\ in each configuration, the split being named using the timestamp of the run.The\
\ \"train\" split is always pointing to the latest results.\n\nAn additional configuration\
\ \"results\" store all the aggregated results of the run.\n\nTo load the details\
\ from a run, you can for instance do the following:\n```python\nfrom datasets import\
\ load_dataset\ndata = load_dataset(\"Lansechen/details_Qwen__Qwen2.5-7B\",\n\t\"\
results\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest\
\ results from run 2025-03-28T11:31:03.198625](https://huggingface.co/datasets/Lansechen/details_Qwen__Qwen2.5-7B/blob/main/results_2025-03-28T11-31-03.198625.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"extractive_match\": 0.1,\n\
\ \"extractive_match_stderr\": 0.055708601453115555\n },\n \"custom|aime24|0\"\
: {\n \"extractive_match\": 0.1,\n \"extractive_match_stderr\": 0.055708601453115555\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Qwen/Qwen2.5-7B
configs:
- config_name: custom_aime24_0
data_files:
- split: 2025_03_28T11_31_03.198625
path:
- '**/details_custom|aime24|0_2025-03-28T11-31-03.198625.parquet'
- split: latest
path:
- '**/details_custom|aime24|0_2025-03-28T11-31-03.198625.parquet'
- config_name: custom_gpqa_diamond_0
data_files:
- split: 2025_03_28T11_13_01.819468
path:
- '**/details_custom|gpqa:diamond|0_2025-03-28T11-13-01.819468.parquet'
- split: latest
path:
- '**/details_custom|gpqa:diamond|0_2025-03-28T11-13-01.819468.parquet'
- config_name: custom_math_500_0
data_files:
- split: 2025_03_28T11_23_10.770535
path:
- '**/details_custom|math_500|0_2025-03-28T11-23-10.770535.parquet'
- split: latest
path:
- '**/details_custom|math_500|0_2025-03-28T11-23-10.770535.parquet'
- config_name: results
data_files:
- split: 2025_03_28T11_13_01.819468
path:
- results_2025-03-28T11-13-01.819468.parquet
- split: 2025_03_28T11_23_10.770535
path:
- results_2025-03-28T11-23-10.770535.parquet
- split: 2025_03_28T11_31_03.198625
path:
- results_2025-03-28T11-31-03.198625.parquet
- split: latest
path:
- results_2025-03-28T11-31-03.198625.parquet
---
# Dataset Card for Evaluation run of Qwen/Qwen2.5-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("Lansechen/details_Qwen__Qwen2.5-7B",
"results",
split="train")
```
## Latest results
These are the [latest results from run 2025-03-28T11:31:03.198625](https://huggingface.co/datasets/Lansechen/details_Qwen__Qwen2.5-7B/blob/main/results_2025-03-28T11-31-03.198625.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"extractive_match": 0.1,
"extractive_match_stderr": 0.055708601453115555
},
"custom|aime24|0": {
"extractive_match": 0.1,
"extractive_match_stderr": 0.055708601453115555
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
# Dataset Card for Hugging Face Hub Dataset Cards
This dataset consists of dataset cards for datasets hosted on the Hugging Face Hub. The dataset cards are created by the community and provide information about datasets hosted on the Hugging Face Hub. This dataset is updated on a daily basis and includes publicly available datasets on the Hugging Face Hub.
This dataset is made available to help support users wanting to work with a large number of Dataset Cards from the Hub. We hope that this dataset will help support research in the area of Dataset Cards and their use, but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new discussion.
## Dataset Details
## Uses
There are a number of potential uses for this dataset including:
- text mining to find common themes in dataset cards
- analysis of the dataset card format/content
- topic modelling of dataset cards
- training language models on the dataset cards
### Out-of-Scope Use
[More Information Needed]
## Dataset Structure
This dataset has a single split.
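For orientation, a minimal sketch of loading the cards and scanning their text; the repo id and the card-text column name below are assumptions and should be replaced with this dataset's actual values.

```python
from datasets import load_dataset

# Placeholder repo id and column name; substitute the actual values for this dataset.
REPO_ID = "org/dataset-cards-with-metadata"
CARD_COLUMN = "card"

ds = load_dataset(REPO_ID, split="train")

# Simple scan: how many cards mention a citation section?
n_citation = sum("citation" in (row[CARD_COLUMN] or "").lower() for row in ds)
print(f"{n_citation} of {ds.num_rows} cards mention a citation")
```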
## Dataset Creation
### Curation Rationale
The dataset was created to assist people in working with dataset cards. In particular it was created to support research in the area of dataset cards and their use. It is possible to use the Hugging Face Hub API or client library to download dataset cards and this option may be preferable if you have a very specific use case or require a different format.
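As a concrete illustration of that route, here is a minimal sketch using the `huggingface_hub` client library; the example repo id is arbitrary.

```python
from huggingface_hub import DatasetCard, list_datasets

# Fetch the card of a single dataset repo directly from the Hub.
card = DatasetCard.load("squad")
print(card.data.to_dict())  # parsed YAML metadata block
print(card.text[:500])      # markdown body of the README

# Or enumerate repos first and pull cards selectively.
for info in list_datasets(limit=5):
    print(info.id)
```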
### Source Data
The source data is README.md
files for datasets hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the dataset directory.
#### Data Collection and Processing
The data is downloaded using a CRON job on a daily basis.
#### Who are the source data producers?
The source data producers are the creators of the dataset cards on the Hugging Face Hub. This includes a broad variety of people from the community ranging from large companies to individual researchers. We do not gather any information about who created the dataset card in this repository although this information can be gathered from the Hugging Face Hub API.
### Annotations [optional]
There are no additional annotations in this dataset beyond the dataset card content.
#### Annotation process
N/A
#### Who are the annotators?
N/A
#### Personal and Sensitive Information
We make no effort to anonymize the data. Whilst we don't expect the majority of dataset cards to contain personal or sensitive information, it is possible that some dataset cards may contain this information. Dataset cards may also link to websites or email addresses.
## Bias, Risks, and Limitations
Dataset cards are created by the community and we do not have any control over their content. We do not review the content of the dataset cards and we do not make any claims about the accuracy of the information in the dataset cards. Some dataset cards will themselves discuss bias, sometimes by providing examples of bias in the underlying data. As a result this dataset may contain examples of bias.
Whilst we do not directly download any images linked to in the dataset cards, some dataset cards may include images. Some of these images may not be suitable for all audiences.
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation
No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.
## Dataset Card Authors
## Dataset Card Contact