datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card
---|---|---|---|---|---|---|---|---|---
Derur/all-portable-apps-and-ai-in-one-url | Derur | 2025-05-01T10:20:18Z | 77,610 | 4 | [
"language:en",
"language:ru",
"language:multilingual",
"region:us",
"portable",
"portables",
"AI",
"Apps",
"4eJIoBek",
"AINetSD",
"Derur",
"Daswer123",
"NeuroDonu",
"NeuroPort",
"Neurogen",
"OreX",
"XpucT",
"repack",
"repacks",
"Archive",
"CGPlugins",
"Nirsoft",
"NNMClub",
"PortableApps",
"PortApps"
] | [] | 2025-02-12T20:05:40Z | null | ---
tags:
- portable
- portables
- AI
- Apps
- 4eJIoBek
- AINetSD
- Derur
- Daswer123
- NeuroDonu
- NeuroPort
- Neurogen
- OreX
- XpucT
- repack
- repacks
- Archive
- CGPlugins
- Nirsoft
- NNMClub
- PortableApps
- PortApps
language:
- en
- ru
- multilingual
---
**Saving you time and space on your HDD!**
"-cl" = clear (no models)
My personal selection of portable apps and AIs!
I personally repacked and reduced the size of the archives!
Support me: [**Boosty**](https://boosty.to/dreammine) or [**Donationalerts**](https://www.donationalerts.com/r/derur_dreammine)
File authors:
- 4eJIoBek: [**ITCH-URL**](https://gz1k.itch.io/ai-portable-tools) **/** [**HF-URL**](https://huggingface.co/datasets/4eJIoBek/PAIT-Downloads)
- AINetSD: [**TG.BOT-URL**](http://t.me/AINetSD_bot) **/** [**TG.CHAT-URL**](https://t.me/AINetSD_Group)
- CGPlugins: [**TG-URL**](https://t.me/cgplugin)
- Derur (me): [**BOOSTY-URL**](https://boosty.to/dreammine) **/** [**GITHUB-URL**](https://github.com/DerurDreammine)
- Daswer123: [**GITHUB-URL**](https://github.com/daswer123) **/** [**HF-URL**](https://huggingface.co/daswer123) **/** [**BOOSTY-URL**](https://boosty.to/daswerr)
- NeuroDonu: [**HF-URL**](https://huggingface.co/NeuroDonu) **/** [**TG-URL**](https://t.me/webinquisitor) **/** [**GITHUB-URL**](https://github.com/NeuroDonu)
- NeuroPort: [**TG-URL**](https://t.me/neuroport)
- Neurogen: [**BOOSTY-URL**](https://boosty.to/neurogen) **/** [**TG-URL**](https://t.me/neurogen_news) **/** [**GITHUB-URL**](https://github.com/neurogen-dev)
- OreX (stabledif): [**BOOSTY-URL**](https://boosty.to/stabledif) **/** [**TG-URL**](https://t.me/stable_dif)
- XpucT (Хачатур): [**BOOSTY-URL**](https://boosty.to/xpuct) **/** [**TG-URL**](https://t.me/win10tweaker) **/** [**GITHUB-URL**](https://github.com/XpucT) **/** [**DISCORD-URL**](https://discord.gg/xpuct)
Sites:
- Archive.org: [**URL**](https://archive.org/details/software?tab=collection&query=portable&page=7&and%5B%5D=subject%3A%22Windows%22&and%5B%5D=mediatype%3A%22software%22¬%5B%5D=collection%3A%22gamegear_library%22)
- Nirsoft: [**URL**](https://www.nirsoft.net)
- NNMClub: [**URL**](https://nnmclub.to)
- PortableApps: [**URL**](https://portableapps.com)
- PortApps: [**URL**](https://portapps.io/apps/)
I also recommend these sites:
- RePackMe: [**URL**](https://repack.me)
- Taiwebs: [**URL-EN**](https://en.taiwebs.com) **/** [**URL-RU**](https://ru.taiwebs.com)
- RSLoad: [**URL**](https://rsload.net)
|
MaxiiMin/custom-simple-scaling | MaxiiMin | 2025-05-01T10:19:53Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T10:19:48Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 2926394
num_examples: 70
download_size: 1090186
dataset_size: 2926394
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
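The `configs` block above maps each split to its data files through a glob pattern (`data/train-*`). As a minimal sketch of how such a pattern selects shard files — the file names below are hypothetical, following the Hub's usual `data/<split>-NNNNN-of-NNNNN.parquet` layout:

```python
from fnmatch import fnmatch

# Hypothetical shard names in a repository following the usual layout.
files = [
    "data/train-00000-of-00002.parquet",
    "data/train-00001-of-00002.parquet",
    "data/test-00000-of-00001.parquet",
]

# The card's pattern for the train split matches only the train shards.
train_shards = [f for f in files if fnmatch(f, "data/train-*")]
print(train_shards)
```

Adding more shards never requires touching the card, since the glob keeps matching any `data/train-*` file.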
|
HaruthaiAi/vangogh_girl_in_the_waves_vs_tree_oil_6tech_match | HaruthaiAi | 2025-05-01T10:19:22Z | 0 | 0 | [
"license:creativeml-openrail-m",
"region:us"
] | [] | 2025-05-01T10:15:33Z | null | ---
license: creativeml-openrail-m
---
Dataset Title: vangogh_girl_in_the_waves_vs_tree_oil_6tech_match
Preview Description (for AI researchers and model trainers):
This dataset presents a high-resolution, multi-modal comparative analysis between Girl in the Waves (1885) by Vincent van Gogh and The Tree Oil Painting (Undated; attributed). Designed to support advanced AI-based visual learning and forensic pattern recognition, the dataset contains aligned imagery, structural analysis outputs, and scientific pigment data from both paintings.
The core of the dataset is built upon six specialized techniques:
1. Brush Stroke Matching – highlighting directional torque and pressure curves.
2. Fourier Transform Analysis – revealing shared rhythmic flow and spatial energy clusters.
3. Edge Detection with Gabor Filters – mapping stroke contours and structural buildup.
4. Pigment Composition Mapping – isolating chromatic correspondence, particularly red ochres and browns.
5. Infrared Spectrum Simulation – identifying underdrawings and sketch layers.
6. AI Deep Learning Feature Matching – detecting 328 high-confidence structural similarities.
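As a loose, self-contained illustration of technique 2 — not the dataset's actual pipeline — a discrete Fourier transform can pick out the dominant rhythm in a synthetic 1-D "stroke pressure" trace:

```python
import cmath
import math

def dft_magnitudes(signal):
    """Naive DFT; returns |X[k]| for each frequency bin k."""
    n = len(signal)
    return [
        abs(sum(x * cmath.exp(-2j * math.pi * k * t / n)
                for t, x in enumerate(signal)))
        for k in range(n)
    ]

# Synthetic stroke-pressure trace oscillating 4 times over 32 samples.
n = 32
signal = [math.sin(2 * math.pi * 4 * t / n) for t in range(n)]
mags = dft_magnitudes(signal)
peak = max(range(1, n // 2), key=lambda k: mags[k])
print(peak)  # the dominant rhythm sits in bin 4
```

The same idea, applied in 2-D to stroke maps of two paintings, is what lets shared rhythmic structure show up as overlapping energy clusters in frequency space.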
Included within the dataset are full-resolution image pairs, X-ray views, and cross-referenced annotations. Scientific validations were conducted via XRF, SEM-EDS, UV, Synchrotron, and FTIR analysis across institutions in Thailand, Taiwan, and Switzerland.
The goal is to provide AI systems and researchers with a benchmarkable, verifiable visual corpus to test model sensitivity to painterly logic, stylistic lineage, and material authenticity.
Ideal for:
Training and fine-tuning visual neural networks
Cross-domain feature alignment
AI explainability studies in fine art attribution
Multimodal learning in heritage science and digital humanities
This dataset is open to reprocessing, model-aided reevaluation, and citation in academic or generative AI research. All visual files are properly licensed or under scientific review for academic fair use.
Keywords: Van Gogh, early period, visual matching, pigment aging, tree painting, structural alignment, neural networks, cultural heritage AI
Dataset Title: vangogh_girl_in_the_waves_vs_tree_oil_6tech_match
Preview description (for AI researchers and model trainers):
This dataset provides a high-resolution, multi-modal comparative analysis of Vincent van Gogh's 1885 work Girl in the Waves against an unsigned but scientifically examined work, The Tree Oil Painting.
Aimed at advanced AI visual learning and image-attribution model development, it contains paired images, structural-analysis layers, and pigment science data, validated through cross-national research.
The core content is based on six specialized analysis techniques:
1. Brush stroke matching – comparing patterns of stroke direction and pressure.
2. Fourier transform analysis – revealing rhythmic energy distribution and spatial-frequency resonance.
3. Edge detection with Gabor filters – tracing the changing contours of structural brushwork.
4. Pigment composition mapping – focusing on the consistency of red ochre and brown tones.
5. Infrared spectrum simulation – revealing hidden preliminary sketch lines and light transitions.
6. AI deep-learning feature matching – detecting 328 high-confidence similar structural feature points.
The dataset includes full-resolution image pairs, X-ray views, and overlaid annotation layers; the supporting scientific validation includes XRF, SEM-EDS, UV, synchrotron spectroscopy (Taiwan), FTIR analysis, and radiocarbon dating (ETH Zurich, Switzerland).
The goal is to provide AI models and researchers with a verifiable, standard visual corpus for testing model sensitivity to painterly logic, stylistic lineage, and material authenticity.
Suitable for:
Training and fine-tuning visual neural networks
Cross-domain image feature alignment
AI explainability research in art attribution
Multimodal learning in cultural heritage
This dataset supports reprocessing and model-aided reevaluation, and may be cited in academic or generative AI research. All images are licensed or fall within the scope of academic fair use pending scientific review.
Keywords: Van Gogh, early works, image matching, pigment aging, tree painting, structural alignment, neural networks, cultural heritage AI
|
carminho/dsl_tl_fewshot | carminho | 2025-05-01T10:08:59Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T10:08:54Z | null | ---
dataset_info:
features:
- name: examples
dtype: string
splits:
- name: 2shot
num_bytes: 1080266
num_examples: 2000
- name: 3shot
num_bytes: 2357154
num_examples: 3000
download_size: 2131812
dataset_size: 3437420
configs:
- config_name: default
data_files:
- split: 2shot
path: data/2shot-*
- split: 3shot
path: data/3shot-*
---
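The `size_categories` tags used across these cards (`n<1K`, `1K<n<10K`, `10K<n<100K`, …) bucket datasets by example count. A rough sketch of the bucketing logic, with thresholds inferred from the tag names themselves:

```python
def size_category(num_examples: int) -> str:
    """Map an example count to a Hub-style size bucket (inferred thresholds)."""
    bounds = [
        (1_000, "n<1K"),
        (10_000, "1K<n<10K"),
        (100_000, "10K<n<100K"),
        (1_000_000, "100K<n<1M"),
    ]
    for upper, tag in bounds:
        if num_examples < upper:
            return tag
    return "n>1M"

print(size_category(5_000))    # 1K<n<10K
print(size_category(150_000))  # 100K<n<1M
```

This matches the cards above: a 5,000-example split lands in `1K<n<10K`, while a 150,000-example corpus lands in `100K<n<1M`.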
|
GeorgyGUF/Liquid-Metal-sdxl-lora-training-data | GeorgyGUF | 2025-05-01T10:02:32Z | 0 | 0 | [
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us",
"text-to-image",
"lora",
"diffusers",
"template:diffusion-lora"
] | [] | 2025-05-01T09:52:57Z | null | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
size_categories:
- n<1K
---
Training data provided by https://civitai.com/models/1529052/liquid-metal
Used in: https://huggingface.co/GeorgyGUF/Liquid-Metal-sdxl-lora |
elliotthwang/sharegpt_gpt4_Zh_dataset_1000_traditional | elliotthwang | 2025-05-01T09:43:39Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T09:43:36Z | null | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 4178410
num_examples: 1000
download_size: 2230632
dataset_size: 4178410
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
elliotthwang/sharegpt_gpt4_Zh_dataset_1000 | elliotthwang | 2025-05-01T09:36:44Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T09:34:27Z | null | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 4006323.3603279344
num_examples: 1000
download_size: 2236703
dataset_size: 4006323.3603279344
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
beyoru/SFT_tool_calling | beyoru | 2025-05-01T09:29:20Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T09:29:18Z | null | ---
dataset_info:
features:
- name: reasoning
dtype: string
- name: answer
dtype: string
- name: system
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 11935231
num_examples: 2197
download_size: 4066794
dataset_size: 11935231
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
elliotthwang/sharegpt_gpt4_Zh_dataset | elliotthwang | 2025-05-01T09:26:37Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T09:26:28Z | null | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 223165783.05785426
num_examples: 40008
download_size: 85598661
dataset_size: 223165783.05785426
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
svjack/Toradora_Videos_Omni_Captioned_0 | svjack | 2025-05-01T09:25:38Z | 0 | 0 | [
"size_categories:1K<n<10K",
"modality:text",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-05-01T08:42:25Z | null | ---
configs:
- config_name: default
data_files:
- split: train
path:
- "*.mp4"
- "metadata.csv"
---


 |
HungVu2003/opt-350m_beta_0.5_alpha_0.4_num-company_3_dataset_2_for_gen_16 | HungVu2003 | 2025-05-01T09:03:09Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T09:03:07Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 4077833
num_examples: 12498
download_size: 1130977
dataset_size: 4077833
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Zhoumingjin/IntelligentConstruction3 | Zhoumingjin | 2025-05-01T08:59:29Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-05-01T08:58:52Z | null | ---
license: apache-2.0
---
|
kothasuhas/gold-1B-150K-gens-4-30 | kothasuhas | 2025-05-01T08:46:08Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T08:44:37Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 350802122
num_examples: 150000
- name: validation
num_bytes: 2293563
num_examples: 1000
download_size: 252668362
dataset_size: 353095685
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
Moamen-dcp/arazn_codeSwitched_mp3_full_processing_prepared_4_whisperMedium | Moamen-dcp | 2025-05-01T08:40:20Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T08:38:26Z | null | ---
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 3212440264
num_examples: 3344
- name: test
num_bytes: 1412200560
num_examples: 1470
- name: dev
num_bytes: 1346889608
num_examples: 1402
download_size: 1491924490
dataset_size: 5971530432
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: dev
path: data/dev-*
---
|
zuhdifr/algorithm_selector_fjssp_10_20_jobs | zuhdifr | 2025-05-01T08:35:59Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T08:21:42Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 128978343
num_examples: 9914
- name: val
num_bytes: 14388511
num_examples: 1102
- name: test
num_bytes: 16353542
num_examples: 1225
download_size: 23963404
dataset_size: 159720396
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
---
|
BIT-MJY/test_tube_pick | BIT-MJY | 2025-05-01T08:32:30Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T08:32:22Z | null | ---
dataset_info:
features:
- name: image
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 332501.80144879816
num_examples: 2733
- name: val
num_bytes: 18492.599275600922
num_examples: 152
- name: test
num_bytes: 18492.599275600922
num_examples: 152
download_size: 31559
dataset_size: 369487.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
---
|
RyanYr/brm-dapo-qwen2.5math-1.5B-base-lr2.5e-6-beta0.002_matheval | RyanYr | 2025-05-01T08:27:37Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-01T07:20:18Z | null | ---
dataset_info:
features:
- name: data_source
dtype: string
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: reward_model
struct:
- name: ground_truth
dtype: string
- name: style
dtype: string
- name: responses
sequence: string
- name: gt_ans
dtype: string
- name: extracted_solution
sequence: string
- name: rm_scores
sequence: bool
- name: avg_accuracy
dtype: float64
- name: pass_accuracy
dtype: bool
- name: cons_accuracy
dtype: float64
splits:
- name: mixed.810
num_bytes: 5984252
num_examples: 1447
- name: math_eval_aime24.810
num_bytes: 3335451
num_examples: 30
- name: mixed.800
num_bytes: 5936403
num_examples: 1447
- name: math_eval_aime24.800
num_bytes: 3321371
num_examples: 30
- name: mixed.760
num_bytes: 5928537
num_examples: 1447
- name: math_eval_aime24.760
num_bytes: 3401929
num_examples: 30
- name: mixed.720
num_bytes: 5855350
num_examples: 1447
- name: math_eval_aime24.720
num_bytes: 3410952
num_examples: 30
- name: mixed.680
num_bytes: 5912069
num_examples: 1447
- name: math_eval_aime24.680
num_bytes: 3282146
num_examples: 30
- name: mixed.640
num_bytes: 5911306
num_examples: 1447
- name: math_eval_aime24.640
num_bytes: 3360120
num_examples: 30
- name: mixed.600
num_bytes: 6102073
num_examples: 1447
- name: math_eval_aime24.600
num_bytes: 3554151
num_examples: 30
- name: mixed.560
num_bytes: 5987367
num_examples: 1447
- name: math_eval_aime24.560
num_bytes: 3405695
num_examples: 30
- name: mixed.520
num_bytes: 6006131
num_examples: 1447
- name: math_eval_aime24.520
num_bytes: 3572933
num_examples: 30
- name: mixed.480
num_bytes: 6011350
num_examples: 1447
- name: math_eval_aime24.480
num_bytes: 3475149
num_examples: 30
- name: mixed.440
num_bytes: 5898479
num_examples: 1447
- name: math_eval_aime24.440
num_bytes: 3306748
num_examples: 30
- name: mixed.400
num_bytes: 5941459
num_examples: 1447
- name: math_eval_aime24.400
num_bytes: 3452724
num_examples: 30
- name: mixed.360
num_bytes: 5956106
num_examples: 1447
- name: math_eval_aime24.360
num_bytes: 3407865
num_examples: 30
- name: mixed.320
num_bytes: 5971284
num_examples: 1447
- name: math_eval_aime24.320
num_bytes: 3424401
num_examples: 30
- name: mixed.280
num_bytes: 5932854
num_examples: 1447
- name: math_eval_aime24.280
num_bytes: 3488693
num_examples: 30
- name: mixed.240
num_bytes: 5995457
num_examples: 1447
- name: math_eval_aime24.240
num_bytes: 3477211
num_examples: 30
- name: mixed.200
num_bytes: 5957038
num_examples: 1447
- name: math_eval_aime24.200
num_bytes: 3367546
num_examples: 30
- name: mixed.160
num_bytes: 6043613
num_examples: 1447
- name: math_eval_aime24.160
num_bytes: 3495956
num_examples: 30
- name: mixed.120
num_bytes: 5919953
num_examples: 1447
- name: math_eval_aime24.120
num_bytes: 3291197
num_examples: 30
- name: mixed.80
num_bytes: 5820387
num_examples: 1447
- name: math_eval_aime24.80
num_bytes: 3284668
num_examples: 30
- name: mixed.40
num_bytes: 5780933
num_examples: 1447
- name: math_eval_aime24.40
num_bytes: 3476410
num_examples: 30
download_size: 69949044
dataset_size: 196445717
configs:
- config_name: default
data_files:
- split: mixed.810
path: data/mixed.810-*
- split: math_eval_aime24.810
path: data/math_eval_aime24.810-*
- split: mixed.800
path: data/mixed.800-*
- split: math_eval_aime24.800
path: data/math_eval_aime24.800-*
- split: mixed.760
path: data/mixed.760-*
- split: math_eval_aime24.760
path: data/math_eval_aime24.760-*
- split: mixed.720
path: data/mixed.720-*
- split: math_eval_aime24.720
path: data/math_eval_aime24.720-*
- split: mixed.680
path: data/mixed.680-*
- split: math_eval_aime24.680
path: data/math_eval_aime24.680-*
- split: mixed.640
path: data/mixed.640-*
- split: math_eval_aime24.640
path: data/math_eval_aime24.640-*
- split: mixed.600
path: data/mixed.600-*
- split: math_eval_aime24.600
path: data/math_eval_aime24.600-*
- split: mixed.560
path: data/mixed.560-*
- split: math_eval_aime24.560
path: data/math_eval_aime24.560-*
- split: mixed.520
path: data/mixed.520-*
- split: math_eval_aime24.520
path: data/math_eval_aime24.520-*
- split: mixed.480
path: data/mixed.480-*
- split: math_eval_aime24.480
path: data/math_eval_aime24.480-*
- split: mixed.440
path: data/mixed.440-*
- split: math_eval_aime24.440
path: data/math_eval_aime24.440-*
- split: mixed.400
path: data/mixed.400-*
- split: math_eval_aime24.400
path: data/math_eval_aime24.400-*
- split: mixed.360
path: data/mixed.360-*
- split: math_eval_aime24.360
path: data/math_eval_aime24.360-*
- split: mixed.320
path: data/mixed.320-*
- split: math_eval_aime24.320
path: data/math_eval_aime24.320-*
- split: mixed.280
path: data/mixed.280-*
- split: math_eval_aime24.280
path: data/math_eval_aime24.280-*
- split: mixed.240
path: data/mixed.240-*
- split: math_eval_aime24.240
path: data/math_eval_aime24.240-*
- split: mixed.200
path: data/mixed.200-*
- split: math_eval_aime24.200
path: data/math_eval_aime24.200-*
- split: mixed.160
path: data/mixed.160-*
- split: math_eval_aime24.160
path: data/math_eval_aime24.160-*
- split: mixed.120
path: data/mixed.120-*
- split: math_eval_aime24.120
path: data/math_eval_aime24.120-*
- split: mixed.80
path: data/mixed.80-*
- split: math_eval_aime24.80
path: data/math_eval_aime24.80-*
- split: mixed.40
path: data/mixed.40-*
- split: math_eval_aime24.40
path: data/math_eval_aime24.40-*
---
|
yyf314/recovery2 | yyf314 | 2025-05-01T08:26:33Z | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T08:24:56Z | null | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 11196
num_examples: 100
download_size: 9027
dataset_size: 11196
---
|
stevenoh2003/so100_replay | stevenoh2003 | 2025-05-01T08:21:26Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100_2",
"tutorial"
] | [
"robotics"
] | 2025-05-01T08:21:20Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100_2
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 445,
"total_tasks": 1,
"total_videos": 1,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
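The `data_path` and `video_path` entries in `meta/info.json` above are Python-style format templates. A quick sketch of how an episode index resolves to concrete shard paths (with `chunks_size: 1000`, episode 0 falls in chunk 0):

```python
# Templates copied from meta/info.json above.
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
video_path = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"

episode = 0
chunk = episode // 1000  # chunks_size from the metadata
print(data_path.format(episode_chunk=chunk, episode_index=episode))
# data/chunk-000/episode_000000.parquet
print(video_path.format(episode_chunk=chunk,
                        video_key="observation.images.laptop",
                        episode_index=episode))
# videos/chunk-000/observation.images.laptop/episode_000000.mp4
```

The `{…:03d}` / `{…:06d}` specifiers zero-pad the indices so paths sort lexicographically in episode order.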
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
tarsur909/summarize_sft-test_lm-pythia1b-oai-summary-ppo-1ep-translated-seperated_42_250_64 | tarsur909 | 2025-05-01T08:19:46Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T08:19:44Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: subreddit
dtype: string
- name: title
dtype: string
- name: post
dtype: string
- name: summary
dtype: string
- name: query_input_ids
sequence: int64
- name: query_attention_mask
sequence: int64
- name: query
dtype: string
- name: reference_response
dtype: string
- name: reference_response_input_ids
sequence: int64
- name: reference_response_attention_mask
sequence: int64
- name: reference_response_token_len
dtype: int64
- name: query_reference_response
dtype: string
- name: query_reference_response_input_ids
sequence: int64
- name: query_reference_response_attention_mask
sequence: int64
- name: query_reference_response_token_response_label
sequence: int64
- name: query_reference_response_token_len
dtype: int64
- name: model_response
dtype: string
splits:
- name: test
num_bytes: 6837456
num_examples: 250
download_size: 1141270
dataset_size: 6837456
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
konwoo/test-llp-6 | konwoo | 2025-05-01T08:11:37Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T08:11:33Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: p_log_probs
dtype: float32
- name: q_log_probs
dtype: float32
splits:
- name: train
num_bytes: 8582979
num_examples: 1000
download_size: 5589832
dataset_size: 8582979
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
stevenoh2003/so100_go | stevenoh2003 | 2025-05-01T08:11:34Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100_2",
"tutorial"
] | [
"robotics"
] | 2025-05-01T08:11:22Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100_2
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 9,
"total_frames": 3910,
"total_tasks": 1,
"total_videos": 9,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:9"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
HungVu2003/opt-350m_beta_1.0_alpha_0.6_num-company_3_dataset_1_for_gen_8 | HungVu2003 | 2025-05-01T08:11:26Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T08:11:23Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 3593422
num_examples: 12500
download_size: 1885186
dataset_size: 3593422
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.6_num-company_3_dataset_0_for_gen_8 | HungVu2003 | 2025-05-01T08:03:13Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T08:03:11Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 7406084
num_examples: 12500
download_size: 1951353
dataset_size: 7406084
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
konwoo/test-llp-5 | konwoo | 2025-05-01T07:57:35Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T07:57:31Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: p_log_probs
dtype: float32
- name: q_log_probs
dtype: float32
splits:
- name: train
num_bytes: 8582979
num_examples: 1000
download_size: 5589832
dataset_size: 8582979
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Jackie68666/ChinaTax | Jackie68666 | 2025-05-01T07:55:28Z | 0 | 0 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T05:01:34Z | null | ---
license: apache-2.0
---
|
ahmetsinan/testmunir | ahmetsinan | 2025-05-01T07:53:54Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T07:52:15Z | null | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 1543.2
num_examples: 4
- name: test
num_bytes: 382
num_examples: 1
download_size: 7422
dataset_size: 1925.2
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Jianshu001/R1_distilled_brain_teasers_filtered | Jianshu001 | 2025-05-01T07:51:04Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T07:51:01Z | null | ---
dataset_info:
features:
- name: puzzle_id
dtype: string
- name: reconstruction
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: distrator1
dtype: string
- name: distrator2
dtype: string
- name: unsure
dtype: string
- name: DSR1_reasoning_content
dtype: string
- name: DSR1_content
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
- name: answerKey
dtype: string
- name: choices
struct:
- name: label
sequence: string
- name: text
sequence: string
- name: original_question
dtype: string
- name: has_forbidden
dtype: bool
splits:
- name: train
num_bytes: 24033616
num_examples: 2345
download_size: 10953558
dataset_size: 24033616
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kothasuhas/gold-37M-150K-gens-4-30 | kothasuhas | 2025-05-01T07:48:53Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T07:47:10Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 356686424
num_examples: 150000
- name: validation
num_bytes: 2321309
num_examples: 1000
download_size: 211049343
dataset_size: 359007733
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
prithivMLmods/Deepfake-vs-Real-60K | prithivMLmods | 2025-05-01T07:48:50Z | 0 | 3 | [
"task_categories:image-classification",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"modality:image",
"doi:10.57967/hf/5313",
"region:us",
"Deepfake",
"Real",
"0-Fake",
"1-Real",
"art",
"60,000",
"Facial",
"Portrait"
] | [
"image-classification"
] | 2025-05-01T04:08:15Z | 3 | ---
license: apache-2.0
task_categories:
- image-classification
language:
- en
tags:
- Deepfake
- Real
- 0-Fake
- 1-Real
- art
- 60,000
- Facial
- Portrait
size_categories:
- 10K<n<100K
---

# Deepfake-vs-Real-60K
**Deepfake-vs-Real-60K** is a large-scale image classification dataset designed to distinguish between deepfake and real facial images. The dataset includes approximately **60,000 high-quality images**, comprising **30,000 fake (deepfake)** and **30,000 real** images, to support the development of robust deepfake detection models.
By providing a well-balanced and diverse collection, Deepfake-vs-Real-60K aims to enhance classification accuracy and improve generalization for AI-based deepfake detection systems.
## Label Mappings
- **ID to Label**:
`{0: 'Fake', 1: 'Real'}`
- **Label to ID**:
`{'Fake': 0, 'Real': 1}`
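
The mappings above translate directly into the dictionaries a classification pipeline would use. The sketch below is illustrative (the helper name is not part of the dataset), showing how the ID-to-label table can be inverted and used to decode model predictions:

```python
# Label mappings for Deepfake-vs-Real-60K, as given in the dataset card.
id2label = {0: "Fake", 1: "Real"}

# Invert the mapping to get label-to-ID, matching the card's second table.
label2id = {name: idx for idx, name in id2label.items()}

def decode_prediction(pred_id: int) -> str:
    """Map a classifier's integer output back to its class name (illustrative helper)."""
    return id2label[pred_id]
```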
## Dataset Composition
The Deepfake-vs-Real-60K dataset is composed of modular subsets derived from:
- `Deepfakes-QA-Patch1`
- `Deepfakes-QA-Patch2`
These curated subsets ensure high diversity and quality, allowing models trained on this dataset to perform effectively across varied real-world scenarios.
## Key Features
- ~30,000 **Deepfake** images (label `0`)
- ~30,000 **Real** images (label `1`)
- Designed for **image classification tasks**
- Supports **training, evaluation,** and **benchmarking** of deepfake detection models
- Ensures **balanced** class distribution and **high-quality samples**
## Citation
If you use this dataset in your research or project, please cite it as follows:
```bibtex
@misc{prithiv_sakthi_2025,
author = { Prithiv Sakthi },
title = { Deepfake-vs-Real-60K (Revision 1c14d74) },
year = 2025,
url = { https://huggingface.co/datasets/prithivMLmods/Deepfake-vs-Real-60K },
doi = { 10.57967/hf/5313 },
publisher = { Hugging Face }
}
```
## License
This dataset is licensed under the **Apache License 2.0**.
For more details, see the [license](https://www.apache.org/licenses/LICENSE-2.0).
## Dataset Page
Explore and download the dataset here:
[https://huggingface.co/datasets/prithivMLmods/Deepfake-vs-Real-60K](https://huggingface.co/datasets/prithivMLmods/Deepfake-vs-Real-60K) |
chiyuanhsiao/text_L2-regular-TTS_spoken-web-questions | chiyuanhsiao | 2025-05-01T07:42:05Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T07:42:01Z | null | ---
dataset_info:
features:
- name: url
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: my_prediction_text
dtype: string
splits:
- name: test
num_bytes: 38722260
num_examples: 2032
download_size: 4334583
dataset_size: 38722260
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
ismielabir/Quantum_Gate_Performance_Evaluation | ismielabir | 2025-05-01T07:31:27Z | 0 | 0 | [
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"noisy",
"error_rate",
"quantum_gate",
"QML",
"ML",
"circuit_optimization"
] | [] | 2025-05-01T07:21:39Z | null | ---
license: cc-by-4.0
language:
- en
tags:
- noisy
- error_rate
- quantum_gate
- QML
- ML
- circuit_optimization
size_categories:
- 10K<n<100K
---
# 🧪 Quantum Gate Performance Dataset
### 📘 Title:
**Comprehensive Quantum Gate Performance Analysis: A Comparative Study of Noise and No-Noise Effects**
### 📂 Dataset Description:
This repository contains benchmarking results for 13 quantum gates (e.g., H, CNOT, Toffoli) tested under noisy and noise-free conditions, based on 1000 simulation runs per gate configuration. In total, the dataset contains 26,000 rows and 13 columns.
📊 Features include:
- Gate Type
- Execution Time
- Error Rate
- Fidelity
- Energy Consumption
- Quantum Volume
- Noise Model used
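
A typical first analysis is to compare average fidelity per gate under noisy versus noise-free conditions. The sketch below uses only the standard library and a few hand-written rows that mimic the schema; the column names and values are assumptions based on the feature list above, not taken from the actual CSV:

```python
from collections import defaultdict
from statistics import mean

# Illustrative rows mimicking the dataset schema; the real CSV has 26,000 rows.
# Column names here are assumptions based on the features listed above.
rows = [
    {"gate": "H", "noise_model": "none", "fidelity": 0.999},
    {"gate": "H", "noise_model": "depolarizing", "fidelity": 0.951},
    {"gate": "CNOT", "noise_model": "none", "fidelity": 0.997},
    {"gate": "CNOT", "noise_model": "depolarizing", "fidelity": 0.913},
]

# Group fidelity readings by (gate, noise model) and average them,
# as one might when quantifying the noise penalty per gate.
grouped = defaultdict(list)
for row in rows:
    grouped[(row["gate"], row["noise_model"])].append(row["fidelity"])

avg_fidelity = {key: mean(vals) for key, vals in grouped.items()}
```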
### 🔍 Use Cases:
- Quantum Machine Learning (QML)
- Circuit optimization
- Noise modeling
- Educational demos
### 🔗 DOI and Original Source:
Originally published on Mendeley Data:
[https://doi.org/10.17632/kf5mbvft5t.1](https://doi.org/10.17632/kf5mbvft5t.1)
---
### 📄 License:
Released under [Creative Commons Attribution 4.0 (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) |
HungVu2003/opt-350m_beta_0.0_alpha_0.6_num-company_3_dataset_1_for_gen_9 | HungVu2003 | 2025-05-01T07:23:36Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T07:23:35Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 1285822
num_examples: 12500
download_size: 726926
dataset_size: 1285822
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Hkang/summarize_sft-test_lm-EleutherAI_pythia-1b_seed-42_numex-250_lr3e8_9K-BON_32 | Hkang | 2025-05-01T07:10:57Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T07:10:56Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: subreddit
dtype: string
- name: title
dtype: string
- name: post
dtype: string
- name: summary
dtype: string
- name: query_input_ids
sequence: int64
- name: query_attention_mask
sequence: int64
- name: query
dtype: string
- name: reference_response
dtype: string
- name: reference_response_input_ids
sequence: int64
- name: reference_response_attention_mask
sequence: int64
- name: reference_response_token_len
dtype: int64
- name: query_reference_response
dtype: string
- name: query_reference_response_input_ids
sequence: int64
- name: query_reference_response_attention_mask
sequence: int64
- name: query_reference_response_token_response_label
sequence: int64
- name: query_reference_response_token_len
dtype: int64
- name: model_response
dtype: string
splits:
- name: test
num_bytes: 6852996
num_examples: 250
download_size: 1150757
dataset_size: 6852996
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
adlbh/HAMChildRaw | adlbh | 2025-05-01T07:07:51Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T07:07:50Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: image_id
dtype: string
- name: lesion_id
dtype: string
- name: dx
dtype: string
- name: dx_type
dtype: string
- name: age
dtype: float64
- name: sex
dtype: string
- name: localization
dtype: string
splits:
- name: train
num_bytes: 7872820.0
num_examples: 326
download_size: 7692879
dataset_size: 7872820.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
cchoi1/kodcode-complete_1000_qwen7b_att_iter0_att10_sol5_dedup | cchoi1 | 2025-05-01T06:57:49Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T06:57:47Z | null | ---
dataset_info:
features:
- name: mutation_id
dtype: int64
- name: task_id
dtype: string
- name: mutator_prompt
dtype: string
- name: solver_prompt
dtype: string
- name: response
dtype: string
- name: mutation_explanation
dtype: string
- name: mutation_info
dtype: string
- name: mutator_score
dtype: float64
- name: solution_scores
dtype: string
- name: solutions
dtype: string
- name: solutions_explanation
dtype: string
- name: solutions_info
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 23018524
num_examples: 1921
download_size: 4825963
dataset_size: 23018524
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
konwoo/test-llp-3 | konwoo | 2025-05-01T06:52:29Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T06:52:10Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: p_log_probs
dtype: float16
- name: q_log_probs
dtype: float16
splits:
- name: train
num_bytes: 8578979
num_examples: 1000
download_size: 5589025
dataset_size: 8578979
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ssktora/scifact-train-bm25-pyserini | ssktora | 2025-05-01T06:14:00Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T06:13:46Z | null | ---
dataset_info:
features:
- name: query_id
dtype: string
- name: query
dtype: string
- name: positive_passages
list:
- name: docid
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: negative_passages
list:
- name: docid
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 43181027
num_examples: 809
download_size: 20844644
dataset_size: 43181027
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
konwoo/test-llp-2 | konwoo | 2025-05-01T06:08:08Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T06:07:55Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: p_log_probs
dtype: float16
- name: q_log_probs
dtype: float16
splits:
- name: train
num_bytes: 8578979
num_examples: 1000
download_size: 5589025
dataset_size: 8578979
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nguyentn1410/Financial_Reasoning | nguyentn1410 | 2025-05-01T05:50:38Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T05:50:31Z | null | ---
dataset_info:
features:
- name: Contexs
dtype: string
- name: Questions
dtype: string
- name: Response
dtype: string
- name: Complex_CoT
dtype: string
splits:
- name: train
num_bytes: 31273655
num_examples: 5499
download_size: 13059522
dataset_size: 31273655
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kwangchaeko/eval_act_koch_test_2_100000 | kwangchaeko | 2025-05-01T05:27:02Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-05-01T05:26:46Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "koch",
"total_episodes": 3,
"total_frames": 2401,
"total_tasks": 1,
"total_videos": 3,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 15,
"splits": {
"train": "0:3"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
4
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
4
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 15.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
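
The `data_path` and `video_path` entries in `info.json` are Python format-string templates. A minimal sketch of resolving one of them for a given episode (episode and chunk numbers chosen arbitrarily for illustration):

```python
# Template taken verbatim from meta/info.json above.
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"

# Resolve the parquet path for episode 1234 stored in chunk 1.
path = data_path.format(episode_chunk=1, episode_index=1234)
# -> "data/chunk-001/episode_001234.parquet"
print(path)
```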
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
kothasuhas/multi-gold-37M-e1-N1.50M-mix8-iter8 | kothasuhas | 2025-05-01T05:24:57Z | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T05:23:32Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3220187982
num_examples: 1500000
- name: validation
num_bytes: 8574979
num_examples: 1000
download_size: 2196587240
dataset_size: 3228762961
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
test-gen/mbpp_mbpp-dagger-easy-qwen-coder-0.5b-instruct-from-sft_t0.0_n1_generated_tests | test-gen | 2025-05-01T04:50:47Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T04:50:46Z | null | ---
dataset_info:
features:
- name: task_id
dtype: int32
- name: text
dtype: string
- name: code
dtype: string
- name: test_list
sequence: string
- name: test_setup_code
dtype: string
- name: challenge_test_list
sequence: string
- name: verification_info
struct:
- name: language
dtype: string
- name: test_cases
sequence: string
splits:
- name: test
num_bytes: 304205
num_examples: 500
download_size: 134102
dataset_size: 304205
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
test-gen/mbpp_mbpp-dagger-qwen-coder-0.5b-instruct-from-sft_t0.0_n1_generated_tests | test-gen | 2025-05-01T04:48:56Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T04:48:55Z | null | ---
dataset_info:
features:
- name: task_id
dtype: int32
- name: text
dtype: string
- name: code
dtype: string
- name: test_list
sequence: string
- name: test_setup_code
dtype: string
- name: challenge_test_list
sequence: string
- name: verification_info
struct:
- name: language
dtype: string
- name: test_cases
sequence: string
splits:
- name: test
num_bytes: 297987
num_examples: 500
download_size: 132261
dataset_size: 297987
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
test-gen/mbpp_mbpp-qwen-coder-0.5b-instruct-from-sft_t0.0_n1_generated_tests | test-gen | 2025-05-01T04:47:06Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T04:47:01Z | null | ---
dataset_info:
features:
- name: task_id
dtype: int32
- name: text
dtype: string
- name: code
dtype: string
- name: test_list
sequence: string
- name: test_setup_code
dtype: string
- name: challenge_test_list
sequence: string
- name: verification_info
struct:
- name: language
dtype: string
- name: test_cases
sequence: string
splits:
- name: test
num_bytes: 312117
num_examples: 500
download_size: 138582
dataset_size: 312117
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
immindich/qwen-7b-r1-corrupted-answers | immindich | 2025-05-01T04:38:46Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T04:38:45Z | null | ---
dataset_info:
features:
- name: sample_idx
dtype: int64
- name: example_idx
dtype: int64
- name: corruption_idx
dtype: int64
- name: tag
dtype: string
- name: answers_clean
sequence: string
- name: answers_corrupted
sequence: string
splits:
- name: train
num_bytes: 15290904
num_examples: 440
download_size: 5633649
dataset_size: 15290904
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
brandonyang/stackthree_d0 | brandonyang | 2025-05-01T04:19:35Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-05-01T04:18:29Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": null,
"total_episodes": 1000,
"total_frames": 254810,
"total_tasks": 1,
"total_videos": 0,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 10,
"splits": {
"train": "0:1000"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.images.agentview_image": {
"dtype": "image",
"shape": [
84,
84,
3
],
"names": [
"height",
"width",
"channels"
]
},
"observation.images.robot0_eye_in_hand_image": {
"dtype": "image",
"shape": [
84,
84,
3
],
"names": [
"height",
"width",
"channels"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
9
],
"names": {
"motors": [
"x",
"y",
"z",
"rx",
"ry",
"rz",
"rw",
"gripper_1, gripper_2"
]
}
},
"action": {
"dtype": "float32",
"shape": [
7
],
"names": {
"motors": [
"x",
"y",
"z",
"roll",
"pitch",
"yaw",
"gripper"
]
}
},
"next.reward": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"next.done": {
"dtype": "bool",
"shape": [
1
],
"names": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
brandonyang/stack_d1 | brandonyang | 2025-05-01T04:18:41Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-05-01T04:17:38Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": null,
"total_episodes": 1000,
"total_frames": 108233,
"total_tasks": 1,
"total_videos": 0,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 10,
"splits": {
"train": "0:1000"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.images.agentview_image": {
"dtype": "image",
"shape": [
84,
84,
3
],
"names": [
"height",
"width",
"channels"
]
},
"observation.images.robot0_eye_in_hand_image": {
"dtype": "image",
"shape": [
84,
84,
3
],
"names": [
"height",
"width",
"channels"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
9
],
"names": {
"motors": [
"x",
"y",
"z",
"rx",
"ry",
"rz",
"rw",
"gripper_1, gripper_2"
]
}
},
"action": {
"dtype": "float32",
"shape": [
7
],
"names": {
"motors": [
"x",
"y",
"z",
"roll",
"pitch",
"yaw",
"gripper"
]
}
},
"next.reward": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"next.done": {
"dtype": "bool",
"shape": [
1
],
"names": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
brandonyang/square_d1 | brandonyang | 2025-05-01T04:17:52Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-05-01T04:16:50Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": null,
"total_episodes": 1000,
"total_frames": 152400,
"total_tasks": 1,
"total_videos": 0,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 10,
"splits": {
"train": "0:1000"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.images.agentview_image": {
"dtype": "image",
"shape": [
84,
84,
3
],
"names": [
"height",
"width",
"channels"
]
},
"observation.images.robot0_eye_in_hand_image": {
"dtype": "image",
"shape": [
84,
84,
3
],
"names": [
"height",
"width",
"channels"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
9
],
"names": {
"motors": [
"x",
"y",
"z",
"rx",
"ry",
"rz",
"rw",
"gripper_1, gripper_2"
]
}
},
"action": {
"dtype": "float32",
"shape": [
7
],
"names": {
"motors": [
"x",
"y",
"z",
"roll",
"pitch",
"yaw",
"gripper"
]
}
},
"next.reward": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"next.done": {
"dtype": "bool",
"shape": [
1
],
"names": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
zktmp/gen-t1n32-numina_train-0to100k-grpo_pt | zktmp | 2025-05-01T04:14:03Z | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T04:09:59Z | null | ---
dataset_info:
features:
- name: data_source
dtype: string
- name: input_text
dtype: string
- name: input_token_ids
sequence: int64
- name: input_len
dtype: int64
- name: input_id
dtype: int64
- name: gt_answer
dtype: string
- name: output_text
dtype: string
- name: output_token_ids
sequence: int64
- name: output_len
dtype: int64
- name: output_id
dtype: int64
- name: answer
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 25596508196
num_examples: 3276800
download_size: 3907199545
dataset_size: 25596508196
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
GitBag/gsm8k_size_1.5_eval | GitBag | 2025-05-01T04:11:16Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T04:11:11Z | null | ---
dataset_info:
features:
- name: data_source
dtype: string
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ability
dtype: string
- name: reward_model
struct:
- name: ground_truth
dtype: string
- name: style
dtype: string
- name: extra_info
struct:
- name: answer
dtype: string
- name: index
dtype: int64
- name: question
dtype: string
- name: split
dtype: string
- name: response_0
dtype: string
- name: response_1
dtype: string
- name: response_2
dtype: string
- name: response_3
dtype: string
- name: response_4
dtype: string
- name: response_5
dtype: string
- name: response_6
dtype: string
- name: response_7
dtype: string
- name: response_8
dtype: string
- name: response_9
dtype: string
- name: response_10
dtype: string
- name: response_11
dtype: string
- name: response_12
dtype: string
- name: response_13
dtype: string
- name: response_14
dtype: string
- name: response_15
dtype: string
- name: response_16
dtype: string
- name: response_17
dtype: string
- name: response_18
dtype: string
- name: response_19
dtype: string
- name: response_20
dtype: string
- name: response_21
dtype: string
- name: response_22
dtype: string
- name: response_23
dtype: string
- name: response_24
dtype: string
- name: response_25
dtype: string
- name: response_26
dtype: string
- name: response_27
dtype: string
- name: response_28
dtype: string
- name: response_29
dtype: string
- name: response_30
dtype: string
- name: response_31
dtype: string
- name: eval_0
dtype: float64
- name: eval_1
dtype: float64
- name: eval_2
dtype: float64
- name: eval_3
dtype: float64
- name: eval_4
dtype: float64
- name: eval_5
dtype: float64
- name: eval_6
dtype: float64
- name: eval_7
dtype: float64
- name: eval_8
dtype: float64
- name: eval_9
dtype: float64
- name: eval_10
dtype: float64
- name: eval_11
dtype: float64
- name: eval_12
dtype: float64
- name: eval_13
dtype: float64
- name: eval_14
dtype: float64
- name: eval_15
dtype: float64
- name: eval_16
dtype: float64
- name: eval_17
dtype: float64
- name: eval_18
dtype: float64
- name: eval_19
dtype: float64
- name: eval_20
dtype: float64
- name: eval_21
dtype: float64
- name: eval_22
dtype: float64
- name: eval_23
dtype: float64
- name: eval_24
dtype: float64
- name: eval_25
dtype: float64
- name: eval_26
dtype: float64
- name: eval_27
dtype: float64
- name: eval_28
dtype: float64
- name: eval_29
dtype: float64
- name: eval_30
dtype: float64
- name: eval_31
dtype: float64
splits:
- name: train
num_bytes: 192590030
num_examples: 7473
download_size: 91763376
dataset_size: 192590030
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
test-gen/mbpp_Qwen2.5-Coder-0.5B-Instruct_t1.0_n8_generated_tests_updated | test-gen | 2025-05-01T04:05:39Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T04:04:38Z | null | ---
dataset_info:
features:
- name: task_id
dtype: int32
- name: text
dtype: string
- name: code
dtype: string
- name: test_list
sequence: string
- name: test_setup_code
dtype: string
- name: challenge_test_list
sequence: string
- name: verification_info
struct:
- name: language
dtype: string
- name: test_cases
sequence: string
- name: new_verification_info
struct:
- name: language
dtype: string
- name: test_cases
sequence: string
splits:
- name: validation
num_bytes: 297617
num_examples: 90
- name: train
num_bytes: 2377445
num_examples: 374
download_size: 1022695
dataset_size: 2675062
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: train
path: data/train-*
---
|
brandonyang/square_d0 | brandonyang | 2025-05-01T04:05:16Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-05-01T03:53:52Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": null,
"total_episodes": 1000,
"total_frames": 153477,
"total_tasks": 1,
"total_videos": 0,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 10,
"splits": {
"train": "0:1000"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.images.agentview_image": {
"dtype": "image",
"shape": [
84,
84,
3
],
"names": [
"height",
"width",
"channels"
]
},
"observation.images.robot0_eye_in_hand_image": {
"dtype": "image",
"shape": [
84,
84,
3
],
"names": [
"height",
"width",
"channels"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
9
],
"names": {
"motors": [
"x",
"y",
"z",
"rx",
"ry",
"rz",
"rw",
"gripper_1, gripper_2"
]
}
},
"action": {
"dtype": "float32",
"shape": [
7
],
"names": {
"motors": [
"x",
"y",
"z",
"roll",
"pitch",
"yaw",
"gripper"
]
}
},
"next.reward": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"next.done": {
"dtype": "bool",
"shape": [
1
],
"names": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
HungVu2003/opt-350m_beta_0.5_alpha_0.4_num-company_3_dataset_1_for_gen_15 | HungVu2003 | 2025-05-01T03:57:37Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T03:57:36Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2783065
num_examples: 12498
download_size: 1514625
dataset_size: 2783065
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kwangchaeko/koch_test_3 | kwangchaeko | 2025-05-01T03:55:32Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"koch",
"tutorial"
] | [
"robotics"
] | 2025-05-01T03:54:42Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- koch
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "koch",
"total_episodes": 10,
"total_frames": 17543,
"total_tasks": 1,
"total_videos": 10,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:10"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
4
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
4
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
ma921/imdb-tokenized_noise20 | ma921 | 2025-05-01T03:49:26Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T03:49:10Z | null | ---
dataset_info:
features:
- name: pos_input_ids
sequence: int64
- name: neg_input_ids
sequence: int64
- name: pos_reward
dtype: float64
- name: neg_reward
dtype: float64
splits:
- name: train
num_bytes: 63292832
num_examples: 10000
download_size: 15123034
dataset_size: 63292832
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ma921/imdb-tokenized_noise10 | ma921 | 2025-05-01T03:48:35Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T03:48:26Z | null | ---
dataset_info:
features:
- name: pos_input_ids
sequence: int64
- name: neg_input_ids
sequence: int64
- name: pos_reward
dtype: float64
- name: neg_reward
dtype: float64
splits:
- name: train
num_bytes: 63292832
num_examples: 10000
download_size: 14723224
dataset_size: 63292832
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
test-gen/code_mbpp_qwen2.5-coder-0.5b_temp0.1_num8_tests_mbpp_mbpp-sft-qwen-coder-0.5b_t0.0_n1 | test-gen | 2025-05-01T03:36:28Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T03:36:26Z | null | ---
dataset_info:
features:
- name: task_id
dtype: int32
- name: text
dtype: string
- name: code
dtype: string
- name: test_list
sequence: string
- name: test_setup_code
dtype: string
- name: challenge_test_list
sequence: string
- name: generated_code
sequence: string
- name: gt_rewards
sequence: float64
- name: execution_rewards
sequence: float64
- name: rewards
sequence: float64
- name: verification_info
struct:
- name: language
dtype: string
- name: test_cases
sequence: string
splits:
- name: test
num_bytes: 5821868
num_examples: 500
download_size: 1113769
dataset_size: 5821868
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.6_num-company_3_dataset_0_for_gen_7 | HungVu2003 | 2025-05-01T03:35:09Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T03:35:06Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 5254018
num_examples: 12500
download_size: 1793110
dataset_size: 5254018
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AmazonScience/migration-bench-java-demo | AmazonScience | 2025-05-01T03:32:40Z | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2504.08703",
"region:us",
"coding",
"code-migration",
"java",
"amazon",
"amazon-science",
"aws"
] | [] | 2025-04-30T22:00:50Z | null | ---
license: apache-2.0
dataset_info:
features:
- name: repo
dtype: string
- name: base_commit
dtype: string
- name: num_java_files
dtype: int32
- name: num_loc
dtype: int32
- name: num_pom_xml
dtype: int32
- name: num_src_test_java_files
dtype: int32
- name: num_test_cases
dtype: int32
- name: license
dtype: string
splits:
- name: test
num_examples: 3
programming_languages: [java]
tags:
- coding
- code-migration
- java
- amazon
- amazon-science
- aws
---
# MIGRATION-BENCH
<table>
<tr>
<td style="padding: 0;">
<a href="https://huggingface.co/collections/AmazonScience/migrationbench-68125452fc21a4564b92b6c3">
<img src="https://img.shields.io/badge/-MIGRATION--BENCH-4d5eff?style=for-the-badge&logo=huggingface&logoColor=ffffff&labelColor" alt="MIGRATION-BENCH">
</a>
</td>
<td style="padding: 0;">
<a href="https://github.com/amazon-science/SWE-PolyBench">
<img src="https://img.shields.io/badge/Code-000000?style=for-the-badge&logo=github&logoColor=white" alt="Code">
</a>
</td>
<td style="padding: 0;">
<a href="https://arxiv.org/abs/2504.08703">
<img src="https://img.shields.io/badge/arXiv-2504.08703-b31b1b.svg?style=for-the-badge" alt="arXiv">
</a>
</td>
<td style="padding: 0; padding-left: 10px; vertical-align: middle;">
<a href="https://huggingface.co/datasets/AmazonScience/migration-bench-java-full">
<img src="https://img.shields.io/badge/-java--full-8a98ff?style=flat&logo=huggingface&logoColor=ffffff&labelColor" alt="java-full">
</a>
</td>
<td style="padding: 0; vertical-align: middle;">
<a href="https://huggingface.co/datasets/AmazonScience/migration-bench-java-selected">
<img src="https://img.shields.io/badge/-java--selected-8a98ff?style=flat&logo=huggingface&logoColor=ffffff&labelColor" alt="java-selected">
</a>
</td>
<td style="padding: 0; vertical-align: middle;">
<a href="https://huggingface.co/datasets/AmazonScience/migration-bench-java-utg">
<img src="https://img.shields.io/badge/-java--utg-8a98ff?style=flat&logo=huggingface&logoColor=ffffff&labelColor" alt="java-utg">
</a>
</td>
</tr>
</table>
## 1. 📖 Overview
[MIGRATION-BENCH](https://github.com/amazon-science/SWE-PolyBench)
is a code migration benchmark dataset at the **repository** level,
spanning multiple programming languages.
- The current (initial) release includes `java 8` repositories with the `maven` build system, as of May 2025.
## 2. Datasets
There are three datasets in [🤗 MIGRATION-BENCH](https://huggingface.co/collections/AmazonScience/migrationbench-68125452fc21a4564b92b6c3):
| Index | Dataset | Size | Notes |
|-------|-----------------------------------------------|-------|-----------------------------------------------------------------------------------------------------|
| 1 | [🤗 `AmazonScience/migration-bench-java-full`](https://huggingface.co/datasets/AmazonScience/migration-bench-java-full) | 5,102 | Each repo has a test directory or at least one test case |
| 2 | [🤗 `AmazonScience/migration-bench-java-selected`](https://huggingface.co/datasets/AmazonScience/migration-bench-java-selected) | 300 | A **subset** of [🤗 `migration-bench-java-full`](https://huggingface.co/datasets/AmazonScience/migration-bench-java-full) |
| 3 | [🤗 `AmazonScience/migration-bench-java-utg`](https://huggingface.co/datasets/AmazonScience/migration-bench-java-utg) | 4,814 | The unit test generation (utg) dataset, **disjoint** with [🤗 `migration-bench-java-full`](https://huggingface.co/datasets/AmazonScience/migration-bench-java-full)|
All repositories are licensed under `MIT` or `Apache-2.0`.
## 3. Metadata
Metadata is provided in the `csv` file for each dataset:
1. `repo (str)`: The original repo URL without the `https://github.com/` prefix
1. `base_commit (str)`: Base commit id
- At this commit with `java 8` and `maven`, the repository is able to (1) compile and (2) pass existing unit tests and integration tests if any
- It is the starting point for code migration from `java 8`
1. `num_java_files (int)`: Number of `*.java` files in the repository at `base_commit`, similarly for all other `num_*` columns
1. `num_loc (int)`: Lines of code for the repository
1. `num_pom_xml (int)`: Number of modules (`pom.xml` files) in the repository
1. `num_src_test_java_files (int)`: Number of `*.java` files in the dedicated `src/test/` directory
1. `num_test_cases (int)`: Number of test cases, based on running the `mvn -f test .` command in the root directory
- Non-negative values indicate the number of test cases was parsed correctly from the output
- Negative values mean the output could not be parsed: the `[INFO] Results:` (`-2`) or `[INFO] Tests run:` (`-1`) regex is missing
1. `license (str)`: The license of the repository, either `MIT` or `Apache-2.0` for the whole dataset
### 3.1 Parsing Test Cases
Here is a sample `mvn -f test .` command output that yields a valid `num_test_cases`:
```
...
[INFO] Results:
[INFO]
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
...
```
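The sentinel convention above can be sketched in a few lines of Python. This is an illustrative parser under the stated conventions, not the benchmark's actual tooling; the helper name `count_test_cases` is an assumption.

```python
import re

def count_test_cases(mvn_output: str) -> int:
    """Parse Maven output into a test-case count, using the sentinel
    values described above: -2 if the `[INFO] Results:` marker is
    missing, -1 if the `[INFO] Tests run:` line cannot be matched."""
    if "[INFO] Results:" not in mvn_output:
        return -2
    match = re.search(r"\[INFO\] Tests run: (\d+)", mvn_output)
    if match is None:
        return -1
    return int(match.group(1))

sample = """\
[INFO] Results:
[INFO]
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
"""
print(count_test_cases(sample))  # → 1
```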
## 📚 Citation
|
AdversarialRLHF/rloo_pythia410m_tldr6.9b_rm410mdata_mergedsft0.3_prefix_nokl_checkpoint-26_eval-dataset | AdversarialRLHF | 2025-05-01T03:28:08Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T03:28:03Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: subreddit
dtype: string
- name: title
dtype: string
- name: post
dtype: string
- name: summary
dtype: string
- name: query_token
sequence: int64
- name: query
dtype: string
- name: reference_response
dtype: string
- name: reference_response_token
sequence: int64
- name: reference_response_token_len
dtype: int64
- name: query_reference_response
dtype: string
- name: query_reference_response_token
sequence: int64
- name: query_reference_response_token_response_label
sequence: int64
- name: query_reference_response_token_len
dtype: int64
- name: generations_checkpoint-26
dtype: string
splits:
- name: validation
num_bytes: 127995753
num_examples: 6447
download_size: 33757153
dataset_size: 127995753
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
|
relai-ai/legal-scenarios-SCOTUS-2024-decisions | relai-ai | 2025-05-01T03:22:41Z | 0 | 0 | [
"task_categories:question-answering",
"language:en",
"license:cc-by-4.0",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"reasoning-datasets-competition"
] | [
"question-answering"
] | 2025-05-01T03:12:28Z | null | ---
license: cc-by-4.0
task_categories:
- question-answering
language:
- en
tags:
- reasoning-datasets-competition
---
# Purpose and scope
This dataset evaluates an LLM's reasoning ability in a legal context. Each question presents a realistic scenario involving competing legal principles,
and asks the LLM to produce a correct legal resolution with sufficient justification based on precedent. The dataset was created using slip opinions of
the US Supreme Court from the 2024 term, taken from the [Supreme Court website](https://www.supremecourt.gov/opinions/slipopinion/24).
# Dataset Creation Method
The benchmark was created using RELAI’s data agent. For more details on the methodology and tools used, please visit [relai.ai](https://relai.ai).
# Example Uses
The benchmark can be used to evaluate the performance of large language models or incorporated into their post-training processes.
# Limitations and Biases
The benchmark has been created using RELAI’s data agent. Since samples are grounded in the underlying documents, any biases present in those source
documents are inherently reflected in the benchmark.
# License
License: CC BY 4.0
This dataset is licensed under the Creative Commons Attribution 4.0 International License.
You are free to share and adapt the material for any purpose, even commercially,
provided appropriate credit is given.
Attribution: © RELAI Inc.
License details: [https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/) |
SWE-bench/SWE-smith-trajectories | SWE-bench | 2025-05-01T03:18:28Z | 0 | 2 | [
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"code",
"agents",
"software-engineering"
] | [] | 2025-04-29T20:43:29Z | 2 | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 479188645
num_examples: 5016
download_size: 146906769
dataset_size: 479188645
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: mit
language:
- en
tags:
- code
- agents
- software-engineering
size_categories:
- 1K<n<10K
---
<div align="center">
<a href="https://swesmith.com">
<img src="https://avatars.githubusercontent.com/u/189315905?s=200&v=4" alt="Logo" width="200">
<h1 align="center">SWE-smith Trajectories</h1>
</a>
</div>
This dataset contains the 5016 trajectories we fine-tuned Qwen 2.5 Coder Instruct on, leading to
[SWE-agent-LM-32B](https://huggingface.co/SWE-bench/SWE-agent-LM-32B), a coding LM agent that
achieves 40.2% on SWE-bench Verified (no verifiers or multiple rollouts, just 1 attempt per instance).
Trajectories were generated by running SWE-agent + Claude 3.7 Sonnet on task instances from
the SWE-smith [dataset](https://huggingface.co/datasets/SWE-bench/SWE-smith). |
HungVu2003/opt-350m_beta_0.0_alpha_0.6_num-company_3_dataset_0_for_gen_8 | HungVu2003 | 2025-05-01T03:18:20Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T03:18:18Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 7071455
num_examples: 12500
download_size: 1873006
dataset_size: 7071455
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ToniDO/test_latexformula | ToniDO | 2025-05-01T03:16:43Z | 0 | 0 | [
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T02:56:09Z | null | ---
license: mit
dataset_info:
features:
- name: latex
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 10558701.875
num_examples: 1105
download_size: 10553568
dataset_size: 10558701.875
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kwangchaeko/eval_act_koch_test_080000 | kwangchaeko | 2025-05-01T03:12:18Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-05-01T03:12:09Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "koch",
"total_episodes": 1,
"total_frames": 1144,
"total_tasks": 1,
"total_videos": 1,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
4
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
4
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
YJ-Seo/test_ur5_v2 | YJ-Seo | 2025-05-01T03:11:59Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-01T03:11:45Z | null | ---
dataset_info:
features:
- name: tcp_pose_rotvec
sequence: float32
length: 7
- name: frame_index
dtype: int64
- name: episode_index
dtype: int64
- name: index
dtype: int64
- name: timestamp
dtype: float32
- name: next.done
dtype: bool
- name: next.success
dtype: bool
- name: action
sequence: float32
length: 7
- name: observation.images.base.rgb
dtype: video_frame
- name: observation.images.base.depth
dtype: video_frame
- name: observation.images.test1.rgb
dtype: video_frame
- name: observation.images.test1.depth
dtype: video_frame
- name: observation.images.test2.rgb
dtype: video_frame
- name: observation.images.test2.depth
dtype: video_frame
- name: observation.state
sequence: float32
length: 7
splits:
- name: train
num_bytes: 213366
num_examples: 437
download_size: 50504
dataset_size: 213366
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AdversarialRLHF/rloo_pythia410m_tldr6.9b_rm410mdata_mergedsft0.3_prefix_nokl_checkpoint-255_eval-dataset | AdversarialRLHF | 2025-05-01T03:10:30Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T03:10:27Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: subreddit
dtype: string
- name: title
dtype: string
- name: post
dtype: string
- name: summary
dtype: string
- name: query_token
sequence: int64
- name: query
dtype: string
- name: reference_response
dtype: string
- name: reference_response_token
sequence: int64
- name: reference_response_token_len
dtype: int64
- name: query_reference_response
dtype: string
- name: query_reference_response_token
sequence: int64
- name: query_reference_response_token_response_label
sequence: int64
- name: query_reference_response_token_len
dtype: int64
- name: generations_checkpoint-255
dtype: string
splits:
- name: validation
num_bytes: 128494424
num_examples: 6447
download_size: 33772205
dataset_size: 128494424
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
|
user074/concat_cleaned_gsm8k_math_5 | user074 | 2025-05-01T03:07:21Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T03:07:17Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 84456755
num_examples: 17447
download_size: 22457109
dataset_size: 84456755
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ying15/s1K-1.1-cod-tokenized-v3 | ying15 | 2025-05-01T02:49:55Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T02:49:47Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 21184457
num_examples: 1000
download_size: 9475768
dataset_size: 21184457
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SWE-bench/SWE-smith | SWE-bench | 2025-05-01T02:44:18Z | 0 | 2 | [
"task_categories:text-generation",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"code",
"agents",
"software-engineering"
] | [
"text-generation"
] | 2025-04-29T20:16:33Z | 2 | ---
dataset_info:
features:
- name: instance_id
dtype: string
- name: repo
dtype: string
- name: patch
dtype: string
- name: FAIL_TO_PASS
sequence: string
- name: PASS_TO_PASS
sequence: string
- name: created_at
dtype: string
- name: image_name
dtype: string
- name: base_commit
dtype: string
- name: problem_statement
dtype: string
splits:
- name: train
num_bytes: 5040247353
num_examples: 50137
download_size: 253578293
dataset_size: 5040247353
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: mit
task_categories:
- text-generation
language:
- en
tags:
- code
- agents
- software-engineering
size_categories:
- 10K<n<100K
---
<div align="center">
<a href="https://swesmith.com/">
<img src="https://avatars.githubusercontent.com/u/189315905?s=200&v=4" alt="Logo" width="200">
<h1 align="center">SWE-smith Dataset</h1>
</a>
</div>
The SWE-smith Dataset is a training dataset of 50137 task instances from 128 GitHub repositories, collected using the SWE-smith toolkit.
It is the largest dataset to date for training software engineering agents.
All SWE-smith task instances come with an executable environment.
To learn more about how to use this dataset to train Language Models for Software Engineering, please refer to the [documentation](https://swesmith.com/docs). |
yzha/Nemotron_Nano_sharegpt | yzha | 2025-05-01T02:12:06Z | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T01:48:06Z | null | ---
dataset_info:
features:
- name: category
dtype: string
- name: license
dtype: string
- name: reasoning
dtype: string
- name: generator
dtype: string
- name: used_in_training
dtype: string
- name: version
dtype: string
- name: system_prompt
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 71363483444
num_examples: 4417292
download_size: 30801486649
dataset_size: 71363483444
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.4_num-company_3_dataset_2_for_gen_14 | HungVu2003 | 2025-05-01T02:11:25Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T02:11:24Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 3676384
num_examples: 12498
download_size: 1115239
dataset_size: 3676384
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_llm2_gen1_W_doc1000_synt64_lr1e-04_acm_SYNLAST | dgambettaphd | 2025-05-01T02:10:15Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T02:10:11Z | null | ---
dataset_info:
features:
- name: id_doc
dtype: int64
- name: text
dtype: string
- name: dataset
dtype: string
- name: gen
dtype: int64
- name: synt
dtype: int64
- name: MPP
dtype: float64
splits:
- name: train
num_bytes: 9637393
num_examples: 17000
download_size: 5763719
dataset_size: 9637393
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Asap7772/insight_evalsft_vllm | Asap7772 | 2025-05-01T02:02:05Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T02:01:54Z | null | ---
dataset_info:
features:
- name: joint_prompt
dtype: string
- name: paper1_prompt
dtype: string
- name: paper2_prompt
dtype: string
- name: no_context_prompt
dtype: string
- name: abstracts
sequence: string
- name: forum_id_1
dtype: string
- name: forum_id_2
dtype: string
- name: pair_id
dtype: string
- name: query
dtype: string
- name: response
sequence: string
splits:
- name: train
num_bytes: 1938424
num_examples: 100
download_size: 802685
dataset_size: 1938424
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ashishpandian/clear_inference | ashishpandian | 2025-05-01T02:01:20Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T01:41:41Z | null | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: full_solution
dtype: string
- name: task1_block
dtype: string
- name: action
dtype: string
splits:
- name: test
num_bytes: 489872
num_examples: 40
download_size: 88583
dataset_size: 489872
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
kothasuhas/multi-gold-37M-e1-N1.50M-mix8-iter5-accum | kothasuhas | 2025-05-01T01:57:47Z | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T01:55:16Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5232359418
num_examples: 1500000
- name: validation
num_bytes: 8574979
num_examples: 1000
download_size: 2795555858
dataset_size: 5240934397
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
yzha/R1_distilled_brain_teasers | yzha | 2025-05-01T01:45:56Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T01:36:44Z | null | ---
dataset_info:
features:
- name: puzzle_id
dtype: string
- name: reconstruction
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: distrator1
dtype: string
- name: distrator2
dtype: string
- name: unsure
dtype: string
- name: DSR1_reasoning_content
dtype: string
- name: DSR1_content
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
- name: answerKey
dtype: string
- name: choices
sequence:
- name: label
dtype: string
- name: text
dtype: string
- name: original_question
dtype: string
splits:
- name: train
num_bytes: 41002904
num_examples: 3793
download_size: 18873757
dataset_size: 41002904
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yzha/Nemotron-Nano_Reasoning-V1 | yzha | 2025-05-01T01:21:47Z | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T00:44:51Z | null | ---
dataset_info:
features:
- name: category
dtype: string
- name: license
dtype: string
- name: reasoning
dtype: string
- name: generator
dtype: string
- name: used_in_training
dtype: string
- name: version
dtype: string
- name: system_prompt
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 92179131732
num_examples: 5863068
download_size: 39942450836
dataset_size: 92179131732
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
psyonp/ablation__drop_top20pct__num_tokens_response | psyonp | 2025-05-01T01:06:01Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T01:05:59Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: num_tokens_question
dtype: int64
- name: num_tokens_response
dtype: int64
- name: semantic_similarity
dtype: float64
- name: sentiment_question
dtype: float64
- name: sentiment_response
dtype: float64
- name: readability_question
dtype: float64
- name: readability_response
dtype: float64
- name: ttr_question
dtype: float64
- name: ttr_response
dtype: float64
- name: toxicity_question
dtype: float64
- name: toxicity_response
dtype: float64
- name: euclidean_distance
dtype: float64
- name: kl_divergence
dtype: float64
splits:
- name: train
num_bytes: 373917
num_examples: 427
download_size: 154301
dataset_size: 373917
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
psyonp/ablation__drop_top20pct__ttr_question | psyonp | 2025-05-01T01:05:57Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T01:05:55Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: num_tokens_question
dtype: int64
- name: num_tokens_response
dtype: int64
- name: semantic_similarity
dtype: float64
- name: sentiment_question
dtype: float64
- name: sentiment_response
dtype: float64
- name: readability_question
dtype: float64
- name: readability_response
dtype: float64
- name: ttr_question
dtype: float64
- name: ttr_response
dtype: float64
- name: toxicity_question
dtype: float64
- name: toxicity_response
dtype: float64
- name: euclidean_distance
dtype: float64
- name: kl_divergence
dtype: float64
splits:
- name: train
num_bytes: 416964
num_examples: 476
download_size: 170266
dataset_size: 416964
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
psyonp/ablation__drop_bottom20pct__ttr_response | psyonp | 2025-05-01T01:05:55Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T01:05:53Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: num_tokens_question
dtype: int64
- name: num_tokens_response
dtype: int64
- name: semantic_similarity
dtype: float64
- name: sentiment_question
dtype: float64
- name: sentiment_response
dtype: float64
- name: readability_question
dtype: float64
- name: readability_response
dtype: float64
- name: ttr_question
dtype: float64
- name: ttr_response
dtype: float64
- name: toxicity_question
dtype: float64
- name: toxicity_response
dtype: float64
- name: euclidean_distance
dtype: float64
- name: kl_divergence
dtype: float64
splits:
- name: train
num_bytes: 335114
num_examples: 382
download_size: 139023
dataset_size: 335114
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_llm2_gen0_W_doc1000_synt64_lr1e-04_acm_SYNLAST | dgambettaphd | 2025-05-01T00:49:22Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T00:49:13Z | null | ---
dataset_info:
features:
- name: id_doc
dtype: int64
- name: text
dtype: string
- name: dataset
dtype: string
- name: gen
dtype: int64
- name: synt
dtype: int64
- name: MPP
dtype: float64
splits:
- name: train
num_bytes: 9145932
num_examples: 16000
download_size: 5492110
dataset_size: 9145932
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.4_num-company_3_dataset_1_for_gen_14 | HungVu2003 | 2025-05-01T00:46:36Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T00:46:15Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2785813
num_examples: 12498
download_size: 1518615
dataset_size: 2785813
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ydeng9/OpenVLThinker_SFT_seed_iter2_new_filtered | ydeng9 | 2025-05-01T00:20:21Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T23:49:31Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: images
sequence: image
- name: instruction
dtype: string
- name: response
dtype: string
- name: image_url
dtype: string
splits:
- name: train
num_bytes: 75543428.75
num_examples: 2594
download_size: 61681942
dataset_size: 75543428.75
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mothnaZl/l-sr-Qwen2.5-7B-385b0.5-1155b-best_of_n-VLLM-Skywork-o1-Open-PRM-Qwen-2.5-7B-completions | mothnaZl | 2025-05-01T00:19:54Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T23:48:36Z | null | ---
dataset_info:
config_name: mothnaZl_minerva_math--T-0--top_p-1.0--n-1--seed-0--agg_strategy-last--num-shots-0--prompt_type-self-rewarding-qwen25-math-cot--merged--evals
features:
- name: n
dtype: int64
- name: acc_naive
dtype: float64
- name: acc_weighted
dtype: float64
- name: acc_maj
dtype: float64
- name: pass@n
dtype: float64
- name: div_avg
dtype: float64
- name: div_sum
dtype: float64
- name: div_mean
dtype: float64
- name: Unigrams
dtype: float64
- name: Bigrams
dtype: float64
- name: Trigrams
dtype: float64
- name: Fourgrams
dtype: float64
- name: pass_tag
sequence: 'null'
- name: BM25
dtype: int64
splits:
- name: train
num_bytes: 108
num_examples: 1
download_size: 6024
dataset_size: 108
configs:
- config_name: mothnaZl_minerva_math--T-0--top_p-1.0--n-1--seed-0--agg_strategy-last--num-shots-0--prompt_type-self-rewarding-qwen25-math-cot--merged--evals
data_files:
- split: train
path: mothnaZl_minerva_math--T-0--top_p-1.0--n-1--seed-0--agg_strategy-last--num-shots-0--prompt_type-self-rewarding-qwen25-math-cot--merged--evals/train-*
---
|
hjshah/bfcl | hjshah | 2025-05-01T00:16:02Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T00:15:58Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: multi_turn
dtype: bool
- name: functions
dtype: string
- name: missed_functions
dtype: string
- name: initial_config
dtype: string
- name: involved_classes
sequence: string
- name: turns
dtype: string
- name: language
dtype: string
- name: test_category
dtype: string
- name: subset
dtype: string
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 63846322
num_examples: 4441
download_size: 7639281
dataset_size: 63846322
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.6_num-company_3_dataset_1_for_gen_6 | HungVu2003 | 2025-04-30T23:39:51Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T23:39:50Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 3525861
num_examples: 12500
download_size: 1858580
dataset_size: 3525861
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
liweijiang/helpsteer3_v1 | liweijiang | 2025-04-30T23:27:00Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T23:26:45Z | null | ---
dataset_info:
features:
- name: domain
dtype: string
- name: language
dtype: string
- name: context
list:
- name: content
dtype: string
- name: role
dtype: string
- name: response1
dtype: string
- name: response2
dtype: string
- name: overall_preference
dtype: int64
- name: individual_preference
list:
- name: feedback1
dtype: string
- name: feedback2
dtype: string
- name: reasoning
dtype: string
- name: score
dtype: int64
splits:
- name: train
num_bytes: 167635699
num_examples: 17707
download_size: 86226431
dataset_size: 167635699
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.4_num-company_3_dataset_2_for_gen_13 | HungVu2003 | 2025-04-30T23:18:24Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T23:18:23Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 3263617
num_examples: 12498
download_size: 1073525
dataset_size: 3263617
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Aravindh25/test_4 | Aravindh25 | 2025-04-30T23:09:17Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-04-30T23:08:45Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "trossen_ai_solo",
"total_episodes": 5,
"total_frames": 910,
"total_tasks": 1,
"total_videos": 15,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:5"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_joint_0",
"main_joint_1",
"main_joint_2",
"main_joint_3",
"main_joint_4",
"main_joint_5",
"main_joint_6"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_joint_0",
"main_joint_1",
"main_joint_2",
"main_joint_3",
"main_joint_4",
"main_joint_5",
"main_joint_6"
]
},
"observation.images.cam_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_high": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_front": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
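As a minimal sketch (the episode and chunk indices below are illustrative), the `data_path` and `video_path` templates in `meta/info.json` resolve to concrete files like this:

```python
# Sketch: resolving the episode path templates copied from meta/info.json above.
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
video_path = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"

# With chunks_size = 1000, episode 3 lives in chunk 0.
episode_index = 3
episode_chunk = episode_index // 1000

parquet_file = data_path.format(episode_chunk=episode_chunk, episode_index=episode_index)
wrist_video = video_path.format(
    episode_chunk=episode_chunk,
    episode_index=episode_index,
    video_key="observation.images.cam_wrist",
)

print(parquet_file)  # data/chunk-000/episode_000003.parquet
print(wrist_video)   # videos/chunk-000/observation.images.cam_wrist/episode_000003.mp4
```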
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
mervinpraison/harupfall-accelerometer-images-plots-resultant | mervinpraison | 2025-04-30T23:03:17Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T21:05:28Z | null | ---
dataset_info:
features:
- name: sequence
dtype: string
- name: sensor
dtype: string
- name: raw_data
dtype: string
- name: main_label
dtype: string
- name: extracted_labels
dtype: string
- name: image
dtype: image
- name: plot
dtype: image
- name: peak_resultant_acceleration
dtype: float32
splits:
- name: train
num_bytes: 2821495516.4
num_examples: 4880
download_size: 0
dataset_size: 2821495516.4
---
# Dataset Card for "harupfall-accelerometer-images-plots-resultant"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
harpreetsahota/GroundUI-18k | harpreetsahota | 2025-04-30T22:58:59Z | 0 | 0 | [
"language:en",
"size_categories:10K<n<100K",
"modality:image",
"library:fiftyone",
"region:us",
"fiftyone",
"image"
] | [] | 2025-04-30T22:40:32Z | null | ---
annotations_creators: []
language: en
size_categories:
- 10K<n<100K
task_categories: []
task_ids: []
pretty_name: groundui_18k
tags:
- fiftyone
- image
dataset_summary: '
This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 18026 samples.
## Installation
If you haven''t already, install FiftyOne:
```bash
pip install -U fiftyone
```
## Usage
```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub
# Load the dataset
# Note: other available arguments include ''max_samples'', etc
dataset = load_from_hub("harpreetsahota/GroundUI-18k")
# Launch the App
session = fo.launch_app(dataset)
```
'
---
# Dataset Card for groundui_18k
<!-- Provide a quick summary of the dataset. -->
This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 18026 samples.
## Installation
If you haven't already, install FiftyOne:
```bash
pip install -U fiftyone
```
## Usage
```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub
# Load the dataset
# Note: other available arguments include 'max_samples', etc
dataset = load_from_hub("harpreetsahota/GroundUI-18k")
# Launch the App
session = fo.launch_app(dataset)
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Rabe3/egyptian-arabic-sharegpt | Rabe3 | 2025-04-30T22:32:13Z | 0 | 0 | [
"language:ar",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"conversations",
"egyptian",
"arabic",
"chat",
"sharegpt",
"fine-tuning",
"llama3"
] | [] | 2025-04-30T22:32:12Z | null | ---
language:
- ar
license: cc-by-4.0
pretty_name: Egyptian Arabic Conversations in ShareGPT Format
tags:
- conversations
- egyptian
- arabic
- chat
- sharegpt
- fine-tuning
- llama3
---
# Egyptian Arabic Conversations in ShareGPT Format
This dataset contains conversational examples in Egyptian Arabic dialect, formatted in the ShareGPT format
with `from`/`value` fields, which is compatible with Llama 3.1 fine-tuning using Unsloth.
## Dataset Structure
Each example contains:
- `conversations`: A list of messages with `from` and `value` fields
- `source`: Origin of the data ('egyptian_arabic')
- `score`: Quality score for the conversation (1.0)
## Example
```json
{
"conversations": [
{
"from": "human",
"value": "ممكن نتكلم شوية عن المال؟"
},
{
"from": "gpt",
"value": "طبعاً، أنا بحب أتكلم عن المال. إيه اللي حابب تعرفه بالظبط؟"
}
],
"source": "egyptian_arabic",
"score": 1.0
}
```
## Usage with Unsloth and Llama 3.1
This dataset is specifically formatted to work with Unsloth for fine-tuning Llama 3.1 models:
```python
from datasets import load_dataset
dataset = load_dataset("Rabe3/egyptian-arabic-sharegpt")
```
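As a sketch of how the `from`/`value` turns map onto the role/content messages most chat templates expect — the `human` → `user` and `gpt` → `assistant` mapping is a common ShareGPT convention, assumed here rather than mandated by this card:

```python
# Sketch: converting one ShareGPT-style record ("from"/"value") into
# role/content messages. The role mapping below is an assumption based
# on common ShareGPT practice.
ROLE_MAP = {"human": "user", "gpt": "assistant", "system": "system"}

def to_messages(example):
    return [
        {"role": ROLE_MAP[turn["from"]], "content": turn["value"]}
        for turn in example["conversations"]
    ]

record = {
    "conversations": [
        {"from": "human", "value": "ممكن نتكلم شوية عن المال؟"},
        {"from": "gpt", "value": "طبعاً، أنا بحب أتكلم عن المال."},
    ],
    "source": "egyptian_arabic",
    "score": 1.0,
}

messages = to_messages(record)
print(messages[0]["role"])  # user
```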
|
tarsur909/rewards_negative_log-train-with-reward-stats-10ep-seperated-translated | tarsur909 | 2025-04-30T22:28:41Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T22:28:39Z | null | ---
dataset_info:
features:
- name: chosen_reward
dtype: float64
- name: rejected_reward
dtype: float64
- name: gt_chosen_reward
dtype: float64
- name: gt_rejected_reward
dtype: float64
- name: chosen_reward_gap
dtype: float64
- name: rejected_reward_gap
dtype: float64
- name: overall_reward_gap
dtype: float64
- name: info
struct:
- name: article
dtype: 'null'
- name: id
dtype: string
- name: post
dtype: string
- name: site
dtype: 'null'
- name: subreddit
dtype: string
- name: title
dtype: string
- name: summaries
list:
- name: note
dtype: 'null'
- name: policy
dtype: string
- name: text
dtype: string
- name: choice
dtype: int32
- name: worker
dtype: string
- name: batch
dtype: string
- name: split
dtype: string
- name: extra
struct:
- name: confidence
dtype: 'null'
- name: query_token
sequence: int64
- name: query_attention_mask
sequence: int64
- name: query
dtype: string
- name: chosen
dtype: string
- name: chosen_token
sequence: int64
- name: chosen_attention_mask
sequence: int64
- name: chosen_token_len
dtype: int64
- name: rejected
dtype: string
- name: rejected_token
sequence: int64
- name: rejected_attention_mask
sequence: int64
- name: rejected_token_len
dtype: int64
- name: chosen_policy
dtype: string
- name: rejected_policy
dtype: string
- name: policies
dtype: string
- name: query_chosen
dtype: string
- name: query_chosen_token
sequence: int64
- name: query_chosen_attention_mask
sequence: int64
- name: query_chosen_token_len
dtype: int64
- name: query_rejected
dtype: string
- name: query_rejected_token
sequence: int64
- name: query_rejected_attention_mask
sequence: int64
- name: query_rejected_token_len
dtype: int64
- name: query_token_len
dtype: int64
- name: query_chosen_token_response_label
sequence: int64
- name: query_rejected_token_response_label
sequence: int64
- name: summary_rewards
sequence: float64
- name: edge_weight
dtype: int64
splits:
- name: train
num_bytes: 51112853
num_examples: 1000
download_size: 2195051
dataset_size: 51112853
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Theofficialravelogs/Rave | Theofficialravelogs | 2025-04-30T22:10:33Z | 0 | 0 | [
"license:bigscience-openrail-m",
"region:us"
] | [] | 2025-04-30T22:10:33Z | null | ---
license: bigscience-openrail-m
---
|
lmcinnes/arxiv_ml | lmcinnes | 2025-04-30T22:10:13Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T21:53:39Z | null | ---
dataset_info:
features:
- name: date_created
dtype: timestamp[ns]
- name: abstract
dtype: string
- name: title
dtype: string
- name: categories
dtype: string
- name: arxiv_id
dtype: string
- name: year
dtype: int32
- name: embedding_str
dtype: string
- name: embedding
sequence: float64
- name: data_map
sequence: float64
splits:
- name: train
num_bytes: 2450676134
num_examples: 281816
download_size: 1807632673
dataset_size: 2450676134
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "arxiv_ml"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This is a dataset of titles and abstracts of machine learning related papers from ArXiv. This data is derived from the [ArXiv dataset available on Kaggle](https://www.kaggle.com/datasets/Cornell-University/arxiv).
Papers were selected by taking all those with a category tag in the set {"cs.LG", "cs.AI", "cs.CL", "stat.ML", "cs.IR", "cs.NE", "cs.SC"}.
To supplement the titles and abstracts, the creation time of each paper and its categories are provided. To make exploration easier, embeddings of the
title and abstract have been made using the [Nomic-embed-v2-moe](https://huggingface.co/nomic-ai/nomic-embed-text-v2-moe) text embedding model, and a 2D
representation using UMAP is also provided.
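As a sketch of how the `embedding` column can drive semantic search, a simple cosine-similarity ranking looks like this (the toy 2D vectors stand in for the real embedding vectors shipped with the dataset):

```python
import numpy as np

# Sketch: ranking papers by cosine similarity over the `embedding` column.
# The vectors below are toy stand-ins for the real nomic-embed-text-v2-moe
# embeddings provided by the dataset.
def cosine_rank(query_vec, embeddings):
    q = query_vec / np.linalg.norm(query_vec)
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    scores = e @ q                 # cosine similarity of each row to the query
    return np.argsort(-scores)     # indices, most similar first

embeddings = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
order = cosine_rank(np.array([1.0, 0.1]), embeddings)
print(order[0])  # 0
```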
### Supported Tasks
This dataset is primarily aimed at tasks such as topic modelling, corpus triage, search and information retrieval, and other NLP tasks.
### Languages
The dataset is in English, although other languages may also be present.
## Dataset Creation
### Curation Rationale
The full ArXiv dataset is too large for many tasks. Subsetting to a selection of ArXiv categories related to AI and ML ensures
a reasonably sized dataset that should mostly contain topics familiar to those wishing to use it.
### Source Data
This data is derived from the [ArXiv dataset available on Kaggle](https://www.kaggle.com/datasets/Cornell-University/arxiv).
### Personal and Sensitive Information
This dataset contains publicly published information that was available under a CC0: public domain license via Kaggle.
There should be no personal or sensitive information in this dataset. If this is in error, please contact the maintainer
and we will endeavour to remedy any issues.
## Additional Information
### Dataset Curators
Leland McInnes for the curated subset, Cornell University for the initial full dataset.
### Licensing Information
Licensed as CC0: Public Domain.
|
HungVu2003/opt-350m_beta_0.5_alpha_0.4_num-company_3_dataset_1_for_gen_13 | HungVu2003 | 2025-04-30T22:08:08Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T22:08:07Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2787521
num_examples: 12498
download_size: 1517441
dataset_size: 2787521
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Rabe3/Egy-Conv-Unsloth-Format | Rabe3 | 2025-04-30T22:07:42Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T22:07:38Z | null | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: messages
sequence:
- name: role
dtype: string
- name: content
dtype: string
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 5096450
num_examples: 10000
download_size: 151141
dataset_size: 5096450
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.4_num-company_3_dataset_0_for_gen_13 | HungVu2003 | 2025-04-30T22:05:28Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T22:05:26Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 5275310
num_examples: 12498
download_size: 1431506
dataset_size: 5275310
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|