datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card
---|---|---|---|---|---|---|---|---|---|
ottovoncwim/MeetingBank-transcript-protocols | ottovoncwim | 2025-05-01T22:19:23Z | 0 | 0 | [
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T22:05:58Z | null | ---
dataset_info:
features:
- name: meeting_id
dtype: string
- name: source
dtype: string
- name: type
dtype: string
- name: reference
dtype: string
- name: city
dtype: string
- name: token_len
dtype: int64
- name: protocol
dtype: string
splits:
- name: train
num_bytes: 70022985
num_examples: 4931
- name: validation
num_bytes: 10771297
num_examples: 826
- name: test
num_bytes: 11423701
num_examples: 835
download_size: 46019344
dataset_size: 92217983
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
license: apache-2.0
language:
- en
---
# Dataset Card for ottovoncwim/MeetingBank-transcript-protocols
<!-- Provide a quick summary of the dataset. -->
This dataset is based on [lytang/MeetingBank-transcript](https://huggingface.co/datasets/lytang/MeetingBank-transcript), with two changes:
1. A protocol in a particular style was generated for each transcription;
2. Texts longer than 16k tokens (measured with the meta-llama/Llama-3.2-1B-Instruct tokenizer) were filtered out of the dataset.
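The token-length filter described above can be sketched as follows. The tokenizer is pluggable: the card states the Llama 3.2 tokenizer was used, but a whitespace tokenizer stands in here so the sketch stays self-contained; the `transcript` field name is an assumption based on the features listed in the frontmatter.

```python
from typing import Callable, Iterable

def filter_by_token_length(
    records: Iterable[dict],
    tokenize: Callable[[str], list],
    max_tokens: int = 16_000,
    text_key: str = "transcript",
) -> list:
    """Keep only records whose text tokenizes to at most max_tokens tokens."""
    return [r for r in records if len(tokenize(r[text_key])) <= max_tokens]

# Demo with a whitespace tokenizer; the real dataset used the
# meta-llama/Llama-3.2-1B-Instruct tokenizer (e.g. via transformers.AutoTokenizer).
records = [
    {"transcript": "short meeting"},
    {"transcript": "word " * 20_000},  # well over the 16k-token limit
]
kept = filter_by_token_length(records, str.split)
print(len(kept))  # 1
```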
|
niklasm222/mmlu2-stem-prolog | niklasm222 | 2025-05-01T22:16:48Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T22:16:43Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: subject
dtype: string
- name: answer
dtype: int64
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
sequence: string
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_box
sequence: string
- name: numerical_result
dtype: string
splits:
- name: train
num_bytes: 5823725
num_examples: 3153
download_size: 1688750
dataset_size: 5823725
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
imanolcb/fruit_classification_dataset | imanolcb | 2025-05-01T22:07:15Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T22:07:10Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': fresa
'1': limon
'2': manzana
'3': pera
'4': platano
'5': uva
splits:
- name: train
num_bytes: 1783692.0
num_examples: 52
- name: validation
num_bytes: 595513.0
num_examples: 18
download_size: 2381681
dataset_size: 2379205.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
jvelja/apps_backdoored_round_0 | jvelja | 2025-05-01T22:04:01Z | 13 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T19:58:56Z | null | ---
dataset_info:
features:
- name: problem_id
dtype: string
- name: problem
dtype: string
- name: backdooring_reasoning
dtype: string
- name: injected_solution
dtype: string
- name: honest_solution
dtype: string
splits:
- name: train
num_bytes: 6810656
num_examples: 2490
download_size: 3398296
dataset_size: 6810656
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jvelja/apps_clean_round_0 | jvelja | 2025-05-01T22:03:59Z | 5 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T19:58:53Z | null | ---
dataset_info:
features:
- name: problem_id
dtype: string
- name: problem
dtype: string
- name: reasoning
dtype: string
- name: solution
dtype: string
splits:
- name: train
num_bytes: 5353141
num_examples: 2490
download_size: 2775227
dataset_size: 5353141
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bismarck91/frA-enA-tokenised-qwen-synthetic | bismarck91 | 2025-05-01T21:59:28Z | 11 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T03:44:03Z | null | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: labels
sequence: int64
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 640112949
num_examples: 24900
download_size: 189950824
dataset_size: 640112949
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Shortheadband/Amiibo_Coins | Shortheadband | 2025-05-01T21:54:07Z | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T21:44:32Z | null | ---
license: apache-2.0
dataset_info:
features:
- name: Amiibo_ID
dtype: string
- name: Character
dtype: string
- name: Game_Series
dtype: string
- name: Release_Year
dtype: int64
- name: Region
dtype: string
- name: Rarity
dtype: string
splits:
- name: train
num_bytes: 6866
num_examples: 100
download_size: 4419
dataset_size: 6866
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AryaWu/oss-instruct | AryaWu | 2025-05-01T21:53:54Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T21:53:49Z | null | ---
dataset_info:
features:
- name: content
dtype: string
splits:
- name: train
num_bytes: 44185413.6
num_examples: 25992
- name: eval
num_bytes: 4909490.4
num_examples: 2888
download_size: 20558672
dataset_size: 49094904.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: eval
path: data/eval-*
---
|
ohassane/gptclonebench | ohassane | 2025-05-01T21:52:34Z | 425 | 1 | [
"language:code",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"semantic-clones",
"Moderately type-3",
"type-4",
"cross-language",
"java",
"python"
] | [] | 2025-04-19T14:26:45Z | null | ---
license: apache-2.0
language:
- code
task:
- code-clone-detection
tags:
- semantic-clones
- Moderately type-3
- type-4
- cross-language
- java
- python
configs:
- config_name: no_cot
default: true
data_files:
- split: train
path: data/train/all_clones.jsonl
- split: eval
path: data/eval/eval_clones.jsonl
- config_name: with_cot
data_files:
- split: train
path: data/cot_train/all_clones_cot.jsonl
- split: eval
path: data/cot_eval/eval_clones_cot.jsonl
---
# GPTCloneBench
**GPTCloneBench** is a dataset of code-clone pairs; the official GitHub page can be found here: https://github.com/srlabUsask/GPTCloneBench.
This dataset is unofficial and was built from the GPTCloneBench GitHub repository to aid in training LLMs for my project.
## Files
All four files live under `data/` in the repo. Each line in these JSONL files has the following fields:
- `code1` (string)
- `code2` (string)
- `clone_type` (string or null)
- `language` (string: `"java"`, `"python"`, or `"cross-language-java-python"`)
- `semantic` (boolean or null)
- `chain_of_thought` (string)
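A minimal sketch of reading one record in this format (the field values are illustrative; in practice you would iterate over the lines of, e.g., `data/train/all_clones.jsonl`):

```python
import json

# One record in the format described above, inlined so the sketch is
# self-contained; real code would read lines from the JSONL files under data/.
line = json.dumps({
    "code1": "def add(a, b):\n    return a + b",
    "code2": "int add(int a, int b) { return a + b; }",
    "clone_type": "type-4",
    "language": "cross-language-java-python",
    "semantic": True,
    "chain_of_thought": "",
})

record = json.loads(line)
pair = (record["language"], record["clone_type"])
print(pair)  # ('cross-language-java-python', 'type-4')
```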
|
niklasm222/mmlu-stem-prolog | niklasm222 | 2025-05-01T21:48:05Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T21:47:57Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: subject
dtype: string
- name: answer
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_box
dtype: string
- name: numerical_result
dtype: string
splits:
- name: test
num_bytes: 5293878
num_examples: 3153
download_size: 1466453
dataset_size: 5293878
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
starfishdata/playground_endocronology_notes_1500 | starfishdata | 2025-05-01T21:46:06Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T21:46:03Z | null | ---
dataset_info:
features:
- name: topic
dtype: string
- name: transcript
dtype: string
- name: structured_note
dtype: string
splits:
- name: train
num_bytes: 14809260
num_examples: 1930
download_size: 7314489
dataset_size: 14809260
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mothnaZl/l-sr-Qwen2.5-7B-Instruct-dup-best_of_n-VLLM-Skywork-o1-Open-PRM-Qwen-2.5-7B-completions | mothnaZl | 2025-05-01T21:44:40Z | 29 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T10:55:42Z | null | ---
dataset_info:
- config_name: mothnaZl_minerva_math--T-0--top_p-1.0--n-1--seed-0--agg_strategy-last--num-shots-0--prompt_type-self-rewarding-qwen25-math-cot--merged--evals
features:
- name: n
dtype: int64
- name: acc_naive
dtype: float64
- name: acc_weighted
dtype: float64
- name: acc_maj
dtype: float64
- name: pass@n
dtype: float64
- name: div_avg
dtype: float64
- name: div_sum
dtype: float64
- name: div_mean
dtype: float64
- name: Unigrams
dtype: float64
- name: Bigrams
dtype: float64
- name: Trigrams
dtype: float64
- name: Fourgrams
dtype: float64
- name: pass_tag
sequence: 'null'
- name: BM25
dtype: int64
splits:
- name: train
num_bytes: 108
num_examples: 1
download_size: 6024
dataset_size: 108
- config_name: mothnaZl_minerva_math--T-0.8--top_p-1.0--n-128--seed-0--agg_strategy-last--num-shots-0--prompt_type-self-rewarding-qwen25-math-cot--merged--evals
features:
- name: n
dtype: int64
- name: acc_naive
dtype: float64
- name: acc_weighted
dtype: float64
- name: acc_maj
dtype: float64
- name: pass@n
dtype: float64
- name: div_avg
dtype: float64
- name: div_sum
dtype: float64
- name: div_mean
dtype: float64
- name: Unigrams
dtype: float64
- name: Bigrams
dtype: float64
- name: Trigrams
dtype: float64
- name: Fourgrams
dtype: float64
- name: pass_tag
sequence: 'null'
- name: BM25
dtype: int64
splits:
- name: train
num_bytes: 864
num_examples: 8
download_size: 6644
dataset_size: 864
configs:
- config_name: mothnaZl_minerva_math--T-0--top_p-1.0--n-1--seed-0--agg_strategy-last--num-shots-0--prompt_type-self-rewarding-qwen25-math-cot--merged--evals
data_files:
- split: train
path: mothnaZl_minerva_math--T-0--top_p-1.0--n-1--seed-0--agg_strategy-last--num-shots-0--prompt_type-self-rewarding-qwen25-math-cot--merged--evals/train-*
- config_name: mothnaZl_minerva_math--T-0.8--top_p-1.0--n-128--seed-0--agg_strategy-last--num-shots-0--prompt_type-self-rewarding-qwen25-math-cot--merged--evals
data_files:
- split: train
path: mothnaZl_minerva_math--T-0.8--top_p-1.0--n-128--seed-0--agg_strategy-last--num-shots-0--prompt_type-self-rewarding-qwen25-math-cot--merged--evals/train-*
---
|
neelabh17/star-graph-deg-5-path-5-nodes-300_out_of_the_box_num_gen_100_Qwen2.5-7B-Instruct | neelabh17 | 2025-05-01T21:43:48Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T21:43:46Z | null | ---
dataset_info:
features:
- name: index
dtype: int64
- name: graph
dtype: string
- name: source
dtype: string
- name: destination
dtype: string
- name: path
dtype: string
- name: question
dtype: string
- name: response_0
dtype: string
- name: answer_0
dtype: string
- name: correct_0
dtype: int64
- name: response_1
dtype: string
- name: answer_1
dtype: string
- name: correct_1
dtype: int64
- name: response_2
dtype: string
- name: answer_2
dtype: string
- name: correct_2
dtype: int64
- name: response_3
dtype: string
- name: answer_3
dtype: string
- name: correct_3
dtype: int64
- name: response_4
dtype: string
- name: answer_4
dtype: string
- name: correct_4
dtype: int64
- name: response_5
dtype: string
- name: answer_5
dtype: string
- name: correct_5
dtype: int64
- name: response_6
dtype: string
- name: answer_6
dtype: string
- name: correct_6
dtype: int64
- name: response_7
dtype: string
- name: answer_7
dtype: string
- name: correct_7
dtype: int64
- name: response_8
dtype: string
- name: answer_8
dtype: string
- name: correct_8
dtype: int64
- name: response_9
dtype: string
- name: answer_9
dtype: string
- name: correct_9
dtype: int64
- name: response_10
dtype: string
- name: answer_10
dtype: string
- name: correct_10
dtype: int64
- name: response_11
dtype: string
- name: answer_11
dtype: string
- name: correct_11
dtype: int64
- name: response_12
dtype: string
- name: answer_12
dtype: string
- name: correct_12
dtype: int64
- name: response_13
dtype: string
- name: answer_13
dtype: string
- name: correct_13
dtype: int64
- name: response_14
dtype: string
- name: answer_14
dtype: string
- name: correct_14
dtype: int64
- name: response_15
dtype: string
- name: answer_15
dtype: string
- name: correct_15
dtype: int64
- name: response_16
dtype: string
- name: answer_16
dtype: string
- name: correct_16
dtype: int64
- name: response_17
dtype: string
- name: answer_17
dtype: string
- name: correct_17
dtype: int64
- name: response_18
dtype: string
- name: answer_18
dtype: string
- name: correct_18
dtype: int64
- name: response_19
dtype: string
- name: answer_19
dtype: string
- name: correct_19
dtype: int64
- name: response_20
dtype: string
- name: answer_20
dtype: string
- name: correct_20
dtype: int64
- name: response_21
dtype: string
- name: answer_21
dtype: string
- name: correct_21
dtype: int64
- name: response_22
dtype: string
- name: answer_22
dtype: string
- name: correct_22
dtype: int64
- name: response_23
dtype: string
- name: answer_23
dtype: string
- name: correct_23
dtype: int64
- name: response_24
dtype: string
- name: answer_24
dtype: string
- name: correct_24
dtype: int64
- name: response_25
dtype: string
- name: answer_25
dtype: string
- name: correct_25
dtype: int64
- name: response_26
dtype: string
- name: answer_26
dtype: string
- name: correct_26
dtype: int64
- name: response_27
dtype: string
- name: answer_27
dtype: string
- name: correct_27
dtype: int64
- name: response_28
dtype: string
- name: answer_28
dtype: string
- name: correct_28
dtype: int64
- name: response_29
dtype: string
- name: answer_29
dtype: string
- name: correct_29
dtype: int64
- name: response_30
dtype: string
- name: answer_30
dtype: string
- name: correct_30
dtype: int64
- name: response_31
dtype: string
- name: answer_31
dtype: string
- name: correct_31
dtype: int64
- name: response_32
dtype: string
- name: answer_32
dtype: string
- name: correct_32
dtype: int64
- name: response_33
dtype: string
- name: answer_33
dtype: string
- name: correct_33
dtype: int64
- name: response_34
dtype: string
- name: answer_34
dtype: string
- name: correct_34
dtype: int64
- name: response_35
dtype: string
- name: answer_35
dtype: string
- name: correct_35
dtype: int64
- name: response_36
dtype: string
- name: answer_36
dtype: string
- name: correct_36
dtype: int64
- name: response_37
dtype: string
- name: answer_37
dtype: string
- name: correct_37
dtype: int64
- name: response_38
dtype: string
- name: answer_38
dtype: string
- name: correct_38
dtype: int64
- name: response_39
dtype: string
- name: answer_39
dtype: string
- name: correct_39
dtype: int64
- name: response_40
dtype: string
- name: answer_40
dtype: string
- name: correct_40
dtype: int64
- name: response_41
dtype: string
- name: answer_41
dtype: string
- name: correct_41
dtype: int64
- name: response_42
dtype: string
- name: answer_42
dtype: string
- name: correct_42
dtype: int64
- name: response_43
dtype: string
- name: answer_43
dtype: string
- name: correct_43
dtype: int64
- name: response_44
dtype: string
- name: answer_44
dtype: string
- name: correct_44
dtype: int64
- name: response_45
dtype: string
- name: answer_45
dtype: string
- name: correct_45
dtype: int64
- name: response_46
dtype: string
- name: answer_46
dtype: string
- name: correct_46
dtype: int64
- name: response_47
dtype: string
- name: answer_47
dtype: string
- name: correct_47
dtype: int64
- name: response_48
dtype: string
- name: answer_48
dtype: string
- name: correct_48
dtype: int64
- name: response_49
dtype: string
- name: answer_49
dtype: string
- name: correct_49
dtype: int64
- name: response_50
dtype: string
- name: answer_50
dtype: string
- name: correct_50
dtype: int64
- name: response_51
dtype: string
- name: answer_51
dtype: string
- name: correct_51
dtype: int64
- name: response_52
dtype: string
- name: answer_52
dtype: string
- name: correct_52
dtype: int64
- name: response_53
dtype: string
- name: answer_53
dtype: string
- name: correct_53
dtype: int64
- name: response_54
dtype: string
- name: answer_54
dtype: string
- name: correct_54
dtype: int64
- name: response_55
dtype: string
- name: answer_55
dtype: string
- name: correct_55
dtype: int64
- name: response_56
dtype: string
- name: answer_56
dtype: string
- name: correct_56
dtype: int64
- name: response_57
dtype: string
- name: answer_57
dtype: string
- name: correct_57
dtype: int64
- name: response_58
dtype: string
- name: answer_58
dtype: string
- name: correct_58
dtype: int64
- name: response_59
dtype: string
- name: answer_59
dtype: string
- name: correct_59
dtype: int64
- name: response_60
dtype: string
- name: answer_60
dtype: string
- name: correct_60
dtype: int64
- name: response_61
dtype: string
- name: answer_61
dtype: string
- name: correct_61
dtype: int64
- name: response_62
dtype: string
- name: answer_62
dtype: string
- name: correct_62
dtype: int64
- name: response_63
dtype: string
- name: answer_63
dtype: string
- name: correct_63
dtype: int64
- name: response_64
dtype: string
- name: answer_64
dtype: string
- name: correct_64
dtype: int64
- name: response_65
dtype: string
- name: answer_65
dtype: string
- name: correct_65
dtype: int64
- name: response_66
dtype: string
- name: answer_66
dtype: string
- name: correct_66
dtype: int64
- name: response_67
dtype: string
- name: answer_67
dtype: string
- name: correct_67
dtype: int64
- name: response_68
dtype: string
- name: answer_68
dtype: string
- name: correct_68
dtype: int64
- name: response_69
dtype: string
- name: answer_69
dtype: string
- name: correct_69
dtype: int64
- name: response_70
dtype: string
- name: answer_70
dtype: string
- name: correct_70
dtype: int64
- name: response_71
dtype: string
- name: answer_71
dtype: string
- name: correct_71
dtype: int64
- name: response_72
dtype: string
- name: answer_72
dtype: string
- name: correct_72
dtype: int64
- name: response_73
dtype: string
- name: answer_73
dtype: string
- name: correct_73
dtype: int64
- name: response_74
dtype: string
- name: answer_74
dtype: string
- name: correct_74
dtype: int64
- name: response_75
dtype: string
- name: answer_75
dtype: string
- name: correct_75
dtype: int64
- name: response_76
dtype: string
- name: answer_76
dtype: string
- name: correct_76
dtype: int64
- name: response_77
dtype: string
- name: answer_77
dtype: string
- name: correct_77
dtype: int64
- name: response_78
dtype: string
- name: answer_78
dtype: string
- name: correct_78
dtype: int64
- name: response_79
dtype: string
- name: answer_79
dtype: string
- name: correct_79
dtype: int64
- name: response_80
dtype: string
- name: answer_80
dtype: string
- name: correct_80
dtype: int64
- name: response_81
dtype: string
- name: answer_81
dtype: string
- name: correct_81
dtype: int64
- name: response_82
dtype: string
- name: answer_82
dtype: string
- name: correct_82
dtype: int64
- name: response_83
dtype: string
- name: answer_83
dtype: string
- name: correct_83
dtype: int64
- name: response_84
dtype: string
- name: answer_84
dtype: string
- name: correct_84
dtype: int64
- name: response_85
dtype: string
- name: answer_85
dtype: string
- name: correct_85
dtype: int64
- name: response_86
dtype: string
- name: answer_86
dtype: string
- name: correct_86
dtype: int64
- name: response_87
dtype: string
- name: answer_87
dtype: string
- name: correct_87
dtype: int64
- name: response_88
dtype: string
- name: answer_88
dtype: string
- name: correct_88
dtype: int64
- name: response_89
dtype: string
- name: answer_89
dtype: string
- name: correct_89
dtype: int64
- name: response_90
dtype: string
- name: answer_90
dtype: string
- name: correct_90
dtype: int64
- name: response_91
dtype: string
- name: answer_91
dtype: string
- name: correct_91
dtype: int64
- name: response_92
dtype: string
- name: answer_92
dtype: string
- name: correct_92
dtype: int64
- name: response_93
dtype: string
- name: answer_93
dtype: string
- name: correct_93
dtype: int64
- name: response_94
dtype: string
- name: answer_94
dtype: string
- name: correct_94
dtype: int64
- name: response_95
dtype: string
- name: answer_95
dtype: string
- name: correct_95
dtype: int64
- name: response_96
dtype: string
- name: answer_96
dtype: string
- name: correct_96
dtype: int64
- name: response_97
dtype: string
- name: answer_97
dtype: string
- name: correct_97
dtype: int64
- name: response_98
dtype: string
- name: answer_98
dtype: string
- name: correct_98
dtype: int64
- name: response_99
dtype: string
- name: answer_99
dtype: string
- name: correct_99
dtype: int64
splits:
- name: train
num_bytes: 17251660
num_examples: 100
download_size: 7383597
dataset_size: 17251660
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
NodeLinker/KemSU-bench | NodeLinker | 2025-05-01T21:39:48Z | 3 | 0 | [
"task_categories:question-answering",
"language:ru",
"license:apache-2.0",
"size_categories:n<1K",
"region:us",
"kemerovo-state-university",
"kemsu",
"russian",
"benchmark",
"evaluation",
"question-answering",
"llm",
"fine-tuning"
] | [
"question-answering"
] | 2025-04-30T18:41:50Z | null | ---
license: apache-2.0
language:
- ru # ISO 639-1 code for Russian
pretty_name: "Kemerovo State University Benchmark" # Human-readable name
size_categories:
- "n<1K" # or "1K<n<10K", "10K<n<100K", etc. Give the approximate size
# - "1K<n<10K" # Example, if you have 1,000-9,999 examples
tags:
- kemerovo-state-university
- kemsu
- russian
- benchmark
- evaluation
- question-answering
- llm
- fine-tuning
task_categories:
- question-answering
# Add this field once you know the exact number of examples (lines in the .jsonl)
# num_elements: 532 # Example
---
# KemSU Benchmark Dataset (NodeLinker/KemSU-bench)
## Dataset Description
This dataset serves as a benchmark (evaluation set) for assessing the knowledge of Large Language Models (LLMs) specifically fine-tuned on information about Kemerovo State University (KemSU), Russia. It is designed to be used alongside the training dataset `NodeLinker/KemSU-dataset`.
The goal is to evaluate how well a fine-tuned model responds to questions about KemSU that were intended to be distinct from those encountered during training.
## Data Sources
The questions and reference answers were generated based on information sourced primarily from:
1. **Official Kemerovo State University Website:** Publicly available content from `kemsu.ru` and its associated subdomains.
2. **Public Telegram Channel:** News and updates from the `t.me/kemsu_live` channel.
## Dataset Structure
The data is provided in the standard **JSON Lines (`.jsonl`)** format. Each line represents a single conversational turn (a Q/A pair):
```json
[
{"role": "user", "content": "An evaluation question about KemSU."},
{"role": "model", "content": "A reference answer generated based on the sourced information."}
]
```
### Data Fields
* `role`: (string) Indicates the speaker role: `"user"` (question) or `"model"` (reference answer).
* `content`: (string) The text content of the question or the generated reference answer. Markdown formatting may be included.
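A minimal sketch of parsing one line of the benchmark, following the structure above (each line is a JSON array holding one user/model pair; the contents here are placeholders):

```python
import json

# One line of the benchmark in the format described above, inlined so the
# sketch is self-contained; real code would iterate over lines of the .jsonl.
line = json.dumps([
    {"role": "user", "content": "An evaluation question about KemSU."},
    {"role": "model", "content": "A reference answer."},
])

turn = json.loads(line)
question = next(m["content"] for m in turn if m["role"] == "user")
reference = next(m["content"] for m in turn if m["role"] == "model")
print(question)  # An evaluation question about KemSU.
```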
## Data Creation Process
This benchmark dataset was generated using **Gemini 2.5 Pro**, employing a similar methodology as the `NodeLinker/KemSU-dataset` training set, but with specific instructions aimed at creating a distinct evaluation set. The process involved:
1. Extracting relevant textual content from the sources (`kemsu.ru`, `t.me/kemsu_live`).
2. Processing the text into manageable chunks.
3. Prompting Gemini 2.5 Pro to generate question-answer pairs based on these chunks.
4. **Instructions to the LLM:** In addition to instructions for factual accuracy and neutrality (avoiding bias/propaganda), Gemini 2.5 Pro was specifically tasked with generating Q&A pairs that were **intended to be distinct from the primary training set**. This could involve focusing on different nuances, different facts within the same document, or alternative phrasings. The model relied on its capabilities to differentiate this set from the training data generation task.
5. **Human Oversight:** Similar to the training set, the generated Q&A pairs underwent only **minimal review** (spot-checking) by the dataset creator (NodeLinker). The process relies heavily on Gemini 2.5 Pro's ability to follow instructions for generating both accurate and distinct evaluation pairs.
**Note on Quality and Distinction:** While generated by Gemini 2.5 Pro with instructions for accuracy, neutrality, and distinction from the training set, this benchmark shares the same potential limitations as the training data (occasional LLM errors, misinterpretations, residual bias). Furthermore, the degree of actual non-overlap relies on the LLM's interpretation of the "distinctness" instruction and was not exhaustively verified manually.
## Intended Use
This dataset is intended for **evaluating LLMs fine-tuned** on KemSU-specific data (like `NodeLinker/KemSU-dataset`). It helps assess generalization to questions formulated differently or focusing on slightly different aspects than the training data, generated under similar LLM constraints. Interpret results considering the generation process.
**This dataset should NOT be used for training.**
## Limitations
* **Shared Generation Process:** Shares potential LLM-related inaccuracies/biases with the training set.
* **Non-Overlap:** Distinction from the training set relies on LLM instruction-following and minimal checks, not exhaustive manual verification.
* **Coverage:** Represents a sample of topics.
* **Timeliness:** Reflects sources circa early 2025.
* **Source Reliability:** Limited by sources (`kemsu.ru`, `t.me/kemsu_live`).
## Licensing Information
Licensed under the [Apache License 2.0](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md).
## Citation Information
```bibtex
@misc{kemsu_benchmark_nodelinker_2025,
author = {NodeLinker (Generated via Gemini 2.5 Pro with minimal supervision)},
title = {Kemerovo State University Benchmark Dataset},
year = {2025},
publisher = {Hugging Face},
journal = {Hugging Face Hub},
howpublished = {\url{https://huggingface.co/datasets/NodeLinker/KemSU-bench}},
note = {Evaluation set primarily generated by LLM (Gemini 2.5 Pro) based on kemsu.ru and t.me/kemsu_live, with instructions for distinctness from training set and minimal human review. Shares potential LLM generation limitations.}
}
``` |
harpreetsahota/guiact_smartphone_test | harpreetsahota | 2025-05-01T21:31:57Z | 0 | 0 | [
"task_categories:object-detection",
"language:en",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"library:fiftyone",
"region:us",
"fiftyone",
"image",
"object-detection"
] | [
"object-detection"
] | 2025-05-01T21:29:01Z | null | ---
annotations_creators: []
language: en
size_categories:
- 1K<n<10K
task_categories:
- object-detection
task_ids: []
pretty_name: guiact_smartphone_test
tags:
- fiftyone
- image
- object-detection
dataset_summary: '
This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 2079 samples.
## Installation
If you haven''t already, install FiftyOne:
```bash
pip install -U fiftyone
```
## Usage
```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub
# Load the dataset
# Note: other available arguments include ''max_samples'', etc
dataset = load_from_hub("harpreetsahota/guiact_smartphone_test")
# Launch the App
session = fo.launch_app(dataset)
```
'
---
# Dataset Card for guiact_smartphone_test
<!-- Provide a quick summary of the dataset. -->
This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 2079 samples.
## Installation
If you haven't already, install FiftyOne:
```bash
pip install -U fiftyone
```
## Usage
```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub
# Load the dataset
# Note: other available arguments include 'max_samples', etc
dataset = load_from_hub("harpreetsahota/guiact_smartphone_test")
# Launch the App
session = fo.launch_app(dataset)
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
ucmp137538/sftdataset-v3-packed-masked | ucmp137538 | 2025-05-01T21:25:02Z | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T21:01:27Z | null | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: labels
sequence: int64
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 93981797100
num_examples: 1764585
download_size: 24187636638
dataset_size: 93981797100
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
myScribe/testneu2_sft | myScribe | 2025-05-01T21:13:30Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T21:13:27Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 933030
num_examples: 29
download_size: 448004
dataset_size: 933030
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
neelabh17/star-graph-deg-5-path-5-nodes-300_out_of_the_box_num_gen_1_Qwen2.5-14B-Instruct | neelabh17 | 2025-05-01T21:13:27Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T21:13:27Z | null | ---
dataset_info:
features:
- name: index
dtype: int64
- name: graph
dtype: string
- name: source
dtype: string
- name: destination
dtype: string
- name: path
dtype: string
- name: question
dtype: string
- name: response_0
dtype: string
- name: answer_0
dtype: string
- name: correct_0
dtype: int64
splits:
- name: train
num_bytes: 236864
num_examples: 100
download_size: 99774
dataset_size: 236864
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.6_num-company_3_dataset_1_for_gen_2 | HungVu2003 | 2025-05-01T21:11:37Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T21:11:36Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2445845
num_examples: 12500
download_size: 1347680
dataset_size: 2445845
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
myScribe/testneu1_sft | myScribe | 2025-05-01T21:08:53Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T21:08:50Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 933030
num_examples: 29
download_size: 448004
dataset_size: 933030
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bxw315-umd/image-sft | bxw315-umd | 2025-05-01T21:08:24Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T15:27:57Z | null | ---
dataset_info:
features:
- name: statement
dtype: string
- name: image_change_class
dtype: string
- name: image_change_value1
dtype: string
- name: image_change_value2
dtype: string
- name: statement_change_class
dtype: string
- name: statement_change_value1
dtype: string
- name: statement_change_value2
dtype: string
- name: generation
dtype: string
- name: messages
list:
- name: content
list:
- name: text
dtype: string
- name: type
dtype: string
- name: role
dtype: string
- name: images
sequence: image
splits:
- name: train
num_bytes: 308028676.0
num_examples: 10000
download_size: 252719321
dataset_size: 308028676.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
myScribe/testneu_sft | myScribe | 2025-05-01T21:02:41Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T21:02:38Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 933030
num_examples: 29
download_size: 448004
dataset_size: 933030
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Aravindh25/trossen_pick_tshirt_3cam_v2 | Aravindh25 | 2025-05-01T20:59:36Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-05-01T19:08:14Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "trossen_ai_solo",
"total_episodes": 5,
"total_frames": 4838,
"total_tasks": 1,
"total_videos": 15,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:5"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_joint_0",
"main_joint_1",
"main_joint_2",
"main_joint_3",
"main_joint_4",
"main_joint_5",
"main_joint_6"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_joint_0",
"main_joint_1",
"main_joint_2",
"main_joint_3",
"main_joint_4",
"main_joint_5",
"main_joint_6"
]
},
"observation.images.cam_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_high": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_front": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
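As a minimal sketch, the `data_path` and `video_path` templates in `meta/info.json` above resolve to concrete file locations via ordinary Python string formatting (the helper name `episode_files` is illustrative, not part of LeRobot's API; the format strings and `chunks_size` are taken directly from the card):

```python
# Resolve the data_path and video_path templates from meta/info.json.
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
video_path = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"

chunks_size = 1000  # from meta/info.json

def episode_files(episode_index: int, video_key: str) -> tuple[str, str]:
    """Return the parquet and video file paths for one episode."""
    chunk = episode_index // chunks_size  # episodes are grouped in chunks of 1000
    return (
        data_path.format(episode_chunk=chunk, episode_index=episode_index),
        video_path.format(episode_chunk=chunk, video_key=video_key,
                          episode_index=episode_index),
    )

print(episode_files(0, "observation.images.cam_high"))
# → ('data/chunk-000/episode_000000.parquet',
#    'videos/chunk-000/observation.images.cam_high/episode_000000.mp4')
```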
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
Agentxxxx/yzl_enhanced_dataset_only_6195 | Agentxxxx | 2025-05-01T20:52:45Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T20:52:42Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 7283646
num_examples: 6195
download_size: 3912724
dataset_size: 7283646
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jasonzheng/result-mimo | jasonzheng | 2025-05-01T20:52:18Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T20:52:04Z | null | ---
dataset_info:
config_name: ioi-2024-mimo
features:
- name: problem_id
dtype: string
- name: year
dtype: string
- name: uuid
dtype: string
- name: code
dtype: string
- name: target_subtask
dtype: string
- name: code_compiles
dtype: bool
- name: target_subtask_score
dtype: float64
- name: target_subtask_status
dtype: string
- name: all_subtasks_points
dtype: float64
- name: all_subtasks_results
list:
- name: points
dtype: int64
- name: problem
dtype: string
- name: score
dtype: float64
- name: score_precision
dtype: int64
- name: status
dtype: string
- name: subtask
dtype: string
- name: test_results
list:
- name: feedback
dtype: string
- name: score
dtype: float64
- name: status
dtype: string
- name: test_name
dtype: string
- name: weighted_score
dtype: float64
splits:
- name: train
num_bytes: 224418903
num_examples: 2036
download_size: 14479434
dataset_size: 224418903
configs:
- config_name: ioi-2024-mimo
data_files:
- split: train
path: ioi-2024-mimo/train-*
---
|
jwshin95/subtask1_nobasev3 | jwshin95 | 2025-05-01T20:51:30Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"practice"
] | [
"robotics"
] | 2025-05-01T20:37:24Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- practice
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "trossen_ai_mobile",
"total_episodes": 1,
"total_frames": 257,
"total_tasks": 1,
"total_videos": 3,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
16
],
"names": [
"linear_vel",
"angular_vel",
"left_joint_0",
"left_joint_1",
"left_joint_2",
"left_joint_3",
"left_joint_4",
"left_joint_5",
"left_joint_6",
"right_joint_0",
"right_joint_1",
"right_joint_2",
"right_joint_3",
"right_joint_4",
"right_joint_5",
"right_joint_6"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
19
],
"names": [
"odom_x",
"odom_y",
"odom_theta",
"linear_vel",
"angular_vel",
"left_joint_0",
"left_joint_1",
"left_joint_2",
"left_joint_3",
"left_joint_4",
"left_joint_5",
"left_joint_6",
"right_joint_0",
"right_joint_1",
"right_joint_2",
"right_joint_3",
"right_joint_4",
"right_joint_5",
"right_joint_6"
]
},
"observation.images.cam_high": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_left_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_right_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
pragsri8/ultrafeedback_60658_qrandomized-neutrals_filtered_originalplusours_threshold0p2 | pragsri8 | 2025-05-01T20:50:17Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T20:50:03Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: neutral
dtype: bool
splits:
- name: train
num_bytes: 293966019.7840055
num_examples: 67160
download_size: 167978724
dataset_size: 293966019.7840055
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
myScribe/testneu | myScribe | 2025-05-01T20:48:04Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T20:48:01Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 933030
num_examples: 29
download_size: 448004
dataset_size: 933030
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
felixZzz/webinstruct_len6_61k_noBoxed | felixZzz | 2025-05-01T20:46:20Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T20:46:15Z | null | ---
dataset_info:
features:
- name: unique_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: string
- name: answer_type
dtype: string
- name: category
dtype: string
- name: difficulty
dtype: string
splits:
- name: train
num_bytes: 19722699.969153102
num_examples: 60994
download_size: 11079275
dataset_size: 19722699.969153102
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kothasuhas/multi-gold-37M-e1-N1.50M-mix8-iter9 | kothasuhas | 2025-05-01T20:45:23Z | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T20:42:48Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3137282763
num_examples: 1500000
- name: validation
num_bytes: 2035504
num_examples: 1000
download_size: 2156315457
dataset_size: 3139318267
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
kmccrock/small_scraped | kmccrock | 2025-05-01T20:43:38Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T20:43:11Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': canada goose
'1': harley davidson
'2': nike
'3': patagonia
'4': peter millar
splits:
- name: train
num_bytes: 227017001.0
num_examples: 916
download_size: 220962070
dataset_size: 227017001.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
pragsri8/ultrafeedback_60658_preference_dataset_question_randomized_neutrals_original_plus_ours_probA | pragsri8 | 2025-05-01T20:41:30Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T20:41:02Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: neutral
dtype: bool
- name: prob_A
dtype: float64
splits:
- name: train
num_bytes: 868109429
num_examples: 197968
download_size: 501475572
dataset_size: 868109429
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Qipei/SITE_task_pickup2 | Qipei | 2025-05-01T20:38:07Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-05-01T20:31:49Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "trossen_ai_mobile",
"total_episodes": 1,
"total_frames": 231,
"total_tasks": 1,
"total_videos": 3,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
16
],
"names": [
"linear_vel",
"angular_vel",
"left_joint_0",
"left_joint_1",
"left_joint_2",
"left_joint_3",
"left_joint_4",
"left_joint_5",
"left_joint_6",
"right_joint_0",
"right_joint_1",
"right_joint_2",
"right_joint_3",
"right_joint_4",
"right_joint_5",
"right_joint_6"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
19
],
"names": [
"odom_x",
"odom_y",
"odom_theta",
"linear_vel",
"angular_vel",
"left_joint_0",
"left_joint_1",
"left_joint_2",
"left_joint_3",
"left_joint_4",
"left_joint_5",
"left_joint_6",
"right_joint_0",
"right_joint_1",
"right_joint_2",
"right_joint_3",
"right_joint_4",
"right_joint_5",
"right_joint_6"
]
},
"observation.images.cam_high": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_left_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_right_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
jasonzheng/ioi-2024-mimo | jasonzheng | 2025-05-01T20:26:29Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T20:26:24Z | null | ---
dataset_info:
features:
- name: problem_id
dtype: string
- name: year
dtype: string
- name: uuid
dtype: string
- name: code
dtype: string
- name: subtask
dtype: string
splits:
- name: train
num_bytes: 5956698
num_examples: 2036
download_size: 1120056
dataset_size: 5956698
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
MaryahGreene/MyCoinDataset | MaryahGreene | 2025-05-01T20:25:37Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T20:21:41Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: description
dtype: string
- name: value
dtype: float64
- name: historical_value
dtype: float64
- name: label
dtype: string
- name: Image
dtype: image
- name: id
dtype: string
- name: Production date
dtype: string
- name: Find spot
dtype: string
- name: Materials
dtype: string
- name: Technique
dtype: string
- name: Inscription
dtype: string
- name: Subjects
dtype: string
- name: Assoc name
dtype: string
- name: Culture
dtype: string
- name: Section
dtype: string
- name: Place
dtype: string
splits:
- name: train
num_bytes: 727765842.546
num_examples: 12461
download_size: 619950199
dataset_size: 727765842.546
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jasonzheng/result-qwen3 | jasonzheng | 2025-05-01T20:23:44Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T20:23:30Z | null | ---
dataset_info:
config_name: ioi-2024-qwen3
features:
- name: problem_id
dtype: string
- name: year
dtype: string
- name: uuid
dtype: string
- name: code
dtype: string
- name: target_subtask
dtype: string
- name: code_compiles
dtype: bool
- name: target_subtask_score
dtype: float64
- name: target_subtask_status
dtype: string
- name: all_subtasks_points
dtype: float64
- name: all_subtasks_results
list:
- name: points
dtype: int64
- name: problem
dtype: string
- name: score
dtype: float64
- name: score_precision
dtype: int64
- name: status
dtype: string
- name: subtask
dtype: string
- name: test_results
list:
- name: feedback
dtype: string
- name: score
dtype: float64
- name: status
dtype: string
- name: test_name
dtype: string
- name: weighted_score
dtype: float64
splits:
- name: train
num_bytes: 770913560
num_examples: 2046
download_size: 38240097
dataset_size: 770913560
configs:
- config_name: ioi-2024-qwen3
data_files:
- split: train
path: ioi-2024-qwen3/train-*
---
|
huggingface/documentation-images | huggingface | 2025-05-01T20:20:58Z | 2,943,211 | 61 | [
"license:cc-by-nc-sa-4.0",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2022-03-02T23:29:22Z | 1 | ---
license: cc-by-nc-sa-4.0
---
### This dataset contains images used in the documentation of HuggingFace's libraries.
HF Team: Please make sure you optimize the assets before uploading them.
My favorite tool for this is https://tinypng.com/.
|
JulesGo/focusPoint4 | JulesGo | 2025-05-01T20:19:57Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T20:19:55Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: target
sequence:
sequence: int8
- name: annotation
dtype: string
splits:
- name: train
num_bytes: 27.0
num_examples: 2
download_size: 1865
dataset_size: 27.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
thesantatitan/text2svg-stack-follow-constraints | thesantatitan | 2025-05-01T20:14:48Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T20:10:29Z | null | ---
dataset_info:
features:
- name: Filename
dtype: string
- name: Svg
dtype: string
- name: caption_blip2
dtype: string
- name: caption_cogvlm
dtype: string
- name: caption_llava
dtype: string
splits:
- name: train
num_bytes: 1417575364
num_examples: 765096
download_size: 798527597
dataset_size: 1417575364
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
This dataset is derived from `starvector/text2svg-stack`. All rows that do not satisfy the `svg_constraints` for the Kaggle `drawing-with-llms` competition have been removed.
|
GaspardNW/Mousseur_10.912sec_4aug_4shiftAug_specmask0_nfft2048_hop512_sr48000 | GaspardNW | 2025-05-01T20:13:36Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T19:37:29Z | null | ---
dataset_info:
features:
- name: filename
dtype: string
- name: duration
dtype: int64
- name: sampling_rate
dtype: int64
- name: magnitude_array
sequence:
sequence:
sequence: float64
splits:
- name: train
num_bytes: 58749647575
num_examples: 7000
download_size: 49443059934
dataset_size: 58749647575
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
pragsri8/ultrafeedback_60658_preference_dataset_question_randomized_our_neutrals | pragsri8 | 2025-05-01T20:10:44Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T20:10:39Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: neutral
dtype: bool
splits:
- name: train
num_bytes: 168864072
num_examples: 38387
download_size: 97275264
dataset_size: 168864072
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hcasademunt/qwen-7b-medical-lmsys-responses | hcasademunt | 2025-05-01T20:09:37Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T20:09:35Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: question_id
dtype: string
- name: answer
dtype: string
- name: aligned
dtype: float64
- name: coherent
dtype: float64
splits:
- name: train
num_bytes: 834722
num_examples: 866
download_size: 523407
dataset_size: 834722
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SimpleStories/SimpleStories | SimpleStories | 2025-05-01T20:00:19Z | 174 | 14 | [
"task_categories:text-generation",
"language:en",
"license:mit",
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2504.09184",
"region:us",
"NLP",
"Distillation"
] | [
"text-generation"
] | 2024-09-04T09:10:57Z | 4 | ---
dataset_info:
features:
- name: story
dtype: string
- name: topic
dtype: string
- name: theme
dtype: string
- name: style
dtype: string
- name: feature
dtype: string
- name: grammar
dtype: string
- name: persona
dtype: string
- name: initial_word_type
dtype: string
- name: initial_letter
dtype: string
- name: word_count
dtype: int64
- name: character_count
dtype: int64
- name: num_paragraphs
dtype: int64
- name: avg_word_length
dtype: float64
- name: avg_sentence_length
dtype: float64
- name: flesch_reading_ease
dtype: float64
- name: flesch_kincaid_grade
dtype: float64
- name: dale_chall_readability_score
dtype: float64
- name: num_stories_in_completion
dtype: int64
- name: expected_num_stories_in_completion
dtype: int64
- name: generation_id
dtype: string
- name: model
dtype: string
splits:
- name: train
num_bytes: 3142781393.2482605
num_examples: 2115696
- name: test
num_bytes: 31745761.75173965
num_examples: 21371
download_size: 1681868249
dataset_size: 3174527155
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
language:
- en
pretty_name: SimpleStories
task_categories:
- text-generation
tags:
- NLP
- Distillation
license: mit
---
# 📘📕 SimpleStories 📙📗
SimpleStories is a dataset of more than 2 million model-generated short stories, created for training small, interpretable language models. The generation process is open source: to see how the dataset was generated, or to generate some stories yourself, head over to [this repository](https://github.com/lennart-finke/simple_stories_generate).
If you'd like to commission other languages or story formats, feel free to [send mail](mailto:[email protected]).
When using SimpleStories in your work, please cite the [SimpleStories data paper](https://arxiv.org/abs/2504.09184):
```
@article{finke2025parameterized,
title={Parameterized Synthetic Text Generation with SimpleStories},
author={Finke, Lennart and Dooms, Thomas and Allen, Mat and Rodriguez, Juan Diego and Nabeshima, Noa and Braun, Dan},
journal={arXiv preprint arXiv:2504.09184},
year={2025}
}
```
SimpleStories is inspired by [TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories) by Eldan and Li.
### Features
- Story annotation with high-level concepts: `theme`, `topic`, `style`, etc.
- Higher semantic and syntactic diversity through seeded story generation
- Generated by 2024 models
- Several NLP-metrics pre-computed to aid filtering
- ASCII-only guarantee for the English dataset
- Multilingual, with versions available in:
- [English](https://huggingface.co/datasets/lennart-finke/SimpleStories)
- [Japanese](https://huggingface.co/datasets/lennart-finke/SimpleStories-JA)
- And more in the future, hopefully!
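
The pre-computed metric columns make filtering straightforward. Below is a minimal sketch using pandas, with two hypothetical rows standing in for real dataset examples; the column names `flesch_reading_ease` and `flesch_kincaid_grade` come from the schema above:

```python
import pandas as pd

# Hypothetical sample rows; the real dataset ships these columns pre-computed.
df = pd.DataFrame({
    "story": ["Once there was a cat.", "The fox pondered epistemology."],
    "flesch_reading_ease": [95.0, 32.1],
    "flesch_kincaid_grade": [1.2, 12.4],
})

# Keep only very easy-to-read stories (a higher Flesch score means easier text).
easy = df[df["flesch_reading_ease"] >= 80.0]
```

The same boolean-indexing pattern applies to any of the other pre-computed columns, such as `num_stories_in_completion`.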
### Model Family
We have trained a model family on this dataset, available here:
- [SimpleStories-1.25M](https://huggingface.co/SimpleStories/SimpleStories-1.25M)
- [SimpleStories-5M](https://huggingface.co/SimpleStories/SimpleStories-5M)
- [SimpleStories-11M](https://huggingface.co/SimpleStories/SimpleStories-11M)
- [SimpleStories-30M](https://huggingface.co/SimpleStories/SimpleStories-30M)
- [SimpleStories-35M](https://huggingface.co/SimpleStories/SimpleStories-35M)
### Evaluation
[1] Simplicity and diversity compared with TinyStories, using gpt-4o as a model-as-a-judge.

[2] Accuracy of gpt-4o recovering labels given a story.

|
konwoo/test-e1w0.01-lr0.0001 | konwoo | 2025-05-01T19:40:33Z | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T19:37:56Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: log_weight
dtype: float32
splits:
- name: train
num_bytes: 3581804917
num_examples: 1500000
download_size: 2106243012
dataset_size: 3581804917
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
willnorris/cylinder-in-box-11 | willnorris | 2025-05-01T19:30:14Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-05-01T19:26:51Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 296,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.images.cam1": {
"dtype": "video",
"shape": [
480,
640,
3
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam2": {
"dtype": "video",
"shape": [
480,
640,
3
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": {
"motors": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll",
"gripper"
]
}
},
"action": {
"dtype": "float32",
"shape": [
6
],
"names": {
"motors": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll",
"gripper"
]
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"next.done": {
"dtype": "bool",
"shape": [
1
]
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
urbas/leetcode_hard_w_backdoor_sample_fixed | urbas | 2025-05-01T19:29:05Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T19:29:02Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: test_list
dtype: string
- name: backdoor
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 2064
num_examples: 8
download_size: 3720
dataset_size: 2064
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
justinsunqiu/multilingual_transcriptions_summarized_by_native_nonnative | justinsunqiu | 2025-05-01T19:27:08Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T19:27:02Z | null | ---
dataset_info:
features:
- name: image_link
dtype: string
- name: annotation_type
dtype: string
- name: ids
dtype: string
- name: languages
dtype: string
- name: transcriptions_translated
dtype: string
- name: culturally_distinct
dtype: string
- name: culturally_distinct_explanations
dtype: string
- name: summary
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 11935072.409520704
num_examples: 2760
- name: test
num_bytes: 1327560.5904792957
num_examples: 307
download_size: 7121397
dataset_size: 13262633.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
autoprogrammer/sqlqwen_3b_promsec | autoprogrammer | 2025-05-01T19:26:44Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T19:26:42Z | null | ---
dataset_info:
features:
- name: problem_id
dtype: string
- name: conversation
list:
- name: content
dtype: string
- name: role
dtype: string
- name: prompt_raw
dtype: string
- name: prompt_text
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 417432
num_examples: 85
download_size: 151037
dataset_size: 417432
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
autoprogrammer/sqlqwen_7b_promsec | autoprogrammer | 2025-05-01T19:25:58Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T19:22:13Z | null | ---
dataset_info:
features:
- name: problem_id
dtype: string
- name: conversation
list:
- name: content
dtype: string
- name: role
dtype: string
- name: prompt_raw
dtype: string
- name: prompt_text
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 400314
num_examples: 85
download_size: 141259
dataset_size: 400314
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
willnorris/cylinger-in-box-10 | willnorris | 2025-05-01T19:16:59Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-05-01T19:06:00Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 335,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.images.cam1": {
"dtype": "video",
"shape": [
480,
640,
3
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam2": {
"dtype": "video",
"shape": [
480,
640,
3
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": {
"motors": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll",
"gripper"
]
}
},
"action": {
"dtype": "float32",
"shape": [
6
],
"names": {
"motors": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll",
"gripper"
]
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"next.done": {
"dtype": "bool",
"shape": [
1
]
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
svjack/Victorique_De_Blois_Videos_Omni_Captioned | svjack | 2025-05-01T19:14:07Z | 0 | 0 | [
"size_categories:n<1K",
"modality:text",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-05-01T19:10:05Z | null | ---
configs:
- config_name: default
data_files:
- split: train
path:
- "*.mp4"
- "metadata.csv"
---



|
stolzenp/fundus-cleaned-filtered-62K | stolzenp | 2025-05-01T19:11:46Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T19:10:21Z | null | ---
dataset_info:
features:
- name: html
dtype: string
- name: plaintext
dtype: string
- name: json
struct:
- name: alternative_description
dtype: string
- name: alternative_title
dtype: string
- name: authors
sequence: string
- name: body
struct:
- name: sections
list:
- name: headline
sequence: string
- name: paragraphs
sequence: string
- name: summary
sequence: string
- name: description
dtype: string
- name: free_access
dtype: bool
- name: key_points
sequence: string
- name: publishing_date
dtype: string
- name: section
dtype: string
- name: title
dtype: string
- name: topics
sequence: string
- name: url
dtype: string
- name: publisher
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 2362588835
num_examples: 62761
- name: val
num_bytes: 3635353
num_examples: 90
- name: test
num_bytes: 4516036
num_examples: 90
download_size: 762504074
dataset_size: 2370740224
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
---
|
younghyopark/jasminetea_REAL_FINAL2 | younghyopark | 2025-05-01T19:10:04Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-05-01T18:42:19Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "bifranka",
"total_episodes": 1,
"total_frames": 75,
"total_tasks": 1,
"total_videos": 1,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.joint_positions": {
"dtype": "float32",
"shape": [
18
],
"names": [
"l_joint_1",
"l_joint_2",
"l_joint_3",
"l_joint_4",
"l_joint_5",
"l_joint_6",
"l_joint_7",
"l_gripper_left",
"l_gripper_right",
"r_joint_1",
"r_joint_2",
"r_joint_3",
"r_joint_4",
"r_joint_5",
"r_joint_6",
"r_joint_7",
"r_gripper_left",
"r_gripper_right"
]
},
"observation.ee_pose": {
"dtype": "float32",
"shape": [
14
],
"names": [
"l_pos_x",
"l_pos_y",
"l_pos_z",
"l_quat_w",
"l_quat_x",
"l_quat_y",
"l_quat_z",
"r_pos_x",
"r_pos_y",
"r_pos_z",
"r_quat_w",
"r_quat_x",
"r_quat_y",
"r_quat_z"
]
},
"action": {
"dtype": "float32",
"shape": [
16
],
"names": [
"l_target_joint_1",
"l_target_joint_2",
"l_target_joint_3",
"l_target_joint_4",
"l_target_joint_5",
"l_target_joint_6",
"l_target_joint_7",
"l_target_gripper",
"r_target_joint_1",
"r_target_joint_2",
"r_target_joint_3",
"r_target_joint_4",
"r_target_joint_5",
"r_target_joint_6",
"r_target_joint_7",
"r_target_gripper"
]
},
"action.ee_pose": {
"dtype": "float32",
"shape": [
32
],
"names": [
"l_matrix_0_0",
"l_matrix_0_1",
"l_matrix_0_2",
"l_matrix_0_3",
"l_matrix_1_0",
"l_matrix_1_1",
"l_matrix_1_2",
"l_matrix_1_3",
"l_matrix_2_0",
"l_matrix_2_1",
"l_matrix_2_2",
"l_matrix_2_3",
"l_matrix_3_0",
"l_matrix_3_1",
"l_matrix_3_2",
"l_matrix_3_3",
"r_matrix_0_0",
"r_matrix_0_1",
"r_matrix_0_2",
"r_matrix_0_3",
"r_matrix_1_0",
"r_matrix_1_1",
"r_matrix_1_2",
"r_matrix_1_3",
"r_matrix_2_0",
"r_matrix_2_1",
"r_matrix_2_2",
"r_matrix_2_3",
"r_matrix_3_0",
"r_matrix_3_1",
"r_matrix_3_2",
"r_matrix_3_3"
]
},
"action.gripper": {
"dtype": "float32",
"shape": [
2
],
"names": [
"l_gripper",
"r_gripper"
]
},
"rgb.global_0": {
"dtype": "video",
"shape": [
720,
1280,
3
],
"names": [
"rgb"
],
"info": {
"video.fps": 30.0,
"video.height": 720,
"video.width": 1280,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"pose.jasminetea": {
"dtype": "float32",
"shape": [
4,
4
],
"names": [
"pose"
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
pragsri8/ultrafeedback_60658_preference_dataset_question_randomized_neutrals_original_plus_ours | pragsri8 | 2025-05-01T19:07:19Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T19:06:57Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: neutral
dtype: bool
splits:
- name: train
num_bytes: 866525685
num_examples: 197968
download_size: 501023320
dataset_size: 866525685
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
konwoo/test-e4w0.01 | konwoo | 2025-05-01T19:02:01Z | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T18:59:00Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: log_weight
dtype: float32
splits:
- name: train
num_bytes: 3581804917
num_examples: 1500000
download_size: 2106250489
dataset_size: 3581804917
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hapaxlegomenon/negated_carolina2 | hapaxlegomenon | 2025-05-01T18:46:26Z | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T18:41:35Z | null | ---
dataset_info:
features:
- name: index
dtype: int64
- name: sentence
dtype: string
- name: source
dtype: string
- name: domain
dtype: string
splits:
- name: train
num_bytes: 1135424139.270134
num_examples: 6285797
download_size: 430566624
dataset_size: 1135424139.270134
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
uiuc-kang-lab/code-100k-rl | uiuc-kang-lab | 2025-05-01T18:45:30Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T18:43:32Z | null | ---
dataset_info:
features:
- name: data_source
dtype: string
- name: problem
dtype: string
- name: tests
dtype: string
splits:
- name: train
num_bytes: 8095773136.337813
num_examples: 100000
download_size: 1523536265
dataset_size: 8095773136.337813
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Abeyankar/mcity_clean_2844_with_rl | Abeyankar | 2025-05-01T18:45:26Z | 0 | 0 | [
"task_categories:image-classification",
"task_categories:object-detection",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"library:fiftyone",
"region:us",
"fiftyone",
"fisheye8k",
"image",
"image-classification",
"object-detection"
] | [
"image-classification",
"object-detection"
] | 2025-05-01T18:42:06Z | null | ---
annotations_creators: []
language: en
license: mit
size_categories:
- 1K<n<10K
task_categories:
- image-classification
- object-detection
task_ids: []
pretty_name: mcity_clean_daassssaa_2844
tags:
- fiftyone
- fisheye8k
- image
- image-classification
- object-detection
- object-detection
description: Removed erroneous annotations, and changed labels using cvat
dataset_summary: '
This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 2844 samples.
## Installation
If you haven''t already, install FiftyOne:
```bash
pip install -U fiftyone
```
## Usage
```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub
# Load the dataset
# Note: other available arguments include ''max_samples'', etc
dataset = load_from_hub("Abeyankar/mcity_clean_2844_with_rl")
# Launch the App
session = fo.launch_app(dataset)
```
'
---
# Dataset Card for mcity_clean_daassssaa_2844
<!-- Provide a quick summary of the dataset. -->
This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 2844 samples.
## Installation
If you haven't already, install FiftyOne:
```bash
pip install -U fiftyone
```
## Usage
```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub
# Load the dataset
# Note: other available arguments include 'max_samples', etc
dataset = load_from_hub("Abeyankar/mcity_clean_2844_with_rl")
# Launch the App
session = fo.launch_app(dataset)
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** mit
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
sondalex/arxiv-abstracts-2021-embeddings-10000 | sondalex | 2025-05-01T18:45:08Z | 0 | 0 | [
"license:cc0-1.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T18:13:48Z | null | ---
license: cc0-1.0
---
# arxiv-abstracts-2021-embeddings-10000
This repository contains a subset of the [gfissore/arxiv-abstracts-2021](https://huggingface.co/datasets/gfissore/arxiv-abstracts-2021) dataset, specifically the first 10,000 samples. It includes embeddings generated from three different models.
## Dataset Structure
Each dataset contains the following columns:
- **id**: Unique identifier for each entry.
- **content**: The abstract text from the arXiv paper.
- **categories**: The categories associated with the paper.
- **embedding**: The embedding representation of the abstract.
## Available Datasets
The repository includes three datasets, each corresponding to a different embedding model:
- `data/arxiv-abstract-arcticlarge.parquet`: Embeddings generated using the Arctic Large model. Model card here: [Snowflake/snowflake-arctic-embed-l-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-l-v2.0)
- `data/arxiv-abstract-arcticmedium.parquet`: Embeddings generated using the Arctic Medium model. Model card here: [Snowflake/snowflake-arctic-embed-m-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v2.0)
- `data/arxiv-abstract-minilm.parquet`: Embeddings generated using the MiniLM model. Model card here: [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
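
A common use of the `embedding` column is semantic similarity search over abstracts. Below is a minimal cosine-similarity sketch; the two short vectors are hypothetical stand-ins for real model embeddings, which are much higher-dimensional:

```python
import numpy as np

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors: dot product over norms.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 4-dimensional embeddings standing in for real ones.
a = np.array([0.1, 0.3, 0.5, 0.2])
b = np.array([0.1, 0.29, 0.52, 0.18])

sim = cosine_similarity(a, b)
```

In practice you would load one of the parquet files (e.g. with `pandas.read_parquet`) and compare a query embedding against the `embedding` column, keeping in mind that embeddings from different models are not comparable with each other.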
|
HappyAIUser/atma4-alpaca | HappyAIUser | 2025-05-01T18:33:58Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"alpaca-format",
"instruction-tuning",
"chat-data"
] | [] | 2025-05-01T17:07:35Z | null | ---
dataset: HappyAIUser/atma4-alpaca
tags:
- alpaca-format
- instruction-tuning
- chat-data
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4118672
num_examples: 7790
download_size: 1777889
dataset_size: 4118672
---
# Atma4-Alpaca Dataset
This is an Alpaca-formatted version of the [`HappyAIUser/Atma4`](https://huggingface.co/datasets/HappyAIUser/Atma4) dataset.
Each record contains:
- **instruction**: user prompt
- **input**: optional second context prompt
- **output**: model-generated response
Use this dataset to fine-tune LLMs on instruction-following tasks with or without input context.
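
As a sketch of how one record might be rendered into a single training prompt (the `###` section headers shown here are one common Alpaca-style convention, not something prescribed by this dataset):

```python
# Hypothetical record in the Alpaca format described above.
record = {
    "instruction": "Summarize the text.",
    "input": "The cat sat on the mat.",
    "output": "A cat sat on a mat.",
}

def to_prompt(rec):
    # Include the optional input context only when it is non-empty.
    if rec["input"]:
        return (f"### Instruction:\n{rec['instruction']}\n\n"
                f"### Input:\n{rec['input']}\n\n"
                f"### Response:\n{rec['output']}")
    return (f"### Instruction:\n{rec['instruction']}\n\n"
            f"### Response:\n{rec['output']}")

prompt = to_prompt(record)
```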
|
kh4dien/qwen-completions | kh4dien | 2025-05-01T18:24:06Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T18:24:03Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: toxic
num_bytes: 842309
num_examples: 600
- name: chat
num_bytes: 2230904
num_examples: 600
download_size: 1581657
dataset_size: 3073213
configs:
- config_name: default
data_files:
- split: toxic
path: data/toxic-*
- split: chat
path: data/chat-*
---
|
Yinpei/lerobot_data_collection | Yinpei | 2025-05-01T18:17:29Z | 0 | 0 | [
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:03:07Z | null | ---
license: apache-2.0
---
|
konwoo/test-e1w0.01 | konwoo | 2025-05-01T18:16:23Z | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T18:13:31Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: log_weight
dtype: float32
splits:
- name: train
num_bytes: 3581804917
num_examples: 1500000
download_size: 2106256616
dataset_size: 3581804917
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
svjack/Gosick_Videos_Omni_Captioned_1 | svjack | 2025-05-01T18:15:30Z | 0 | 0 | [
"size_categories:1K<n<10K",
"modality:text",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2025-05-01T16:28:51Z | null | ---
configs:
- config_name: default
data_files:
- split: train
path:
- "*.mp4"
- "metadata.csv"
---



|
Aravindh25/test_13 | Aravindh25 | 2025-05-01T18:15:22Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-05-01T18:15:16Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "trossen_ai_solo",
"total_episodes": 6,
"total_frames": 2426,
"total_tasks": 1,
"total_videos": 6,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:6"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_joint_0",
"main_joint_1",
"main_joint_2",
"main_joint_3",
"main_joint_4",
"main_joint_5",
"main_joint_6"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_joint_0",
"main_joint_1",
"main_joint_2",
"main_joint_3",
"main_joint_4",
"main_joint_5",
"main_joint_6"
]
},
"observation.images.cam_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
HungVu2003/opt-350m_beta_0.5_alpha_0.4_num-company_3_dataset_2_for_gen_19 | HungVu2003 | 2025-05-01T18:13:45Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T18:13:44Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 3906776
num_examples: 12498
download_size: 1124869
dataset_size: 3906776
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Starkosaure/Turn_Around_Object | Starkosaure | 2025-05-01T18:08:00Z | 0 | 0 | [
"task_categories:robotics",
"region:us",
"phosphobot",
"so100",
"phospho-dk"
] | [
"robotics"
] | 2025-05-01T17:57:37Z | null |
---
tags:
- phosphobot
- so100
- phospho-dk
task_categories:
- robotics
---
# Turn_Around_Object
**This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
|
mlfoundations-dev/d1_code_multiple_languages_3k | mlfoundations-dev | 2025-05-01T18:01:38Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:59:22Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: instruction_seed
dtype: string
- name: output
dtype: string
- name: source
dtype: string
- name: license
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
- name: difficulty
dtype: int64
- name: solution
dtype: string
- name: index
dtype: string
- name: _source
dtype: string
- name: difficulty_reasoning
dtype: string
- name: __original_row_idx
dtype: int64
- name: ms_id
dtype: int64
- name: reasoning
sequence: string
- name: deepseek_solution
sequence: string
- name: final_reasoning_trace
sequence: string
- name: correct
sequence: bool
- name: classifier_reasoning
dtype: string
- name: _majority_responses
sequence: string
- name: verified_final_reasoning_trace
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 6773958800.7
num_examples: 3160
download_size: 2760767041
dataset_size: 6773958800.7
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SKIML-ICL/med_retrieved_medllama_med_nli_adversarial_sentence | SKIML-ICL | 2025-05-01T18:00:19Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:59:51Z | null | ---
dataset_info:
config_name: adversarial
features:
- name: qid
dtype: int64
- name: norm_question
dtype: string
- name: norm_answers
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: hasanswer
dtype: bool
- name: answerable
dtype: string
- name: prompt_for_answer_gen
dtype: string
- name: answer_sentence
dtype: string
- name: ctxs
list:
- name: hasanswer
dtype: bool
- name: nli
dtype: string
- name: pid
dtype: int64
- name: rank
dtype: int64
- name: score
dtype: float64
- name: text
dtype: string
- name: title
dtype: string
- name: named_entities
sequence: string
- name: input
dtype: string
- name: prompt
dtype: string
- name: adversarial_sentence
dtype: string
splits:
- name: test
num_bytes: 522348922
num_examples: 15803
download_size: 275656360
dataset_size: 522348922
configs:
- config_name: adversarial
data_files:
- split: test
path: adversarial/test-*
---
|
mlfoundations-dev/d1_code_multiple_languages_1k | mlfoundations-dev | 2025-05-01T17:59:21Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:58:39Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: instruction_seed
dtype: string
- name: output
dtype: string
- name: source
dtype: string
- name: license
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
- name: difficulty
dtype: int64
- name: solution
dtype: string
- name: index
dtype: string
- name: _source
dtype: string
- name: difficulty_reasoning
dtype: string
- name: __original_row_idx
dtype: int64
- name: ms_id
dtype: int64
- name: reasoning
sequence: string
- name: deepseek_solution
sequence: string
- name: final_reasoning_trace
sequence: string
- name: correct
sequence: bool
- name: classifier_reasoning
dtype: string
- name: _majority_responses
sequence: string
- name: verified_final_reasoning_trace
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 2143657848.322785
num_examples: 1000
download_size: 873660666
dataset_size: 2143657848.322785
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/d1_code_multiple_languages_0.3k | mlfoundations-dev | 2025-05-01T17:58:37Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:58:23Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: instruction_seed
dtype: string
- name: output
dtype: string
- name: source
dtype: string
- name: license
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
- name: difficulty
dtype: int64
- name: solution
dtype: string
- name: index
dtype: string
- name: _source
dtype: string
- name: difficulty_reasoning
dtype: string
- name: __original_row_idx
dtype: int64
- name: ms_id
dtype: int64
- name: reasoning
sequence: string
- name: deepseek_solution
sequence: string
- name: final_reasoning_trace
sequence: string
- name: correct
sequence: bool
- name: classifier_reasoning
dtype: string
- name: _majority_responses
sequence: string
- name: verified_final_reasoning_trace
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 677395880.07
num_examples: 316
download_size: 272900174
dataset_size: 677395880.07
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nouhad/multiplication_train_1000_8x4-1000-gsm8k-verifier | nouhad | 2025-05-01T17:56:02Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:55:53Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ground_truth
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 106616
num_examples: 1000
download_size: 43557
dataset_size: 106616
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
heyIamUmair/query-classification-pakistani-legal-vs-nonlegal | heyIamUmair | 2025-05-01T17:56:02Z | 0 | 0 | [
"task_categories:text-classification",
"language:en",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"legal",
"pakistan",
"classification",
"query-filter",
"binary-classification",
"legal-nlp",
"query-detection",
"legal-vs-nonlegal"
] | [
"text-classification"
] | 2025-05-01T17:50:13Z | null | ---
pretty_name: Legal Query Classifier (Pakistan)
tags:
- legal
- pakistan
- classification
- query-filter
- binary-classification
- legal-nlp
- query-detection
- legal-vs-nonlegal
license: cc-by-4.0
task_categories:
- text-classification
language:
- en
---
# 🧠 Legal Query Classifier Dataset — Pakistan (Legal vs Non-Legal)
This is a **binary classification dataset** built to distinguish between **legal queries** and **non-legal queries** in the context of Pakistani law. It is designed to act as a **query filter** in legal NLP systems, chatbots, and RAG pipelines.
---
## 📁 Dataset Format
CSV with the following columns:
---
## 🔍 How to Load in Python
```python
from datasets import load_dataset
ds = load_dataset("heyIamUmair/query-classification-pakistani-legal-vs-nonlegal", data_files="query_classification.csv", split="train")
print(ds[0])
```
|
nouhad/multiplication_train_1000_7x6-1000-gsm8k-verifier | nouhad | 2025-05-01T17:55:43Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:55:42Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ground_truth
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 109612
num_examples: 1000
download_size: 45951
dataset_size: 109612
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nouhad/multiplication_train_1000_7x4-1000-gsm8k-verifier | nouhad | 2025-05-01T17:55:39Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:55:38Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ground_truth
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 103588
num_examples: 1000
download_size: 41150
dataset_size: 103588
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nouhad/multiplication_train_1000_6x4-1000-gsm8k-verifier | nouhad | 2025-05-01T17:55:31Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:55:29Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ground_truth
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 100664
num_examples: 1000
download_size: 38547
dataset_size: 100664
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nouhad/multiplication_train_1000_5x3-1000-gsm8k-verifier | nouhad | 2025-05-01T17:55:17Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:55:16Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ground_truth
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 94608
num_examples: 1000
download_size: 33567
dataset_size: 94608
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nouhad/multiplication_train_1000_5x2-1000-gsm8k-verifier | nouhad | 2025-05-01T17:55:14Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:55:12Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ground_truth
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 91618
num_examples: 1000
download_size: 30284
dataset_size: 91618
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/d1_code_all_large_1k | mlfoundations-dev | 2025-05-01T17:52:00Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:51:56Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: instruction_seed
dtype: string
- name: output
dtype: string
- name: source
dtype: string
- name: license
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
- name: difficulty
dtype: int64
- name: solution
dtype: string
- name: index
dtype: string
- name: _source
dtype: string
- name: difficulty_reasoning
dtype: string
- name: __original_row_idx
dtype: int64
- name: ms_id
dtype: int64
- name: reasoning
dtype: string
- name: deepseek_solution
dtype: string
- name: final_reasoning_trace
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 178417933.13291138
num_examples: 1000
download_size: 75805633
dataset_size: 178417933.13291138
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nouhad/multiplication_train_1000_8x2 | nouhad | 2025-05-01T17:51:44Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:51:43Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ground_truth
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 109664
num_examples: 1000
download_size: 37754
dataset_size: 109664
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nouhad/multiplication_train_1000_7x4 | nouhad | 2025-05-01T17:51:41Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:51:40Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ground_truth
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 112588
num_examples: 1000
download_size: 41195
dataset_size: 112588
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nouhad/multiplication_train_1000_7x3 | nouhad | 2025-05-01T17:51:40Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:51:39Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ground_truth
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 109626
num_examples: 1000
download_size: 38659
dataset_size: 109626
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nouhad/multiplication_train_1000_5x3 | nouhad | 2025-05-01T17:51:28Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:51:27Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ground_truth
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 103608
num_examples: 1000
download_size: 33612
dataset_size: 103608
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/d1_code_all_10k | mlfoundations-dev | 2025-05-01T17:50:37Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:50:06Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: instruction_seed
dtype: string
- name: output
dtype: string
- name: source
dtype: string
- name: license
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
- name: difficulty
dtype: int64
- name: solution
dtype: string
- name: index
dtype: string
- name: _source
dtype: string
- name: difficulty_reasoning
dtype: string
- name: __original_row_idx
dtype: int64
- name: ms_id
dtype: int64
- name: reasoning
dtype: string
- name: deepseek_solution
dtype: string
- name: final_reasoning_trace
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 1781021654.7468355
num_examples: 10000
download_size: 746514452
dataset_size: 1781021654.7468355
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/d1_code_all_3k | mlfoundations-dev | 2025-05-01T17:50:06Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:49:57Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: instruction_seed
dtype: string
- name: output
dtype: string
- name: source
dtype: string
- name: license
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
- name: difficulty
dtype: int64
- name: solution
dtype: string
- name: index
dtype: string
- name: _source
dtype: string
- name: difficulty_reasoning
dtype: string
- name: __original_row_idx
dtype: int64
- name: ms_id
dtype: int64
- name: reasoning
dtype: string
- name: deepseek_solution
dtype: string
- name: final_reasoning_trace
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 562802842.9
num_examples: 3160
download_size: 235080575
dataset_size: 562802842.9
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/d1_code_all_1k | mlfoundations-dev | 2025-05-01T17:49:57Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:49:53Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: instruction_seed
dtype: string
- name: output
dtype: string
- name: source
dtype: string
- name: license
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
- name: difficulty
dtype: int64
- name: solution
dtype: string
- name: index
dtype: string
- name: _source
dtype: string
- name: difficulty_reasoning
dtype: string
- name: __original_row_idx
dtype: int64
- name: ms_id
dtype: int64
- name: reasoning
dtype: string
- name: deepseek_solution
dtype: string
- name: final_reasoning_trace
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 178102165.47468355
num_examples: 1000
download_size: 74856874
dataset_size: 178102165.47468355
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/d1_code_fasttext_3k | mlfoundations-dev | 2025-05-01T17:48:41Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:48:28Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: instruction_seed
dtype: string
- name: output
dtype: string
- name: source
dtype: string
- name: license
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
- name: difficulty
dtype: int64
- name: solution
dtype: string
- name: index
dtype: string
- name: _source
dtype: string
- name: difficulty_reasoning
dtype: string
- name: __original_row_idx
dtype: int64
- name: ms_id
dtype: int64
- name: reasoning
dtype: string
- name: deepseek_solution
dtype: string
- name: final_reasoning_trace
dtype: string
- name: question_answer_string
dtype: string
- name: _fasttext_score
dtype: float64
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 667022951.7
num_examples: 3160
download_size: 279460014
dataset_size: 667022951.7
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Gwanwoo/kor_eng_3_1 | Gwanwoo | 2025-05-01T17:44:15Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:29:59Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: dump
dtype: string
- name: segment
dtype: string
- name: image_urls
sequence:
sequence: string
splits:
- name: train
num_bytes: 256173544
num_examples: 59673
download_size: 150340333
dataset_size: 256173544
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kaiwenw/distill-r1-qwen-1.5b-aime-2024-with-prm-test | kaiwenw | 2025-05-01T17:43:44Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:43:42Z | null | ---
dataset_info:
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: int64
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 13468758
num_examples: 100
download_size: 3145869
dataset_size: 13468758
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/d1_code_longest_3k | mlfoundations-dev | 2025-05-01T17:40:49Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:38:41Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: instruction_seed
dtype: string
- name: output
dtype: string
- name: source
dtype: string
- name: license
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
- name: difficulty
dtype: int64
- name: solution
dtype: string
- name: index
dtype: string
- name: _source
dtype: string
- name: difficulty_reasoning
dtype: string
- name: __original_row_idx
dtype: int64
- name: ms_id
dtype: int64
- name: reasoning
sequence: string
- name: deepseek_solution
sequence: string
- name: final_reasoning_trace
sequence: string
- name: _majority_responses
sequence: string
- name: verified_final_reasoning_trace
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 6912959890.4
num_examples: 3160
download_size: 2807261161
dataset_size: 6912959890.4
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/d1_code_longest_1k | mlfoundations-dev | 2025-05-01T17:38:41Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:37:58Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: instruction_seed
dtype: string
- name: output
dtype: string
- name: source
dtype: string
- name: license
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
- name: difficulty
dtype: int64
- name: solution
dtype: string
- name: index
dtype: string
- name: _source
dtype: string
- name: difficulty_reasoning
dtype: string
- name: __original_row_idx
dtype: int64
- name: ms_id
dtype: int64
- name: reasoning
sequence: string
- name: deepseek_solution
sequence: string
- name: final_reasoning_trace
sequence: string
- name: _majority_responses
sequence: string
- name: verified_final_reasoning_trace
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 2187645534.936709
num_examples: 1000
download_size: 905469339
dataset_size: 2187645534.936709
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/d1_code_longest_0.3k | mlfoundations-dev | 2025-05-01T17:37:56Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:37:41Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: instruction_seed
dtype: string
- name: output
dtype: string
- name: source
dtype: string
- name: license
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
- name: difficulty
dtype: int64
- name: solution
dtype: string
- name: index
dtype: string
- name: _source
dtype: string
- name: difficulty_reasoning
dtype: string
- name: __original_row_idx
dtype: int64
- name: ms_id
dtype: int64
- name: reasoning
sequence: string
- name: deepseek_solution
sequence: string
- name: final_reasoning_trace
sequence: string
- name: _majority_responses
sequence: string
- name: verified_final_reasoning_trace
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 691295989.04
num_examples: 316
download_size: 284672345
dataset_size: 691295989.04
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
LuminaAI/Pima_Indians-Tabular | LuminaAI | 2025-05-01T17:35:15Z | 0 | 0 | [
"license:mit",
"region:us"
] | [] | 2025-05-01T16:13:49Z | null | ---
license: mit
---
## Pima Indians Tabular Data RCL Dataset
### Overview
This dataset contains tabular data structured explicitly for classification tasks using Lumina AI's Random Contrast Learning (RCL) algorithm via the PrismRCL application. Unlike textual or imaging datasets, tabular datasets contain numeric or categorical data organized in individual `.txt` files with space-separated values.
### Dataset Structure
The dataset structure for tabular classification training:
```
pima-indians_data/
train/
[class_1]/
sample_001.txt
sample_002.txt
...
[class_2]/
sample_001.txt
sample_002.txt
...
test/
[class_1]/
sample_001.txt
sample_002.txt
...
[class_2]/
sample_001.txt
sample_002.txt
...
```
- **Classes:** Folder names represent distinct data classes.
- **Tabular Samples:** Each `.txt` file represents a single data sample with features as space-separated values.
### Tabular Data Preparation
For tabular datasets, PrismRCL has specific preparation requirements:
- Data samples must be in `.txt` format.
- Each file should contain a single line with space-separated features.
- No normalization of numerical values is required when using PrismRCL version 2.4.0 or later.
- File names must be unique across all class folders.
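The rules above can be sketched as a small helper that writes each sample as a one-line, space-separated `.txt` file into its class folder. This is a minimal illustration, not part of PrismRCL itself; the function name and the class-prefixed file naming (used here to keep names unique across folders) are our own conventions.

```python
from pathlib import Path

def write_tabular_samples(root, class_name, samples):
    """Write each sample as a single line of space-separated values.

    Layout matches the PrismRCL expectation: root/<class_name>/<file>.txt.
    Prefixing file names with the class keeps them unique across folders.
    """
    class_dir = Path(root) / class_name
    class_dir.mkdir(parents=True, exist_ok=True)
    paths = []
    for i, features in enumerate(samples, start=1):
        path = class_dir / f"{class_name}_sample_{i:03d}.txt"
        path.write_text(" ".join(str(v) for v in features) + "\n")
        paths.append(path)
    return paths
```

For example, `write_tabular_samples(train_dir, "diabetic", [[6, 148, 72, 35, 0, 33.6, 0.627, 50]])` produces `train_dir/diabetic/diabetic_sample_001.txt` containing one space-separated line.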
### Usage (Tabular-specific)
Use PrismRCL for training with tabular data:
```
C:\PrismRCL\PrismRCL.exe naivebayes rclticks=10 ^
data=C:\path\to\pima-indians_data\train testdata=C:\path\to\pima-indians_data\test ^
savemodel=C:\path\to\models\pima_indians_model.classify ^
log=C:\path\to\log_files stopwhendone
```
### Explanation of Command
- **naivebayes:** Specifies Naive Bayes as the evaluation method for tabular classification.
- **rclticks:** Number of RCL iterations during training.
- **data & testdata:** Paths to training and testing tabular datasets.
- **savemodel:** Output path for the trained classification model.
- **log:** Directory for storing log files.
- **stopwhendone:** Automatically terminates the session after training completion.
### Auto Optimize
PrismRCL includes an **Auto Optimize** feature designed to automatically identify optimal training parameters for your specific dataset, significantly streamlining the model training process. This feature removes the need for manual parameter tuning by systematically evaluating your data to determine the most effective settings for evaluation method, `rclticks`, `boxdown`, and other relevant parameters.
**How to Use Auto Optimize:**
Run the following command with your dataset:
```cmd
C:\PrismRCL\PrismRCL.exe auto-optimize data=C:\path\to\your_dataset\train log=C:\path\to\log_files
```
**Explanation:**
- **auto-optimize:** Initiates PrismRCL’s parameter optimization process.
- **data:** Path to your training dataset.
- **log:** Specifies the directory where PrismRCL will save a detailed summary file with optimal parameters determined by the optimization process.
After execution, PrismRCL generates an optimization summary file in your specified log directory (`_optimize_summary_mm_dd_yy_hh_mm_ss.txt`). This file will list the optimal parameters, which you should then apply in your training commands to achieve optimal model performance.
### License
This dataset is licensed under the MIT License.
### Original Source
Prepared explicitly by Lumina AI for RCL-based tabular data classification training. Please credit Lumina AI when using this dataset in research or applications.
### Additional Information
Refer to the PrismRCL Technical Documentation v2.6.2 for more detailed guidance on tabular data preparation and parameter specifications.
|
LuminaAI/Satellite_4_Class-Image | LuminaAI | 2025-05-01T17:26:30Z | 0 | 0 | [
"license:mit",
"region:us"
] | [] | 2025-05-01T16:15:59Z | null | ---
license: mit
---
## Satellite Imaging RCL Dataset
### Overview
This dataset contains satellite images structured explicitly for classification tasks using Lumina AI's Random Contrast Learning (RCL) algorithm via the PrismRCL application. Unlike LLM datasets, imaging datasets contain individual .png files organized by class.
### Dataset Structure
The dataset structure for image classification training:
```
satellite2-png/
train/
[class_1]/
image_001.png
image_002.png
...
[class_2]/
image_001.png
image_002.png
...
test/
[class_1]/
image_001.png
image_002.png
...
[class_2]/
image_001.png
image_002.png
...
```
- **Classes:** Folder names represent distinct image classes.
- **Images:** Each image file (.png) represents a single data sample.
### Image Data Preparation
For image datasets, PrismRCL has specific preparation requirements:
- Images must be in .png format.
- No resizing or normalization is required when using PrismRCL version 2.4.0 or later.
- File names must be unique across all class folders.
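Before training, it can help to validate a split against these rules. The sketch below (our own helper, not part of PrismRCL) checks that every file in a split is a `.png` and that no file name repeats across class folders.

```python
from collections import Counter
from pathlib import Path

def check_png_layout(split_dir):
    """Validate a PrismRCL image split directory.

    Raises ValueError if a non-PNG file is found or if a file name
    appears in more than one class folder; returns the sample count.
    """
    split = Path(split_dir)
    names = []
    for class_dir in sorted(p for p in split.iterdir() if p.is_dir()):
        for img in class_dir.iterdir():
            if img.suffix.lower() != ".png":
                raise ValueError(f"non-PNG file: {img}")
            names.append(img.name)
    dupes = [n for n, c in Counter(names).items() if c > 1]
    if dupes:
        raise ValueError(f"duplicate file names across classes: {dupes}")
    return len(names)
```

Run it once per split (e.g. `check_png_layout("satellite2-png/train")`) before invoking PrismRCL.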
### Usage (Image-specific)
Use PrismRCL for training with image data:
```
C:\PrismRCL\PrismRCL.exe chisquared rclticks=10 boxdown=0 ^
data=C:\path\to\satellite2-png\train testdata=C:\path\to\satellite2-png\test ^
savemodel=C:\path\to\models\satellite_image_model.classify ^
log=C:\path\to\log_files stopwhendone
```
### Explanation of Command
- **chisquared:** Specifies Chi-squared as the evaluation method for training.
- **rclticks:** Number of RCL iterations during training.
- **boxdown:** RCL-specific training parameter.
- **data & testdata:** Paths to training and testing image datasets.
- **savemodel:** Output path for the trained classification model.
- **log:** Directory for storing log files.
- **stopwhendone:** Automatically terminates the session after training completion.
### License
This dataset is licensed under the MIT License.
### Original Source
Prepared explicitly by Lumina AI for RCL-based image classification training. Please credit Lumina AI when using this dataset in research or applications.
### Additional Information
Refer to the PrismRCL Technical Documentation v2.6.2 for more detailed guidance on imaging data preparation and parameter specifications. |
LuminaAI/A_Christmas_Carol-LLM | LuminaAI | 2025-05-01T17:22:19Z | 0 | 0 | [
"license:mit",
"region:us"
] | [] | 2025-05-01T16:02:12Z | null | ---
license: mit
---
## A Christmas Carol RCL LLM Dataset
### Overview
This dataset is explicitly structured for training Large Language Models (LLMs) using Lumina AI's Random Contrast Learning (RCL) algorithm via the PrismRCL application. Unlike standard classification datasets, LLM datasets require textual data formatted into input sequences and corresponding target tokens.
### Dataset Structure
For LLM training, the dataset structure differs significantly from traditional classification datasets:
```
a-christmas-carol-rcl-mm/
train/
[class_token_1]/
values.txt
[class_token_2]/
values.txt
...
test/
[class_token_1]/
values.txt
[class_token_2]/
values.txt
...
```
- **Class tokens:** Folder names represent the target token for sequences.
- **values.txt:** Each line within `values.txt` files represents an individual input sequence mapping to the target token of its containing folder.
### LLM Data Preparation
PrismRCL requires LLM datasets to follow specific formatting distinct from classification tasks:
- Clean raw text data (removing overly long or non-printable characters).
- Create input sequences with a sliding-window method. For instance, a 4-token input sequence predicts the 5th token.
- Each input sequence is stored as a single line within the class-specific `values.txt` files.
**Example:**\
Original text: "Marley was dead: to begin with."
- Input: "Marley was dead: to" → Target: "begin"
- Input: "was dead: to begin" → Target: "with"
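The sliding-window preparation described above can be sketched as follows. This is a minimal illustration with a naive whitespace tokenizer; it omits the text-cleaning step the card mentions, and in practice target tokens would also need sanitizing before being used as folder names.

```python
from collections import defaultdict
from pathlib import Path

def build_llm_dataset(text, out_dir, window=4):
    """Sliding-window prep: each `window`-token input predicts the next
    token; input sequences are grouped by their target token and written
    one per line into <out_dir>/<target>/values.txt."""
    tokens = text.split()
    by_target = defaultdict(list)
    for i in range(len(tokens) - window):
        seq = " ".join(tokens[i:i + window])
        target = tokens[i + window]
        by_target[target].append(seq)
    out = Path(out_dir)
    for target, seqs in by_target.items():
        class_dir = out / target
        class_dir.mkdir(parents=True, exist_ok=True)
        (class_dir / "values.txt").write_text("\n".join(seqs) + "\n")
    return dict(by_target)
```

Applied to "Marley was dead: to begin with." with `window=4`, this yields the two input/target pairs shown above, each stored under its target-token folder.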
### Usage (LLM-specific)
Use PrismRCL's `llm` parameter for LLM-specific training:
```
C:\PrismRCL\PrismRCL.exe llm naivebayes directional rclticks=67 readtextbyline ^
data=C:\path\to\a-christmas-carol-rcl-mm\train testdata=C:\path\to\a-christmas-carol-rcl-mm\test ^
savemodel=C:\path\to\models\christmas_carol_llm.classify ^
log=C:\path\to\log_files stopwhendone
```
### Explanation of Command
- **llm:** Specifies the dataset as an LLM training dataset.
- **naivebayes:** Evaluation method suitable for LLM data.
- **directional:** Maintains token order, essential for language modeling.
- **rclticks:** Sets RCL discretization granularity.
- **readtextbyline:** Treats each line in the text files as separate data samples.
- **data & testdata:** Paths to training and testing datasets.
- **savemodel:** Output path for the trained LLM model.
- **log:** Directory for storing log files.
- **stopwhendone:** Automatically terminates the session after training completion.
### License
This dataset is licensed under the MIT License.
### Original Source
Prepared explicitly by Lumina AI for RCL-based LLM training. Please credit Lumina AI when using this dataset in research or applications.
### Additional Information
Refer to the PrismRCL Technical Documentation v2.6.2 for more detailed guidance on LLM data preparation and parameter specifications.
|
mlfoundations-dev/d1_code_shortest_0.3k | mlfoundations-dev | 2025-05-01T17:22:12Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:21:58Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: instruction_seed
dtype: string
- name: output
dtype: string
- name: source
dtype: string
- name: license
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
- name: difficulty
dtype: int64
- name: solution
dtype: string
- name: index
dtype: string
- name: _source
dtype: string
- name: difficulty_reasoning
dtype: string
- name: __original_row_idx
dtype: int64
- name: ms_id
dtype: int64
- name: reasoning
sequence: string
- name: deepseek_solution
sequence: string
- name: final_reasoning_trace
sequence: string
- name: _majority_responses
sequence: string
- name: verified_final_reasoning_trace
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 650328556.24
num_examples: 316
download_size: 256251526
dataset_size: 650328556.24
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
LuminaAI/Doctrina_Christiana-LLM | LuminaAI | 2025-05-01T17:20:53Z | 0 | 0 | [
"license:mit",
"region:us"
] | [] | 2025-05-01T16:02:54Z | null | ---
license: mit
---
## Doctrina Christiana RCL LLM Dataset
### Overview
This dataset is explicitly structured for training Large Language Models (LLMs) using Lumina AI's Random Contrast Learning (RCL) algorithm via the PrismRCL application. Unlike standard classification datasets, LLM datasets require textual data formatted into input sequences and corresponding target tokens.
### Dataset Structure
For LLM training, the dataset structure differs significantly from traditional classification datasets:
```
doctrina-christiana-rcl-mm/
train/
[class_token_1]/
values.txt
[class_token_2]/
values.txt
...
test/
[class_token_1]/
values.txt
[class_token_2]/
values.txt
...
```
- **Class tokens:** Folder names represent the target token for sequences.
- **values.txt:** Each line within `values.txt` files represents an individual input sequence mapping to the target token of its containing folder.
### LLM Data Preparation
PrismRCL requires LLM datasets to follow specific formatting distinct from classification tasks:
- Clean raw text data (removing overly long or non-printable characters).
- Create input sequences with a sliding-window method. For instance, a 4-token input sequence predicts the 5th token.
- Each input sequence is stored as a single line within the class-specific `values.txt` files.
**Example:**\
Original text: "Ama a tu prójimo como a ti mismo."
- Input: "Ama a tu prójimo" → Target: "como"
- Input: "a tu prójimo como" → Target: "a"
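Grouping the (input, target) pairs into the class-token folder layout shown above can be sketched as follows. This is an illustrative script under the assumption that target tokens are valid folder names; it is not part of PrismRCL itself.

```python
import os
from collections import defaultdict

def write_rcl_layout(pairs, root):
    """Write (input_tokens, target_token) pairs into the
    [root]/[class_token]/values.txt layout, one input sequence per line."""
    by_target = defaultdict(list)
    for inp, target in pairs:
        by_target[target].append(" ".join(inp))
    for target, lines in by_target.items():
        class_dir = os.path.join(root, target)
        os.makedirs(class_dir, exist_ok=True)
        path = os.path.join(class_dir, "values.txt")
        with open(path, "w", encoding="utf-8") as f:
            f.write("\n".join(lines) + "\n")
```

Running this over the full token stream of the training split produces one folder per observed target token, matching the directory tree PrismRCL expects.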
### Usage (LLM-specific)
Use PrismRCL's `llm` parameter for LLM-specific training:
```
C:\PrismRCL\PrismRCL.exe llm naivebayes directional rclticks=67 readtextbyline ^
data=C:\path\to\doctrina-christiana-rcl-mm\train testdata=C:\path\to\doctrina-christiana-rcl-mm\test ^
savemodel=C:\path\to\models\doctrina_christiana_llm.classify ^
log=C:\path\to\log_files stopwhendone
```
### Explanation of Command
- **llm:** Specifies the dataset as an LLM training dataset.
- **naivebayes:** Evaluation method suitable for LLM data.
- **directional:** Maintains token order, essential for language modeling.
- **rclticks:** Sets RCL discretization granularity.
- **readtextbyline:** Treats each line in the text files as separate data samples.
- **data & testdata:** Paths to training and testing datasets.
- **savemodel:** Output path for the trained LLM model.
- **log:** Directory for storing log files.
- **stopwhendone:** Automatically terminates the session after training completion.
### License
This dataset is licensed under the MIT License.
### Original Source
Prepared explicitly by Lumina AI for RCL-based LLM training. Please credit Lumina AI when using this dataset in research or applications.
### Additional Information
Refer to the PrismRCL Technical Documentation v2.6.2 for more detailed guidance on LLM data preparation and parameter specifications.
|
osama24sy/llama3.2-3b-it-10k-qwen-singleturn-onesolution-64-results-20250501-17461194036674 | osama24sy | 2025-05-01T17:17:39Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:17:37Z | null | ---
dataset_info:
features:
- name: index
dtype: int64
- name: numbers
sequence: int64
- name: operations
sequence:
sequence: string
- name: response
dtype: string
- name: token_count
dtype: int64
splits:
- name: train
num_bytes: 249314
num_examples: 150
download_size: 106742
dataset_size: 249314
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
LuminaAI/Pride_and_Prejudice-LLM | LuminaAI | 2025-05-01T17:16:14Z | 0 | 0 | [
"license:mit",
"region:us"
] | [] | 2025-05-01T16:04:46Z | null | ---
license: mit
---
## Pride and Prejudice RCL LLM Dataset
### Overview
This dataset is explicitly structured for training Large Language Models (LLMs) using Lumina AI's Random Contrast Learning (RCL) algorithm via the PrismRCL application. Unlike standard classification datasets, LLM datasets require textual data formatted into input sequences and corresponding target tokens.
### Dataset Structure
For LLM training, the dataset structure differs significantly from traditional classification datasets:
```
pride-and-prejudice-rcl-mm/
train/
[class_token_1]/
values.txt
[class_token_2]/
values.txt
...
test/
[class_token_1]/
values.txt
[class_token_2]/
values.txt
...
```
- **Class tokens:** Folder names represent the target token for sequences.
- **values.txt:** Each line within `values.txt` files represents an individual input sequence mapping to the target token of its containing folder.
### LLM Data Preparation
PrismRCL requires LLM datasets to follow specific formatting distinct from classification tasks:
- Clean raw text data (removing overly long or non-printable characters).
- Create input sequences with a sliding-window method. For instance, a 4-token input sequence predicts the 5th token.
- Each input sequence is stored as a single line within the class-specific `values.txt` files.
**Example:**\
Original text: "It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife."
- Input: "It is a truth" → Target: "universally"
- Input: "is a truth universally" → Target: "acknowledged,"
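The cleaning step listed above ("removing overly long or non-printable characters") can be sketched roughly as follows. The threshold and the exact notion of "clean" are assumptions for illustration; tune them to your corpus.

```python
def clean_text(raw, max_line_len=1000):
    """Replace non-printable characters with spaces, collapse whitespace,
    and drop empty or overly long lines."""
    lines = []
    for line in raw.splitlines():
        line = "".join(ch if ch.isprintable() else " " for ch in line)
        line = " ".join(line.split())  # collapse runs of whitespace
        if line and len(line) <= max_line_len:
            lines.append(line)
    return "\n".join(lines)
```

The cleaned text is then tokenized and fed through the sliding-window step to produce the `values.txt` files.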
### Usage (LLM-specific)
Use PrismRCL's `llm` parameter for LLM-specific training:
```
C:\PrismRCL\PrismRCL.exe llm naivebayes directional rclticks=67 readtextbyline ^
data=C:\path\to\pride-and-prejudice-rcl-mm\train testdata=C:\path\to\pride-and-prejudice-rcl-mm\test ^
savemodel=C:\path\to\models\pride_prejudice_llm.classify ^
log=C:\path\to\log_files stopwhendone
```
### Explanation of Command
- **llm:** Specifies the dataset as an LLM training dataset.
- **naivebayes:** Evaluation method suitable for LLM data.
- **directional:** Maintains token order, essential for language modeling.
- **rclticks:** Sets RCL discretization granularity.
- **readtextbyline:** Treats each line in the text files as separate data samples.
- **data & testdata:** Paths to training and testing datasets.
- **savemodel:** Output path for the trained LLM model.
- **log:** Directory for storing log files.
- **stopwhendone:** Automatically terminates the session after training completion.
### License
This dataset is licensed under the MIT License.
### Original Source
Prepared explicitly by Lumina AI for RCL-based LLM training. Please credit Lumina AI when using this dataset in research or applications.
### Additional Information
Refer to the PrismRCL Technical Documentation v2.6.2 for more detailed guidance on LLM data preparation and parameter specifications.
|
mlfoundations-dev/d1_code_gpt_10k | mlfoundations-dev | 2025-05-01T17:14:46Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-01T17:08:15Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: instruction_seed
dtype: string
- name: output
dtype: string
- name: source
dtype: string
- name: license
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
- name: difficulty
dtype: int64
- name: solution
dtype: string
- name: index
dtype: string
- name: _source
dtype: string
- name: difficulty_reasoning
dtype: string
- name: __original_row_idx
dtype: int64
- name: ms_id
dtype: int64
- name: reasoning
sequence: string
- name: deepseek_solution
sequence: string
- name: final_reasoning_trace
sequence: string
- name: correct
sequence: bool
- name: classifier_reasoning
dtype: string
- name: _majority_responses
sequence: string
- name: verified_final_reasoning_trace
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 21030101607.27848
num_examples: 10000
download_size: 8567374107
dataset_size: 21030101607.27848
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|