datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card |
---|---|---|---|---|---|---|---|---|---|
SylvanL/Traditional-Chinese-Medicine-Dataset-SFT | SylvanL | 2024-10-26T10:47:40Z | 478 | 70 | [
"task_categories:table-question-answering",
"language:zh",
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us",
"medical"
] | [
"table-question-answering"
] | 2024-10-02T08:04:26Z | 2 | ---
license: apache-2.0
task_categories:
- table-question-answering
language:
- zh
tags:
- medical
size_categories:
- 1B<n<10B
---
# 启古纳今,厚德精术 (Drawing on the Ancient to Embrace the Present; Profound Virtue, Refined Skill)
---
## Dataset Introduction
## High-Quality Traditional Chinese Medicine Dataset from Non-Internet Sources - SFT/IFT
This dataset was carefully constructed with a substantial investment of human effort and resources, with the mission of helping to build a high-quality Chinese-language LLM community.
It contains roughly 1 GB of high-quality question-answer content across the various fields of traditional Chinese medicine (TCM): clinical cases, classical works by renowned physicians, medical encyclopedia entries, term explanations, and more, with comprehensive coverage and balanced proportions.
The dataset consists mainly of internal, non-internet-sourced data; 99% of the content is in Simplified Chinese, with excellent quality and considerable information density.
Its data sources are related to, but do not heavily overlap with, the content of SylvanL/Traditional-Chinese-Medicine-Dataset-Pretrain.
The two datasets were built step by step and are intended to complement each other.
This dataset can be used on its own, but it is recommended to first continue pretraining the model on the companion pretraining dataset and then use this dataset for further instruction fine-tuning.
The dataset contains only single-turn dialogues and covers only TCM and general medical knowledge, so users are strongly advised to mix in a certain proportion of Chinese common-sense and Chinese multi-turn dialogue data during training to avoid catastrophic forgetting. (Such data is not included here; "BAAI/COIG" and "m-a-p/COIG-CQIA" are possible choices.)
In addition, given the particularities of the TCM domain, users may also mix in a proportion of classical/literary Chinese <-> modern Chinese translation data, as their own needs dictate, to further strengthen model performance. (This data is provided below.)
All dataset files can be read directly by LLaMA-Factory using the information in "dataset_info.json" (which must sit in the same directory as the dataset files). Each file is a JSON list of dicts: [{"instruction":"...", "input":"...", "output":"..."},...]
For any questions, please contact: [email protected]
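For orientation, here is a minimal sketch of loading and inspecting one of the SFT files under the format described above (a JSON list of {"instruction", "input", "output"} dicts). The file name and `.json` extension are assumptions for illustration; point the path at whichever file you downloaded.

```python
import json

# One of the SFT files from this dataset, assumed to be downloaded locally
# with a .json extension; every file in the table below shares this layout.
path = "SFT_nlpSyndromeDiagnosed_48665.json"

with open(path, encoding="utf-8") as f:
    records = json.load(f)  # list of {"instruction", "input", "output"} dicts

print(f"loaded {len(records)} records")
sample = records[0]
print("instruction:", sample["instruction"])
print("input:", sample["input"][:200])   # case records can be long; truncate for display
print("output:", sample["output"])
```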
| **File name** | **Data details** | **Notes** | **Previously released?** |
|----------------------------------|----------------------------------|----------------------------------|----------------------------------|
| SFT_medicalKnowledge_source1_548404 | The source data consists of 17,921 structured entries from the "China Traditional Chinese Medicine Information Query Platform" database, covering encyclopedia entries on diseases, symptoms, medical cosmetology, drugs, Chinese medicinal materials, health products, formulas, medicinal diets, acupuncture points, terminology, and more. All source content was manually proofread and is of very high quality. It was converted locally into 548,404 QA entries by "Qwen2.5-14B", based on the prompt engineering in Appendix 4 below. | No minHash deduplication was performed, and none was deemed necessary. | Internet-sourced data, processed and proofread internally. |
| SFT_medicalKnowledge_source2_99334 | The source data consists of 12,889 explanations of conditions and terms covered by, but not limited to, ICD-10 terminology and the Chinese national TCM standards, together with detailed explanations of common formulas, Chinese medicinal materials, acupuncture point combinations, and other TCM terms. All source content was manually constructed and is of very high quality. It was converted into 99,334 QA entries via the "deepseek-chat" API, based on the prompt engineering in Appendix 4. | One round of minHash deduplication was performed (threshold=0.91596, num_perm=128); a second round was deemed unnecessary. | Internal data, never previously released. |
| SFT_medicalKnowledge_source3_556540 | The source data consists of 146,244 passages from 688 commonly used TCM textbooks, monographs, and classics, extracted from original files in PDF, Word, image, HTML, and CHM formats. The full bibliography is given in Appendix 1 of SylvanL/Traditional-Chinese-Medicine-Dataset-Pretrain. It was converted locally into 556,540 QA entries by "DeepSeek-V2-Lite-Chat", based on the prompt engineering in Appendix 4. | No minHash deduplication was performed, and none was deemed necessary. | Internet-sourced data, processed and proofread internally. |
| SFT_nlpDiseaseDiagnosed_61486 | The source data consists of 61,486 clinical consultation records from highly skilled licensed TCM physicians. The original content is high-quality natural-language case records containing everything a prescription requires: consultation guidance, symptoms, chief complaint, diagnosis, formulas, Chinese medicinals, etc. The content was entirely human-edited and is of very high quality. Samples were generated directly by rules, with the mapping: {'instruction': '基于输入的患者医案记录,直接给出你的疾病诊断,无需给出原因。', 'input': the natural-language patient case record, 'output': the diagnosed disease; see Appendix Table 2 for the full list} | No minHash deduplication was performed, and none was deemed necessary. | Internal data, never previously released. |
| SFT_nlpSyndromeDiagnosed_48665 | The source data consists of 48,665 clinical consultation records from a well-known professor. The original content is high-quality natural-language case records containing consultation guidance, symptoms, chief complaint, diagnosis, syndrome pattern, formulas, Chinese medicinals, and other prescription essentials. The content was entirely human-edited and is of very high quality. Samples were generated directly by rules, with the mapping: {'instruction': '基于输入的患者医案记录,直接给出你的证型诊断,无需给出原因。', 'input': the natural-language patient case record, 'output': the diagnosed syndrome pattern; see Appendix Table 1 for the full list} | No minHash deduplication was performed, and none was deemed necessary. | Internal data, never previously released. |
| SFT_structGeneral_310860 | The source data consists of partially structured case records obtained from the database of an internet hospital (1,592,410 records), with 310,860 remaining after deduplication. The content originates from online consultations, selected or typed in by the physicians themselves or their students. Samples were assembled by rule-based table joins and are of average quality. They were generated directly by rules, with the mapping: {'instruction': '基于输入的患者医案记录,直接给出你认为的【治疗方案】:中药、方剂,和【诊断】:证型。', 'input': the semi-structured patient case record, 'output': the semi-structured general diagnosis; see Appendix Table 3 for its composition} | One round of minHash deduplication was performed (threshold=0.618, num_perm=128); a second round may be applied as needed. | Internal data, never previously released. |
| SFT_structPrescription_92896 | The source data consists of 92,896 archived case records obtained from the EMR system of a TCM hospital, selected or typed in by the physicians themselves or their students during in-person consultations. Samples were assembled by rule-based table joins and are of average quality. They were generated directly by rules, with the mapping: {'instruction': '基于输入的患者医案记录,直接给出你认为的方剂中药组成。', 'input': the semi-structured patient case record, 'output': the semi-structured final prescription} | No minHash deduplication was performed, and none was deemed necessary. | Internal data, never previously released. |
| External dataset __SFT_traditionalTrans_1959542_ | Classical/literary Chinese <-> modern Chinese translation data, generated directly by rules, with the mappings: {"instruction": "将输入的古文翻译成现代文。", "input": "古文:", "output": "现代文:"} and {"instruction": "请为输入的现代文找到其对应的古文原文与出处。", "input": "现代文:", "output": "内容出自:" + "古文原文:"} | https://github.com/NiuTrans/Classical-Modern | Internet-sourced data, processed and proofread internally. |
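Because the files already follow the alpaca-style instruction/input/output layout, registering one of them with LLaMA-Factory only needs an entry in the `dataset_info.json` mentioned above, placed in the same directory as the data files. The sketch below writes such an entry; the dataset key `tcm_sft_syndrome` and the `.json` file name are illustrative assumptions, and the exact fields should be checked against your LLaMA-Factory version. General-purpose corpora such as BAAI/COIG can then be mixed in alongside this key in the training configuration, as recommended above.

```python
import json

# Minimal LLaMA-Factory registration for one SFT file.
# "tcm_sft_syndrome" is an arbitrary key; the file name is assumed.
dataset_info = {
    "tcm_sft_syndrome": {
        "file_name": "SFT_nlpSyndromeDiagnosed_48665.json",
        # Optional explicit column mapping; instruction/input/output are
        # already LLaMA-Factory's default alpaca-style column names.
        "columns": {
            "prompt": "instruction",
            "query": "input",
            "response": "output",
        },
    }
}

# dataset_info.json must sit next to the dataset files it refers to.
with open("dataset_info.json", "w", encoding="utf-8") as f:
    json.dump(dataset_info, f, ensure_ascii=False, indent=2)
```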
---
# Appendix Table 1: Distribution of syndrome-pattern diagnoses in SFT_nlpSyndromeDiagnosed_48665
| Syndrome pattern | Sample count |
| --- | --- |
| 气虚血瘀证 | 8303 |
| 湿热下注证 | 7843 |
| 气滞血瘀证 | 4791 |
| 风痰瘀阻证 | 3358 |
| 肝胃郁热证 | 2009 |
| 正虚瘀结证 | 1993 |
| 痰湿蕴肺证 | 1875 |
| 肝肾亏虚证 | 1457 |
| 热毒蕴结证 | 1292 |
| 脾肾两虚证 | 1022 |
| 气阴两虚证 | 860 |
| 湿热瘀阻证 | 622 |
| 肾虚证 | 593 |
| 肝胃不和证 | 546 |
| 风寒袭肺证 | 526 |
| 脾胃虚寒证 | 522 |
| 湿热蕴结证 | 520 |
| 阳虚水泛证 | 504 |
| 痰瘀痹阻证 | 477 |
| 心脉痹阻证 | 446 |
| 风痰上扰证 | 414 |
| 风寒外袭证 | 396 |
| 风痰入络证 | 344 |
| 瘀血阻络证 | 315 |
| 痰瘀互结证 | 309 |
| 阴虚阳亢证 | 299 |
| 痰浊蒙窍证 | 282 |
| 气血亏虚证 | 218 |
| 肝阳上亢证 | 209 |
| 痰湿蒙窍证 | 208 |
| 肝肾阴虚证 | 194 |
| 痰湿中阻证 | 185 |
| 阴虚火旺证 | 180 |
| 血瘀痰凝证 | 175 |
| 脾胃虚弱证 | 174 |
| 肝经湿热证 | 169 |
| 正虚毒瘀证 | 160 |
| 血热证 | 156 |
| 脾胃不和证 | 144 |
| 痰热壅肺证 | 138 |
| 阴虚血瘀证 | 130 |
| 湿热阻络证 | 125 |
| 肝郁气滞证 | 123 |
| 湿热内蕴证 | 112 |
| 心血虚证 | 110 |
| 寒凝血瘀证 | 99 |
| 脾肾阳虚证 | 94 |
| 阳虚血瘀证 | 93 |
| 风寒湿痹证 | 92 |
| 瘀阻心脉证 | 91 |
| 肝郁血瘀证 | 90 |
| 肝阴不足证 | 89 |
| 痰瘀滞络证 | 88 |
| 气虚不摄证 | 80 |
| 正虚毒结证 | 78 |
| 气滞证 | 75 |
| 肝气犯胃证 | 73 |
| 心肾不交证 | 72 |
| 瘀血阻滞证 | 71 |
| 饮食积滞证 | 70 |
| 气血不足证 | 69 |
| 脾虚湿盛证 | 68 |
| 脾胃积热证 | 68 |
| 风热犯肺证 | 66 |
| 肾虚水泛证 | 66 |
| 心肺气虚证 | 65 |
| 肺肾气虚证 | 63 |
| 痰浊内蕴证 | 62 |
| 气虚痰结证 | 60 |
| 风寒阻络证 | 60 |
| 气血两虚证 | 54 |
| 痰湿内阻证 | 54 |
| 痰火扰心证 | 54 |
| 津亏热结证 | 53 |
| 风寒袭表证 | 52 |
| 肺卫不固证 | 48 |
| 寒湿阻络证 | 46 |
| 肝郁脾虚证 | 45 |
| 血虚风燥证 | 43 |
| 肾气虚证 | 40 |
| 阴虚津亏证 | 37 |
| 大肠湿热证 | 37 |
| 肝胆湿热证 | 36 |
| 脾虚湿蕴证 | 36 |
| 热毒炽盛证 | 36 |
| 气血瘀滞证 | 34 |
| 阴虚血热证 | 33 |
| 肾虚肝亢证 | 33 |
| 脾气虚证 | 33 |
| 阴虚风动证 | 33 |
| 痰浊中阻证 | 32 |
| 风寒湿阻证 | 30 |
| 中气下陷证 | 29 |
| 风热毒蕴证 | 28 |
| 肾虚寒凝证 | 28 |
| 风热袭表证 | 28 |
| 湿热蕴毒证 | 27 |
| 脾胃气虚证 | 27 |
| 风湿蕴肤证 | 27 |
| 风寒痹阻证 | 27 |
| 气虚证 | 27 |
| 阳虚寒凝证 | 26 |
| 痰湿证 | 26 |
| 中气不足证 | 26 |
| 气滞痰凝证 | 25 |
| 心虚胆怯证 | 25 |
| 气郁痰阻证 | 24 |
| 痰蒙清窍证 | 24 |
| 风热伤络证 | 23 |
| 痰蒙神窍证 | 22 |
| 风热血热证 | 22 |
| 肝胃气滞证 | 22 |
| 血虚证 | 22 |
| 痰浊上蒙证 | 21 |
| 髓海不足证 | 20 |
| 脾胃阳虚证 | 20 |
| 风湿热证 | 20 |
| 热毒蕴肤证 | 18 |
| 痰湿阻滞证 | 18 |
| 肝肾亏损证 | 18 |
| 外伤损络证 | 18 |
| 痰热内扰证 | 18 |
| 气不摄血证 | 18 |
| 瘀滞胞宫证 | 17 |
| 肝肾两虚证 | 17 |
| 心阳不振证 | 17 |
| 肠燥津伤证 | 16 |
| 脾肾气虚证 | 16 |
| 风热外袭证 | 16 |
| 气虚血溢证 | 14 |
| 肾不纳气证 | 14 |
| 气阴亏虚证 | 13 |
| 肺阴虚证 | 13 |
| 心脾两虚证 | 13 |
| 湿毒蕴结证 | 12 |
| 风痰闭窍证 | 12 |
| 湿热瘀滞证 | 12 |
| 阴虚湿热证 | 11 |
| 寒痰阻肺证 | 11 |
| 热毒壅结证 | 11 |
| 痰气交阻证 | 11 |
| 毒邪流窜证 | 10 |
| 风痰闭阻证 | 10 |
| 脾胃湿热证 | 10 |
| 肾阳虚证 | 9 |
| 瘀血内阻证 | 9 |
| 阳虚痰凝证 | 9 |
| 气血失调证 | 9 |
# Appendix Table 2: Distribution of disease diagnoses in SFT_nlpDiseaseDiagnosed_61486
| Disease | Sample count |
| --- | --- |
| 咳嗽 | 14468 |
| 便秘 | 5395 |
| 失眠 | 4036 |
| 胃炎 | 3477 |
| 痛经 | 3252 |
| 高血压 | 2085 |
| 肝炎 | 1688 |
| 冠心病 | 1418 |
| 糖尿病 | 1143 |
| 肾炎 | 1072 |
| 紫癜 | 886 |
| 不孕症 | 814 |
| 痤疮 | 704 |
| 湿疹 | 612 |
| 胆囊炎 | 579 |
| 肝硬化 | 561 |
| 口腔溃疡 | 555 |
| 溃疡性结肠炎 | 535 |
| 呼吸道感染 | 492 |
| 心律失常 | 488 |
| 荨麻疹 | 465 |
| 鼻炎 | 450 |
| 前列腺炎 | 405 |
| 类风湿关节炎 | 398 |
| 盆腔炎 | 388 |
| 银屑病 | 385 |
| 子宫肌瘤 | 379 |
| 颈椎病 | 379 |
| 贫血 | 374 |
| 不育 | 373 |
| 脂肪肝 | 329 |
| 多囊卵巢综合征 | 323 |
| 类风湿性关节炎 | 320 |
| 更年期综合征 | 314 |
| 肾病综合征 | 311 |
| 乳腺增生 | 296 |
| 乳腺癌 | 293 |
| 咽炎 | 285 |
| 偏头痛 | 281 |
| 肠癌 | 276 |
| 糖尿病肾病 | 276 |
| 带状疱疹 | 262 |
| 抑郁症 | 260 |
| 胃癌 | 253 |
| 强直性脊柱炎 | 243 |
| 红斑狼疮 | 239 |
| 卵巢囊肿 | 201 |
| 腰椎间盘突出症 | 191 |
| 肝癌 | 191 |
| 肠易激综合征 | 189 |
| 小儿咳嗽 | 185 |
| 黄褐斑 | 185 |
| 肺炎 | 184 |
| IgA肾病 | 174 |
| 尿路感染 | 165 |
| 胃溃疡 | 165 |
| 月经失调 | 159 |
| 十二指肠溃疡 | 149 |
| 子宫内膜异位症 | 148 |
| 痔疮 | 144 |
| 肾功能衰竭 | 143 |
| 反流性食管炎 | 142 |
| 肺结核 | 138 |
| 支气管哮喘 | 128 |
| 痛风性关节炎 | 128 |
| 肺癌 | 123 |
| 功能失调性子宫出血 | 120 |
| 扁桃体炎 | 115 |
| 风湿性关节炎 | 115 |
| 骨质疏松 | 115 |
| 肾结石 | 109 |
| 脑梗塞 | 108 |
| 先兆流产 | 106 |
| 三叉神经痛 | 102 |
| 脑出血 | 101 |
| 乳腺炎 | 99 |
| 高脂血症 | 92 |
| 慢性肾衰竭 | 92 |
| 胆结石 | 91 |
| 帕金森病 | 91 |
| 皮肤瘙痒症 | 89 |
| 肺气肿 | 86 |
| 功能性消化不良 | 85 |
| 阴道炎 | 83 |
| 白癜风 | 81 |
| 胰腺炎 | 80 |
| 创伤 | 80 |
| 心绞痛 | 80 |
| 坐骨神经痛 | 78 |
| 风湿性心脏病 | 77 |
| 食管癌 | 77 |
| 泌尿系感染 | 75 |
| 精神分裂症 | 75 |
| 鼻咽癌 | 74 |
| 小儿腹泻 | 71 |
| 早产 | 70 |
| 胆囊结石 | 70 |
| 慢性肾功能不全 | 69 |
| 鼻窦炎 | 66 |
| 卵巢癌 | 66 |
| 小儿肺炎 | 66 |
| 慢性支气管炎 | 64 |
| 支气管扩张 | 60 |
| 梅尼埃病 | 59 |
| 急性肾小球肾炎 | 59 |
| 高尿酸血症 | 58 |
| 上消化道出血 | 58 |
| 小儿厌食症 | 58 |
| 焦虑症 | 57 |
| 低血压 | 57 |
| 小儿感冒 | 55 |
| 面神经炎 | 55 |
| 小儿哮喘 | 54 |
| 百日咳 | 54 |
| 胰腺癌 | 54 |
| 夜惊 | 53 |
| 肝囊肿 | 52 |
| 急性阑尾炎 | 50 |
| 白塞病 | 50 |
| 输尿管结石 | 47 |
| 心肌梗死 | 45 |
| 小儿癫痫 | 44 |
| 性早熟 | 44 |
| 视神经萎缩 | 43 |
| 消化性溃疡 | 43 |
| 尿毒症 | 41 |
| 支气管炎 | 41 |
| 附件炎 | 40 |
| 胸腔积液 | 40 |
| 先天性心脏病 | 38 |
| 多发性骨髓瘤 | 38 |
| 异位妊娠 | 37 |
| 腰肌劳损 | 34 |
| 心房颤动 | 34 |
| 脂溢性皮炎 | 32 |
| 细菌性痢疾 | 32 |
| 心功能不全 | 30 |
| 婴儿湿疹 | 29 |
| 急性白血病 | 29 |
| 宫颈炎 | 27 |
| 白内障 | 26 |
| 流行性腮腺炎 | 26 |
| 单纯性甲状腺肿 | 26 |
| 经前期综合征 | 25 |
| 慢性阻塞性肺疾病 | 24 |
| 垂体瘤 | 24 |
| 真性红细胞增多症 | 24 |
| 早产儿 | 24 |
| 排卵期出血 | 23 |
| 多发性神经炎 | 22 |
| 手足口病 | 21 |
| 尿道炎 | 20 |
| 慢性心力衰竭 | 17 |
| 冠状动脉粥样硬化性心脏病 | 15 |
| 肝脓肿 | 15 |
| 尿路结石 | 13 |
| 牙周炎 | 13 |
| 视神经炎 | 13 |
| 婴幼儿腹泻 | 12 |
| 慢性阻塞性肺病 | 12 |
| 急性咽炎 | 12 |
| 小儿多动症 | 11 |
| 阿尔茨海默病 | 11 |
| 呼吸衰竭 | 10 |
| 肝性脑病 | 10 |
| 二尖瓣狭窄 | 9 |
| 代谢综合征 | 9 |
| 雷诺综合征 | 9 |
| 新生儿黄疸 | 9 |
| 骨肿瘤 | 8 |
| 慢性肺源性心脏病 | 8 |
| 肺性脑病 | 8 |
| 小儿疝气 | 7 |
| 急性化脓性中耳炎 | 6 |
| 新生儿肺炎 | 6 |
| 自闭症 | 6 |
| 慢性喉炎 | 6 |
| 血管性水肿 | 6 |
| 血友病 | 6 |
| 慢性粒细胞白血病 | 5 |
| 唇癌 | 4 |
| 缺血性结肠炎 | 3 |
| 脊髓灰质炎 | 3 |
| 急性呼吸窘迫综合征 | 3 |
| 儿童孤独症 | 3 |
| 细菌性阴道病 | 2 |
| 小儿惊厥 | 2 |
| 唇腭裂 | 2 |
| 弥散性血管内凝血 | 2 |
| 小儿支气管炎 | 2 |
| 肥厚型心肌病 | 2 |
| 淹溺 | 1 |
| 肺大疱 | 1 |
| 急性呼吸衰竭 | 1 |
# Appendix Table 3: Distribution of instruction-following templates in SFT_structGeneral_310849
| Instruction-following template | Sample count |
| --- | --- |
| 【治疗方案】:中药、方剂,和【诊断】:证型 | 105788 |
| 【治疗方案】:中药、方剂 | 68781 |
| 【治疗方案】:中药 | 45828 |
| 【治疗方案】:中药,和【诊断】:证型 | 38434 |
| 【治疗方案】:中药、方剂,和【诊断】:证型、治法 | 23229 |
| 【治疗方案】:中药、方剂,和【诊断】:西医诊断 | 12187 |
| 【治疗方案】:中药、方剂,和【诊断】:西医诊断、中医诊断 | 3715 |
| 【治疗方案】:中药,和【诊断】:西医诊断 | 2782 |
| 【治疗方案】:中药,和【诊断】:证型、治法 | 1909 |
| 【诊断】:证型 | 1839 |
| 【治疗方案】:中药、方剂,和【诊断】:证型、治法、中医诊断 | 1467 |
| 【治疗方案】:中药、方剂,和【诊断】:中医诊断 | 1376 |
| 【治疗方案】:中药,和【诊断】:西医诊断、中医诊断 | 1250 |
| 【治疗方案】:中药,和【诊断】:中医诊断 | 508 |
| 【诊断】:西医诊断 | 467 |
| 【治疗方案】:中药,和【诊断】:证型、治法、中医诊断 | 378 |
| 【治疗方案】:中药、方剂,和【诊断】:证型、中医诊断 | 359 |
| 【治疗方案】:中药,和【诊断】:证型、中医诊断 | 337 |
| 【诊断】:证型、治法 | 47 |
| 【诊断】:中医诊断 | 39 |
| 【诊断】:西医诊断、中医诊断 | 38 |
| 【治疗方案】:中药、成药 | 34 |
| 【治疗方案】:中药、成药、方剂 | 13 |
| 【诊断】:证型、治法、中医诊断 | 12 |
| 【治疗方案】:中药、成药,和【诊断】:证型、治法 | 10 |
| 【治疗方案】:中药、方剂,和【诊断】:治法 | 8 |
| 【治疗方案】:中药、成药,和【诊断】:西医诊断 | 6 |
| 【治疗方案】:中药、成药、方剂,和【诊断】:证型、治法 | 4 |
| 【治疗方案】:中药、方剂,和【诊断】:治法、中医诊断 | 3 |
| 【治疗方案】:中药、成药,和【诊断】:西医诊断、中医诊断 | 3 |
| 【治疗方案】:中药、成药、方剂,和【诊断】:西医诊断、中医诊断 | 2 |
| 【治疗方案】:中药、成药,和【诊断】:中医诊断 | 2 |
| 【诊断】:证型、中医诊断 | 1 |
| 【诊断】:治法 | 1 |
| 【治疗方案】:中药、成药、方剂,和【诊断】:中医诊断 | 1 |
| 【治疗方案】:中药、成药,和【诊断】:证型、治法、中医诊断 | 1 |
| 【治疗方案】:中药,和【诊断】:治法 | 1 |
# Appendix 4: Prompt template
```
模块目标:通过改写和优化现有的医疗问答训练数据,以面向下游任务构建指令跟随微调训练集。
输入:单个问答对,格式为单个dict,JSON格式形如:{问题: "...", 答案: "..."}
输出:多个问答对,数量为N个,格式为由N个dict的list,JSON格式形如:[{"问题": "...", "答案": "..."}, {"问题": "...", "答案": "..."}, {"问题": "...", "答案": "..."}, ...]
模块功能:
{
1. 简单输入处理:
对于简单的答案,重点在于优化答案的质量,确保自然语言流畅性。并围绕相似的答案生成不同的问题变体。输出数量:N=2。
2. 复杂输入处理:
对于复杂的答案,重点在于合理拆解答案内容,将长答案分解为多个部分。并针对每个不同的部分生成与其配套的问题。输出数量:N=4。
3. 格式要求:确保输出为合法的JSON格式,所有key和value均需用双引号包裹。字典用花括号 {},数组用方括号 [],并确保所有括号正确闭合。
4. 评估标准: 一切改写和优化应以提升下游模型在医疗垂直领域的任务上的表现为唯一评估标准。
}
``` |
bigcode/MultiPL-E-completions | bigcode | 2024-10-24T08:10:53Z | 20,206 | 9 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"code"
] | [] | 2023-04-22T14:01:20Z | null | ---
pretty_name: MultiPL-E generated programs and execution results
dataset_info:
features:
- name: experiment
dtype: string
- name: problem
dtype: string
- name: language
dtype: string
- name: top_p
dtype: float64
- name: max_tokens
dtype: int64
- name: prompt
dtype: string
- name: tests
dtype: string
- name: stop_tokens
sequence: string
- name: completions
sequence: string
- name: programs
sequence: string
- name: stdouts
sequence: string
- name: stderrs
sequence: string
- name: exit_codes
sequence: int64
- name: statuses
sequence: string
- name: timestamps
sequence: int64
splits:
- name: humaneval.py.bigcode_15b_800m.0.2.reworded
num_bytes: 50941974
num_examples: 161
- name: humaneval.py.bigcode_15b_200m.0.2.reworded
num_bytes: 57850786
num_examples: 161
- name: humaneval.py.bigcode_15b_400m.0.2.reworded
num_bytes: 52404545
num_examples: 161
- name: humaneval.py.bigcode_15b_600m.0.2.reworded
num_bytes: 55071293
num_examples: 161
- name: humaneval.rkt.bigcode_15b_800m.0.2.reworded
num_bytes: 77194321
num_examples: 161
- name: humaneval.py.chatgpt.0.2.reworded
num_bytes: 5473126
num_examples: 161
- name: humaneval.r.bigcode_15b_800m.0.2.reworded
num_bytes: 73160389
num_examples: 161
- name: humaneval.r.bigcode_15b_1000m.0.2.reworded
num_bytes: 63088872
num_examples: 161
- name: humaneval.r.bigcode_15b_200m.0.2.reworded
num_bytes: 77532102
num_examples: 161
- name: humaneval.r.bigcode_15b_400m.0.2.reworded
num_bytes: 80103073
num_examples: 161
- name: humaneval.r.bigcode_15b_600m.0.2.reworded
num_bytes: 76123407
num_examples: 161
- name: humaneval.py.bigcode_15b_1000m.0.2.reworded
num_bytes: 47282373
num_examples: 161
- name: humaneval.py.bigcode_15b_1000m.0.1.reworded
num_bytes: 11724023
num_examples: 161
- name: humaneval.py.bigcode_15b_1000m.0.4.reworded
num_bytes: 12167610
num_examples: 161
- name: humaneval.py.bigcode_15b_1000m.0.6.reworded
num_bytes: 12344190
num_examples: 161
- name: humaneval.py.bigcode_15b_1000m.0.8.reworded
num_bytes: 12825651
num_examples: 161
- name: humaneval.py.codegeex.0.2.reworded
num_bytes: 49547494
num_examples: 161
- name: humaneval.py.codegen.0.2.reworded
num_bytes: 55391634
num_examples: 161
- name: humaneval.py.Salesforce_codegen_16B_mono.0.2.reworded
num_bytes: 54757013
num_examples: 161
- name: humaneval.py.cushman001.0.2.reworded
num_bytes: 5081696
num_examples: 161
- name: humaneval.js.pystarcoder2.0.2.reworded
num_bytes: 6784781
num_examples: 161
- name: humaneval.py.santacoder.0.2.reworded
num_bytes: 57098223
num_examples: 161
- name: humaneval.py.santacoder_fim_prompt.0.2.reworded
num_bytes: 5473782
num_examples: 161
- name: humaneval.lua.bigcode_15b_1000m.0.2.reworded
num_bytes: 53170918
num_examples: 161
- name: mbpp.py.bigcode_15b_1000m.0.2.reworded
num_bytes: 76438518
num_examples: 397
- name: mbpp.py.pystarcoder2.0.2.reworded
num_bytes: 78424728
num_examples: 397
- name: mbpp.lua.davinci.0.2.reworded
num_bytes: 82574073
num_examples: 401
- name: humaneval.js.davinci.0.2.transform
num_bytes: 61113074
num_examples: 161
- name: humaneval.py.davinci.0.2.transform
num_bytes: 46653237
num_examples: 161
- name: humaneval.ts.davinci.0.2.transform
num_bytes: 60332695
num_examples: 159
- name: humaneval.ts.davinci.0.2.reworded
num_bytes: 60256365
num_examples: 159
- name: humaneval.cpp.davinci.0.2.reworded
num_bytes: 67044215
num_examples: 159
- name: humaneval.cs.davinci.0.2.reworded
num_bytes: 103369582
num_examples: 156
- name: humaneval.d.davinci.0.2.reworded
num_bytes: 66641398
num_examples: 156
- name: humaneval.go.davinci.0.2.reworded
num_bytes: 71050586
num_examples: 154
- name: humaneval.java.davinci.0.2.reworded
num_bytes: 71969268
num_examples: 156
- name: humaneval.jl.davinci.0.2.reworded
num_bytes: 76515733
num_examples: 159
- name: humaneval.js.davinci.0.2.reworded
num_bytes: 61674621
num_examples: 161
- name: humaneval.lua.davinci.0.2.reworded
num_bytes: 54016568
num_examples: 161
- name: humaneval.php.davinci.0.2.reworded
num_bytes: 61403511
num_examples: 161
- name: humaneval.pl.davinci.0.2.reworded
num_bytes: 60281076
num_examples: 161
- name: humaneval.py.davinci.0.2.reworded
num_bytes: 48223052
num_examples: 161
- name: humaneval.rb.davinci.0.2.reworded
num_bytes: 65236003
num_examples: 161
- name: humaneval.r.davinci.0.2.reworded
num_bytes: 48031115
num_examples: 161
- name: humaneval.rkt.davinci.0.2.reworded
num_bytes: 66273931
num_examples: 161
- name: humaneval.rs.davinci.0.2.reworded
num_bytes: 56522566
num_examples: 156
- name: humaneval.scala.davinci.0.2.reworded
num_bytes: 84153007
num_examples: 158
- name: humaneval.sh.davinci.0.2.reworded
num_bytes: 58089427
num_examples: 158
- name: humaneval.swift.davinci.0.2.reworded
num_bytes: 61663115
num_examples: 158
- name: humaneval.java.bigcode_15b_1000m.0.2.reworded
num_bytes: 75580922
num_examples: 158
- name: humaneval.java.bigcode_15b_200m.0.2.reworded
num_bytes: 84307515
num_examples: 158
- name: humaneval.java.bigcode_15b_400m.0.2.reworded
num_bytes: 85145220
num_examples: 158
- name: humaneval.java.bigcode_15b_600m.0.2.reworded
num_bytes: 80223655
num_examples: 158
- name: humaneval.java.bigcode_15b_800m.0.2.reworded
num_bytes: 78345454
num_examples: 158
- name: humaneval.java.codegeex.0.2.reworded
num_bytes: 74859734
num_examples: 158
- name: humaneval.java.codegen.0.2.reworded
num_bytes: 76705002
num_examples: 158
- name: humaneval.java.cushman001.0.2.reworded
num_bytes: 7860926
num_examples: 158
- name: humaneval.java.replit_code.0.2.reworded
num_bytes: 20396812
num_examples: 158
- name: humaneval.java.pystarcoder2.0.2.reworded
num_bytes: 73528078
num_examples: 158
- name: humaneval.java.davinci.0.2.keep
num_bytes: 71389774
num_examples: 161
- name: humaneval.java.davinci.0.2.remove
num_bytes: 64493441
num_examples: 157
- name: humaneval.java.davinci.0.2.transform
num_bytes: 72660740
num_examples: 156
- name: humaneval.java.davinci.0.8.keep
num_bytes: 72358729
num_examples: 161
- name: humaneval.java.davinci.0.8.reworded
num_bytes: 75043862
num_examples: 156
- name: humaneval.java.incoder.0.2.keep
num_bytes: 110176353
num_examples: 161
- name: humaneval.java.incoder.0.2.remove
num_bytes: 88903768
num_examples: 157
- name: humaneval.java.incoder.0.2.reworded
num_bytes: 109021885
num_examples: 156
- name: humaneval.java.incoder.0.2.transform
num_bytes: 107183302
num_examples: 156
- name: humaneval.java.incoder.0.8.keep
num_bytes: 75299144
num_examples: 139
- name: humaneval.java.incoder.0.8.reworded
num_bytes: 100533855
num_examples: 158
- name: mbpp.java.codegen.0.2.reworded
num_bytes: 144592215
num_examples: 373
- name: mbpp.java.codegen.0.8.reworded
num_bytes: 47521423
num_examples: 120
- name: mbpp.java.davinci.0.2.keep
num_bytes: 135567713
num_examples: 373
- name: mbpp.java.davinci.0.2.reworded
num_bytes: 136848151
num_examples: 373
- name: mbpp.java.incoder.0.2.reworded
num_bytes: 225046095
num_examples: 373
- name: mbpp.java.incoder.0.8.reworded
num_bytes: 167923488
num_examples: 373
- name: humaneval.cpp.davinci.0.2.keep
num_bytes: 63794632
num_examples: 161
- name: humaneval.cpp.davinci.0.2.remove
num_bytes: 58355394
num_examples: 158
- name: humaneval.cpp.davinci.0.2.transform
num_bytes: 66852210
num_examples: 159
- name: humaneval.cpp.davinci.0.8.keep
num_bytes: 61668425
num_examples: 161
- name: humaneval.cpp.bigcode_15b_1000m.0.2.reworded
num_bytes: 67353068
num_examples: 161
- name: humaneval.cpp.bigcode_15b_200m.0.2.reworded
num_bytes: 73914809
num_examples: 161
- name: humaneval.cpp.bigcode_15b_400m.0.2.reworded
num_bytes: 68514672
num_examples: 161
- name: humaneval.cpp.bigcode_15b_600m.0.2.reworded
num_bytes: 70059227
num_examples: 161
- name: humaneval.cpp.bigcode_15b_800m.0.2.reworded
num_bytes: 69289473
num_examples: 161
- name: humaneval.cpp.codegeex.0.2.reworded
num_bytes: 70250543
num_examples: 161
- name: humaneval.cpp.codegen.0.2.reworded
num_bytes: 65355449
num_examples: 161
- name: humaneval.cpp.cushman001.0.2.reworded
num_bytes: 6878097
num_examples: 161
- name: humaneval.cpp.replit_code.0.2.reworded
num_bytes: 18647873
num_examples: 161
- name: humaneval.cs.bigcode_15b_1000m.0.2.reworded
num_bytes: 115265463
num_examples: 158
- name: humaneval.cs.bigcode_15b_200m.0.2.reworded
num_bytes: 128116325
num_examples: 158
- name: humaneval.cs.bigcode_15b_400m.0.2.reworded
num_bytes: 116443233
num_examples: 158
- name: humaneval.cs.bigcode_15b_600m.0.2.reworded
num_bytes: 110736924
num_examples: 158
- name: humaneval.cs.bigcode_15b_800m.0.2.reworded
num_bytes: 116921504
num_examples: 158
- name: humaneval.cs.codegeex.0.2.reworded
num_bytes: 108831398
num_examples: 158
- name: humaneval.cs.codegen.0.2.reworded
num_bytes: 115085420
num_examples: 158
- name: humaneval.cs.cushman001.0.2.reworded
num_bytes: 11455476
num_examples: 158
- name: humaneval.cs.replit_code.0.2.reworded
num_bytes: 29978496
num_examples: 158
- name: humaneval.d.bigcode_15b_1000m.0.2.reworded
num_bytes: 69856838
num_examples: 156
- name: humaneval.d.bigcode_15b_200m.0.2.reworded
num_bytes: 69168908
num_examples: 156
- name: humaneval.d.bigcode_15b_400m.0.2.reworded
num_bytes: 66130665
num_examples: 156
- name: humaneval.d.bigcode_15b_600m.0.2.reworded
num_bytes: 60081870
num_examples: 156
- name: humaneval.d.bigcode_15b_800m.0.2.reworded
num_bytes: 68285500
num_examples: 156
- name: humaneval.d.codegeex.0.2.reworded
num_bytes: 67554723
num_examples: 156
- name: humaneval.d.codegen.0.2.reworded
num_bytes: 69538065
num_examples: 156
- name: humaneval.d.cushman001.0.2.reworded
num_bytes: 6543145
num_examples: 156
- name: humaneval.d.replit_code.0.2.reworded
num_bytes: 19332975
num_examples: 156
- name: humaneval.go.bigcode_15b_1000m.0.2.reworded
num_bytes: 75966586
num_examples: 154
- name: humaneval.go.bigcode_15b_200m.0.2.reworded
num_bytes: 90496893
num_examples: 154
- name: humaneval.go.bigcode_15b_400m.0.2.reworded
num_bytes: 80263304
num_examples: 154
- name: humaneval.go.bigcode_15b_600m.0.2.reworded
num_bytes: 80653936
num_examples: 154
- name: humaneval.go.bigcode_15b_800m.0.2.reworded
num_bytes: 79636433
num_examples: 154
- name: humaneval.go.codegeex.0.2.reworded
num_bytes: 74466402
num_examples: 154
- name: humaneval.go.codegen.0.2.reworded
num_bytes: 82565036
num_examples: 154
- name: humaneval.go.cushman001.0.2.reworded
num_bytes: 7919252
num_examples: 154
- name: humaneval.go.replit_code.0.2.reworded
num_bytes: 21740421
num_examples: 154
- name: humaneval.jl.bigcode_15b_1000m.0.2.reworded
num_bytes: 64541752
num_examples: 159
- name: humaneval.jl.bigcode_15b_200m.0.2.reworded
num_bytes: 64272523
num_examples: 159
- name: humaneval.jl.bigcode_15b_400m.0.2.reworded
num_bytes: 84674386
num_examples: 159
- name: humaneval.jl.bigcode_15b_600m.0.2.reworded
num_bytes: 83951098
num_examples: 159
- name: humaneval.jl.bigcode_15b_800m.0.2.reworded
num_bytes: 71891875
num_examples: 159
- name: humaneval.jl.codegeex.0.2.reworded
num_bytes: 49376484
num_examples: 159
- name: humaneval.jl.codegen.0.2.reworded
num_bytes: 49686685
num_examples: 159
- name: humaneval.jl.cushman001.0.2.reworded
num_bytes: 5594623
num_examples: 159
- name: humaneval.jl.replit_code.0.2.reworded
num_bytes: 12432167
num_examples: 159
- name: humaneval.js.bigcode_15b_1000m.0.2.reworded
num_bytes: 63930510
num_examples: 161
- name: humaneval.js.bigcode_15b_200m.0.2.reworded
num_bytes: 71006276
num_examples: 161
- name: humaneval.js.bigcode_15b_400m.0.2.reworded
num_bytes: 71141641
num_examples: 161
- name: humaneval.js.bigcode_15b_600m.0.2.reworded
num_bytes: 66406645
num_examples: 161
- name: humaneval.js.bigcode_15b_800m.0.2.reworded
num_bytes: 65906688
num_examples: 161
- name: humaneval.js.codegeex.0.2.reworded
num_bytes: 68965171
num_examples: 161
- name: humaneval.js.codegen.0.2.reworded
num_bytes: 71850674
num_examples: 161
- name: humaneval.js.cushman001.0.2.reworded
num_bytes: 6756809
num_examples: 161
- name: humaneval.js.replit_code.0.2.reworded
num_bytes: 20658701
num_examples: 161
- name: humaneval.lua.bigcode_15b_200m.0.2.reworded
num_bytes: 56733662
num_examples: 161
- name: humaneval.lua.bigcode_15b_400m.0.2.reworded
num_bytes: 57525953
num_examples: 161
- name: humaneval.lua.bigcode_15b_600m.0.2.reworded
num_bytes: 53575875
num_examples: 161
- name: humaneval.lua.bigcode_15b_800m.0.2.reworded
num_bytes: 54309789
num_examples: 161
- name: humaneval.lua.codegeex.0.2.reworded
num_bytes: 53766400
num_examples: 161
- name: humaneval.lua.codegen.0.2.reworded
num_bytes: 63642889
num_examples: 161
- name: humaneval.lua.cushman001.0.2.reworded
num_bytes: 5726991
num_examples: 161
- name: humaneval.lua.replit_code.0.2.reworded
num_bytes: 14458988
num_examples: 161
- name: humaneval.php.bigcode_15b_1000m.0.2.reworded
num_bytes: 62087493
num_examples: 161
- name: humaneval.php.bigcode_15b_200m.0.2.reworded
num_bytes: 67992787
num_examples: 161
- name: humaneval.php.bigcode_15b_400m.0.2.reworded
num_bytes: 65415347
num_examples: 161
- name: humaneval.php.bigcode_15b_600m.0.2.reworded
num_bytes: 64025429
num_examples: 161
- name: humaneval.php.bigcode_15b_800m.0.2.reworded
num_bytes: 67914229
num_examples: 161
- name: humaneval.php.codegeex.0.2.reworded
num_bytes: 63599818
num_examples: 161
- name: humaneval.php.codegen.0.2.reworded
num_bytes: 71759630
num_examples: 161
- name: humaneval.php.cushman001.0.2.reworded
num_bytes: 6680669
num_examples: 161
- name: humaneval.php.replit_code.0.2.reworded
num_bytes: 18347062
num_examples: 161
- name: humaneval.pl.bigcode_15b_1000m.0.2.reworded
num_bytes: 69839042
num_examples: 161
- name: humaneval.pl.bigcode_15b_200m.0.2.reworded
num_bytes: 79671308
num_examples: 161
- name: humaneval.pl.bigcode_15b_400m.0.2.reworded
num_bytes: 78788842
num_examples: 161
- name: humaneval.pl.bigcode_15b_600m.0.2.reworded
num_bytes: 69916889
num_examples: 161
- name: humaneval.pl.bigcode_15b_800m.0.2.reworded
num_bytes: 73552220
num_examples: 161
- name: humaneval.pl.codegeex.0.2.reworded
num_bytes: 72617126
num_examples: 161
- name: humaneval.pl.codegen.0.2.reworded
num_bytes: 74351768
num_examples: 161
- name: humaneval.pl.cushman001.0.2.reworded
num_bytes: 7317844
num_examples: 161
- name: humaneval.pl.replit_code.0.2.reworded
num_bytes: 23014112
num_examples: 161
- name: humaneval.py.bigcode_15b_200m.0.8.reworded
num_bytes: 55679581
num_examples: 161
- name: humaneval.py.bigcode_15b_400m.0.8.reworded
num_bytes: 49813429
num_examples: 161
- name: humaneval.py.replit_code.0.2.reworded
num_bytes: 16222771
num_examples: 161
- name: humaneval.py.starcoder.0.8.reworded
num_bytes: 50428866
num_examples: 161
- name: humaneval.py.starcoderprompted.0.1.reworded
num_bytes: 53971758
num_examples: 161
- name: humaneval.rb.bigcode_15b_1000m.0.2.reworded
num_bytes: 67446763
num_examples: 161
- name: humaneval.rb.bigcode_15b_200m.0.2.reworded
num_bytes: 70571683
num_examples: 161
- name: humaneval.rb.bigcode_15b_400m.0.2.reworded
num_bytes: 67565830
num_examples: 161
- name: humaneval.rb.bigcode_15b_600m.0.2.reworded
num_bytes: 71419194
num_examples: 161
- name: humaneval.rb.bigcode_15b_800m.0.2.reworded
num_bytes: 69995749
num_examples: 161
- name: humaneval.rb.codegeex.0.2.reworded
num_bytes: 63388920
num_examples: 161
- name: humaneval.rb.codegen.0.2.reworded
num_bytes: 68918022
num_examples: 161
- name: humaneval.rb.cushman001.0.2.reworded
num_bytes: 7084615
num_examples: 161
- name: humaneval.rb.replit_code.0.2.reworded
num_bytes: 17797810
num_examples: 161
- name: humaneval.r.codegeex.0.2.reworded
num_bytes: 64172735
num_examples: 161
- name: humaneval.r.codegen.0.2.reworded
num_bytes: 75777642
num_examples: 161
- name: humaneval.r.cushman001.0.2.reworded
num_bytes: 6509329
num_examples: 161
- name: humaneval.rkt.bigcode_15b_1000m.0.2.reworded
num_bytes: 71049799
num_examples: 161
- name: humaneval.rkt.bigcode_15b_200m.0.2.reworded
num_bytes: 72642020
num_examples: 161
- name: humaneval.rkt.bigcode_15b_400m.0.2.reworded
num_bytes: 73564249
num_examples: 161
- name: humaneval.rkt.bigcode_15b_600m.0.2.reworded
num_bytes: 73730273
num_examples: 161
- name: humaneval.rkt.codegeex.0.2.reworded
num_bytes: 70940774
num_examples: 161
- name: humaneval.rkt.codegen.0.2.reworded
num_bytes: 90161741
num_examples: 161
- name: humaneval.rkt.cushman001.0.2.reworded
num_bytes: 6030454
num_examples: 161
- name: humaneval.rkt.replit_code.0.2.reworded
num_bytes: 18423402
num_examples: 161
- name: humaneval.r.replit_code.0.2.reworded
num_bytes: 19677779
num_examples: 161
- name: humaneval.rs.bigcode_15b_1000m.0.2.reworded
num_bytes: 59702550
num_examples: 156
- name: humaneval.rs.bigcode_15b_200m.0.2.reworded
num_bytes: 67759741
num_examples: 156
- name: humaneval.rs.bigcode_15b_400m.0.2.reworded
num_bytes: 68044357
num_examples: 156
- name: humaneval.rs.bigcode_15b_600m.0.2.reworded
num_bytes: 54658037
num_examples: 156
- name: humaneval.rs.bigcode_15b_800m.0.2.reworded
num_bytes: 62854949
num_examples: 156
- name: humaneval.rs.codegeex.0.2.reworded
num_bytes: 67976569
num_examples: 156
- name: humaneval.rs.codegen.0.2.reworded
num_bytes: 82843583
num_examples: 156
- name: humaneval.rs.cushman001.0.2.reworded
num_bytes: 6310774
num_examples: 156
- name: humaneval.rs.replit_code.0.2.reworded
num_bytes: 17624999
num_examples: 156
- name: humaneval.scala.bigcode_15b_1000m.0.2.reworded
num_bytes: 84533532
num_examples: 160
- name: humaneval.scala.bigcode_15b_800m.0.2.reworded
num_bytes: 87023720
num_examples: 160
- name: humaneval.scala.codegeex.0.2.reworded
num_bytes: 79619828
num_examples: 160
- name: humaneval.scala.codegen.0.2.reworded
num_bytes: 128885303
num_examples: 160
- name: humaneval.scala.cushman001.0.2.reworded
num_bytes: 8500865
num_examples: 160
- name: humaneval.scala.replit_code.0.2.reworded
num_bytes: 22458222
num_examples: 160
- name: humaneval.sh.bigcode_15b_1000m.0.2.reworded
num_bytes: 62768941
num_examples: 158
- name: humaneval.sh.bigcode_15b_200m.0.2.reworded
num_bytes: 75630478
num_examples: 158
- name: humaneval.sh.bigcode_15b_400m.0.2.reworded
num_bytes: 77050658
num_examples: 158
- name: humaneval.sh.bigcode_15b_600m.0.2.reworded
num_bytes: 65325746
num_examples: 158
- name: humaneval.swift.bigcode_15b_600m.0.2.reworded
num_bytes: 70424335
num_examples: 158
- name: humaneval.scala.bigcode_15b_200m.0.2.reworded
num_bytes: 89054581
num_examples: 160
- name: humaneval.scala.bigcode_15b_400m.0.2.reworded
num_bytes: 83343360
num_examples: 160
- name: humaneval.scala.bigcode_15b_600m.0.2.reworded
num_bytes: 89752223
num_examples: 160
- name: humaneval.sh.bigcode_15b_800m.0.2.reworded
num_bytes: 66811937
num_examples: 158
- name: humaneval.sh.codegeex.0.2.reworded
num_bytes: 65196768
num_examples: 158
- name: humaneval.sh.codegen.0.2.reworded
num_bytes: 99280481
num_examples: 158
- name: humaneval.sh.cushman001.0.2.reworded
num_bytes: 6237965
num_examples: 158
- name: humaneval.sh.replit_code.0.2.reworded
num_bytes: 18134838
num_examples: 158
- name: humaneval.swift.bigcode_15b_1000m.0.2.reworded
num_bytes: 68129948
num_examples: 158
- name: humaneval.swift.bigcode_15b_200m.0.2.reworded
num_bytes: 76924134
num_examples: 158
- name: humaneval.swift.bigcode_15b_400m.0.2.reworded
num_bytes: 72042977
num_examples: 158
- name: humaneval.swift.bigcode_15b_800m.0.2.reworded
num_bytes: 70027106
num_examples: 158
- name: humaneval.swift.codegeex.0.2.reworded
num_bytes: 73605273
num_examples: 158
- name: humaneval.swift.codegen.0.2.reworded
num_bytes: 76081675
num_examples: 158
- name: humaneval.swift.cushman001.0.2.reworded
num_bytes: 6766506
num_examples: 158
- name: humaneval.swift.replit_code.0.2.reworded
num_bytes: 21605861
num_examples: 158
- name: humaneval.ts.bigcode_15b_1000m.0.2.reworded
num_bytes: 61005831
num_examples: 159
- name: humaneval.ts.bigcode_15b_200m.0.2.reworded
num_bytes: 68875546
num_examples: 159
- name: humaneval.ts.bigcode_15b_400m.0.2.reworded
num_bytes: 62805583
num_examples: 159
- name: humaneval.ts.bigcode_15b_600m.0.2.reworded
num_bytes: 53733690
num_examples: 159
- name: humaneval.ts.bigcode_15b_800m.0.2.reworded
num_bytes: 64371975
num_examples: 159
- name: humaneval.ts.codegeex.0.2.reworded
num_bytes: 58487751
num_examples: 159
- name: humaneval.ts.codegen.0.2.reworded
num_bytes: 69981611
num_examples: 159
- name: humaneval.ts.cushman001.0.2.reworded
num_bytes: 6768589
num_examples: 159
- name: humaneval.ts.replit_code.0.2.reworded
num_bytes: 18208741
num_examples: 159
- name: mbpp.py.codegeex.0.2.reworded
num_bytes: 86906502
num_examples: 397
- name: mbpp.py.codegen.0.2.reworded
num_bytes: 92562493
num_examples: 397
- name: mbpp.py.cushman001.0.2.reworded
num_bytes: 7629346
num_examples: 397
- name: mbpp.py.pystarcoder2.0.1.reworded
num_bytes: 142442817
num_examples: 397
- name: mbpp.py.Salesforce_codegen_16B_mono.0.2.reworded
num_bytes: 86067040
num_examples: 397
- name: humaneval.matlab.keep.gpt_35_turbo_0301.0.2.reworded
num_bytes: 4837906
num_examples: 161
- name: humaneval.cpp.codellama_13b_base.0.2.reworded
num_bytes: 17224400
num_examples: 161
- name: humaneval.cpp.codellama_7b_base.0.2.reworded
num_bytes: 16947382
num_examples: 161
- name: humaneval.cpp.deepseekcoder_1.3b_base.0.2.reworded
num_bytes: 17349817
num_examples: 161
- name: humaneval.cpp.deepseekcoder1.5_7b_base.0.2.reworded
num_bytes: 16452450
num_examples: 161
- name: humaneval.cpp.stablecode3b.0.2.reworded
num_bytes: 67319279
num_examples: 161
- name: humaneval.cpp.StarCoder2_15b_16k.0.2.reworded
num_bytes: 16464290
num_examples: 161
- name: humaneval.cpp.starcoder2_3b_long.0.2.reworded
num_bytes: 6912758
num_examples: 161
- name: humaneval.cpp.StarCoder2_7b_16k.0.2.reworded
num_bytes: 16812656
num_examples: 161
- name: humaneval.cs.codellama_13b_base.0.2.reworded
num_bytes: 27515677
num_examples: 158
- name: humaneval.cs.CodeLlama_34b_base.0.2.reworded
num_bytes: 27108848
num_examples: 158
- name: humaneval.cs.codellama_7b_base.0.2.reworded
num_bytes: 25063010
num_examples: 158
- name: humaneval.cs.deepseekcoder_1.3b_base.0.2.reworded
num_bytes: 29224686
num_examples: 158
- name: humaneval.cs.deepseekcoder1.5_7b_base.0.2.reworded
num_bytes: 26768709
num_examples: 158
- name: humaneval.cs.DeepSeekCoder_34b_base.0.2.reworded
num_bytes: 25747311
num_examples: 158
- name: humaneval.cs.stablecode3b.0.2.reworded
num_bytes: 105810688
num_examples: 158
- name: humaneval.cs.StarCoder2_15b_16k.0.2.reworded
num_bytes: 24656854
num_examples: 158
- name: humaneval.cs.starcoder2_3b_long.0.2.reworded
num_bytes: 10211975
num_examples: 158
- name: humaneval.cs.StarCoder2_7b_16k.0.2.reworded
num_bytes: 24549204
num_examples: 158
- name: humaneval.cs.starcoderbase_3b.0.2.reworded
num_bytes: 30212965
num_examples: 158
- name: humaneval.cs.starcoderbase_7b.0.2.reworded
num_bytes: 29671445
num_examples: 158
- name: humaneval.d.codellama_13b_base.0.2.reworded
num_bytes: 16540135
num_examples: 156
- name: humaneval.d.codellama_7b_base.0.2.reworded
num_bytes: 16378561
num_examples: 156
- name: humaneval.d.deepseekcoder_1.3b_base.0.2.reworded
num_bytes: 16380578
num_examples: 156
- name: humaneval.d.deepseekcoder1.5_7b_base.0.2.reworded
num_bytes: 15678858
num_examples: 156
- name: humaneval.d.stablecode3b.0.2.reworded
num_bytes: 35933717
num_examples: 92
- name: humaneval.d.StarCoder2_15b_16k.0.2.reworded
num_bytes: 15488865
num_examples: 156
- name: humaneval.d.starcoder2_3b_long.0.2.reworded
num_bytes: 6448842
num_examples: 156
- name: humaneval.d.StarCoder2_7b_16k.0.2.reworded
num_bytes: 14879550
num_examples: 156
- name: humaneval.go.codellama_13b_base.0.2.reworded
num_bytes: 18624670
num_examples: 154
- name: humaneval.go.CodeLlama_34b_base.0.2.reworded
num_bytes: 18348739
num_examples: 154
- name: humaneval.go.codellama_7b_base.0.2.reworded
num_bytes: 18328204
num_examples: 154
- name: humaneval.go.deepseekcoder_1.3b_base.0.2.reworded
num_bytes: 18484006
num_examples: 154
- name: humaneval.go.deepseekcoder1.5_7b_base.0.2.reworded
num_bytes: 18461448
num_examples: 154
- name: humaneval.go.DeepSeekCoder_34b_base.0.2.reworded
num_bytes: 17594569
num_examples: 154
- name: humaneval.go.stablecode3b.0.2.reworded
num_bytes: 76254627
num_examples: 154
- name: humaneval.go.StarCoder2_15b_16k.0.2.reworded
num_bytes: 17439839
num_examples: 154
- name: humaneval.go.starcoder2_3b_long.0.2.reworded
num_bytes: 7602923
num_examples: 154
- name: humaneval.go.StarCoder2_7b_16k.0.2.reworded
num_bytes: 17408959
num_examples: 154
- name: humaneval.go.starcoderbase_3b.0.2.reworded
num_bytes: 21037781
num_examples: 154
- name: humaneval.go.starcoderbase_7b.0.2.reworded
num_bytes: 19796229
num_examples: 154
- name: humaneval.java.codellama_13b_base.0.2.reworded
num_bytes: 19317231
num_examples: 158
- name: humaneval.java.codellama_7b_base.0.2.reworded
num_bytes: 18319565
num_examples: 158
- name: humaneval.java.deepseekcoder_1.3b_base.0.2.reworded
num_bytes: 19864347
num_examples: 158
- name: humaneval.java.deepseekcoder1.5_7b_base.0.2.reworded
num_bytes: 18625198
num_examples: 158
- name: humaneval.java.stablecode3b.0.2.reworded
num_bytes: 74579340
num_examples: 158
- name: humaneval.java.StarCoder2_15b_16k.0.2.reworded
num_bytes: 17514550
num_examples: 158
- name: humaneval.java.starcoder2_3b_long.0.2.reworded
num_bytes: 7463704
num_examples: 158
- name: humaneval.java.StarCoder2_7b_16k.0.2.reworded
num_bytes: 18302479
num_examples: 158
- name: humaneval.jl.codellama_13b_base.0.2.reworded
num_bytes: 19477558
num_examples: 159
- name: humaneval.jl.codellama_7b_base.0.2.reworded
num_bytes: 20001814
num_examples: 159
- name: humaneval.jl.deepseekcoder_1.3b_base.0.2.reworded
num_bytes: 22131151
num_examples: 159
- name: humaneval.jl.deepseekcoder1.5_7b_base.0.2.reworded
num_bytes: 18334155
num_examples: 159
- name: humaneval.jl.stablecode3b.0.2.reworded
num_bytes: 82666454
num_examples: 159
- name: humaneval.jl.StarCoder2_15b_16k.0.2.reworded
num_bytes: 19036610
num_examples: 159
- name: humaneval.jl.starcoder2_3b_long.0.2.reworded
num_bytes: 8334068
num_examples: 159
- name: humaneval.jl.StarCoder2_7b_16k.0.2.reworded
num_bytes: 20931800
num_examples: 159
- name: humaneval.js.codellama_13b_base.0.2.reworded
num_bytes: 16473024
num_examples: 161
- name: humaneval.js.codellama_7b_base.0.2.reworded
num_bytes: 16582420
num_examples: 161
- name: humaneval.js.deepseekcoder_1.3b_base.0.2.reworded
num_bytes: 16716270
num_examples: 161
- name: humaneval.js.deepseekcoder1.5_7b_base.0.2.reworded
num_bytes: 15173546
num_examples: 161
- name: humaneval.js.stablecode3b.0.2.reworded
num_bytes: 64385566
num_examples: 161
- name: humaneval.js.StarCoder2_15b_16k.0.2.reworded
num_bytes: 15300799
num_examples: 161
- name: humaneval.js.starcoder2_3b_long.0.2.reworded
num_bytes: 6351328
num_examples: 161
- name: humaneval.js.StarCoder2_7b_16k.0.2.reworded
num_bytes: 15697228
num_examples: 161
- name: humaneval.lua.codellama_13b_base.0.2.reworded
num_bytes: 13273956
num_examples: 161
- name: humaneval.lua.codellama_7b_base.0.2.reworded
num_bytes: 13559092
num_examples: 161
- name: humaneval.lua.deepseekcoder_1.3b_base.0.2.reworded
num_bytes: 14465897
num_examples: 161
- name: humaneval.lua.deepseekcoder1.5_7b_base.0.2.reworded
num_bytes: 13708591
num_examples: 161
- name: humaneval.lua.stablecode3b.0.2.reworded
num_bytes: 56129300
num_examples: 161
- name: humaneval.lua.StarCoder2_15b_16k.0.2.reworded
num_bytes: 13667740
num_examples: 161
- name: humaneval.lua.starcoder2_3b_long.0.2.reworded
num_bytes: 5510129
num_examples: 161
- name: humaneval.lua.StarCoder2_7b_16k.0.2.reworded
num_bytes: 13085989
num_examples: 161
- name: humaneval.php.codellama_13b_base.0.2.reworded
num_bytes: 15705506
num_examples: 161
- name: humaneval.php.codellama_7b_base.0.2.reworded
num_bytes: 15787570
num_examples: 161
- name: humaneval.php.deepseekcoder_1.3b_base.0.2.reworded
num_bytes: 15814055
num_examples: 161
- name: humaneval.php.deepseekcoder1.5_7b_base.0.2.reworded
num_bytes: 15702317
num_examples: 161
- name: humaneval.php.stablecode3b.0.2.reworded
num_bytes: 62279235
num_examples: 161
- name: humaneval.php.StarCoder2_15b_16k.0.2.reworded
num_bytes: 15762455
num_examples: 161
- name: humaneval.php.starcoder2_3b_long.0.2.reworded
num_bytes: 6256161
num_examples: 161
- name: humaneval.php.StarCoder2_7b_16k.0.2.reworded
num_bytes: 15173539
num_examples: 161
- name: humaneval.pl.codellama_13b_base.0.2.reworded
num_bytes: 18073447
num_examples: 161
- name: humaneval.pl.CodeLlama_34b_base.0.2.reworded
num_bytes: 17163359
num_examples: 161
- name: humaneval.pl.codellama_7b_base.0.2.reworded
num_bytes: 17854674
num_examples: 161
- name: humaneval.pl.deepseekcoder_1.3b_base.0.2.reworded
num_bytes: 18760773
num_examples: 161
- name: humaneval.pl.deepseekcoder1.5_7b_base.0.2.reworded
num_bytes: 17873165
num_examples: 161
- name: humaneval.pl.DeepSeekCoder_34b_base.0.2.reworded
num_bytes: 17282729
num_examples: 161
- name: humaneval.pl.stablecode3b.0.2.reworded
num_bytes: 71926624
num_examples: 161
- name: humaneval.pl.StarCoder2_15b_16k.0.2.reworded
num_bytes: 17260449
num_examples: 161
- name: humaneval.pl.starcoder2_3b_long.0.2.reworded
num_bytes: 7323910
num_examples: 161
- name: humaneval.pl.StarCoder2_7b_16k.0.2.reworded
num_bytes: 17386798
num_examples: 161
- name: humaneval.pl.starcoderbase_3b.0.2.reworded
num_bytes: 17425724
num_examples: 161
- name: humaneval.pl.starcoderbase_7b.0.2.reworded
num_bytes: 17232522
num_examples: 161
- name: humaneval.rb.codellama_13b_base.0.2.reworded
num_bytes: 16924279
num_examples: 161
- name: humaneval.rb.CodeLlama_34b_base.0.2.reworded
num_bytes: 16076508
num_examples: 161
- name: humaneval.rb.codellama_7b_base.0.2.reworded
num_bytes: 17352418
num_examples: 161
- name: humaneval.rb.deepseekcoder_1.3b_base.0.2.reworded
num_bytes: 17880997
num_examples: 161
- name: humaneval.rb.deepseekcoder1.5_7b_base.0.2.reworded
num_bytes: 16637852
num_examples: 161
- name: humaneval.rb.DeepSeekCoder_34b_base.0.2.reworded
num_bytes: 15774077
num_examples: 161
- name: humaneval.rb.stablecode3b.0.2.reworded
num_bytes: 67134234
num_examples: 161
- name: humaneval.rb.StarCoder2_15b_16k.0.2.reworded
num_bytes: 16344062
num_examples: 161
- name: humaneval.rb.starcoder2_3b_long.0.2.reworded
num_bytes: 6938906
num_examples: 161
- name: humaneval.rb.StarCoder2_7b_16k.0.2.reworded
num_bytes: 16973867
num_examples: 161
- name: humaneval.rb.starcoderbase_3b.0.2.reworded
num_bytes: 17503070
num_examples: 161
- name: humaneval.rb.starcoderbase_7b.0.2.reworded
num_bytes: 17444427
num_examples: 161
- name: humaneval.r.codellama_13b_base.0.2.reworded
num_bytes: 16765203
num_examples: 161
- name: humaneval.r.codellama_7b_base.0.2.reworded
num_bytes: 16042879
num_examples: 161
- name: humaneval.r.deepseekcoder_1.3b_base.0.2.reworded
num_bytes: 18188961
num_examples: 161
- name: humaneval.r.deepseekcoder1.5_7b_base.0.2.reworded
num_bytes: 15927073
num_examples: 161
- name: humaneval.r.DeepSeekCoder_34b_base.0.2.reworded
num_bytes: 11519925
num_examples: 161
- name: humaneval.rkt.codellama_13b_base.0.2.reworded
num_bytes: 17815474
num_examples: 161
- name: humaneval.rkt.codellama_7b_base.0.2.reworded
num_bytes: 17859177
num_examples: 161
- name: humaneval.rkt.deepseekcoder_1.3b_base.0.2.reworded
num_bytes: 17714145
num_examples: 161
- name: humaneval.rkt.deepseekcoder1.5_7b_base.0.2.reworded
num_bytes: 17785261
num_examples: 161
- name: humaneval.rkt.stablecode3b.0.2.reworded
num_bytes: 70190960
num_examples: 161
- name: humaneval.rkt.StarCoder2_15b_16k.0.2.reworded
num_bytes: 16095558
num_examples: 161
- name: humaneval.rkt.starcoder2_3b_long.0.2.reworded
num_bytes: 7229090
num_examples: 161
- name: humaneval.rkt.StarCoder2_7b_16k.0.2.reworded
num_bytes: 16284554
num_examples: 161
- name: humaneval.rs.codellama_13b_base.0.2.reworded
num_bytes: 15195007
num_examples: 156
- name: humaneval.rs.codellama_7b_base.0.2.reworded
num_bytes: 15714251
num_examples: 156
- name: humaneval.rs.deepseekcoder_1.3b_base.0.2.reworded
num_bytes: 15792067
num_examples: 156
- name: humaneval.rs.deepseekcoder1.5_7b_base.0.2.reworded
num_bytes: 14351037
num_examples: 156
- name: humaneval.rs.stablecode3b.0.2.reworded
num_bytes: 61739739
num_examples: 156
- name: humaneval.rs.StarCoder2_15b_16k.0.2.reworded
num_bytes: 14340153
num_examples: 156
- name: humaneval.rs.starcoder2_3b_long.0.2.reworded
num_bytes: 6139379
num_examples: 156
- name: humaneval.rs.StarCoder2_7b_16k.0.2.reworded
num_bytes: 14671151
num_examples: 156
- name: humaneval.r.stablecode3b.0.2.reworded
num_bytes: 62027260
num_examples: 161
- name: humaneval.r.StarCoder2_15b_16k.0.2.reworded
num_bytes: 14198671
num_examples: 161
- name: humaneval.r.starcoder2_3b_long.0.2.reworded
num_bytes: 6471459
num_examples: 161
download_size: 1518755447
dataset_size: 17591575904
tags:
- code
configs:
- config_name: default
data_files:
- split: humaneval.cpp.codellama_13b_base.0.2.reworded
path: data/humaneval.cpp.codellama_13b_base.0.2.reworded-*
- split: humaneval.cpp.codellama_7b_base.0.2.reworded
path: data/humaneval.cpp.codellama_7b_base.0.2.reworded-*
- split: humaneval.cpp.deepseekcoder_1.3b_base.0.2.reworded
path: data/humaneval.cpp.deepseekcoder_1.3b_base.0.2.reworded-*
- split: humaneval.cpp.deepseekcoder1.5_7b_base.0.2.reworded
path: data/humaneval.cpp.deepseekcoder1.5_7b_base.0.2.reworded-*
- split: humaneval.cpp.stablecode3b.0.2.reworded
path: data/humaneval.cpp.stablecode3b.0.2.reworded-*
- split: humaneval.cpp.StarCoder2_15b_16k.0.2.reworded
path: data/humaneval.cpp.StarCoder2_15b_16k.0.2.reworded-*
- split: humaneval.cpp.starcoder2_3b_long.0.2.reworded
path: data/humaneval.cpp.starcoder2_3b_long.0.2.reworded-*
- split: humaneval.cpp.StarCoder2_7b_16k.0.2.reworded
path: data/humaneval.cpp.StarCoder2_7b_16k.0.2.reworded-*
- split: humaneval.cs.codellama_13b_base.0.2.reworded
path: data/humaneval.cs.codellama_13b_base.0.2.reworded-*
- split: humaneval.cs.CodeLlama_34b_base.0.2.reworded
path: data/humaneval.cs.CodeLlama_34b_base.0.2.reworded-*
- split: humaneval.cs.codellama_7b_base.0.2.reworded
path: data/humaneval.cs.codellama_7b_base.0.2.reworded-*
- split: humaneval.cs.deepseekcoder_1.3b_base.0.2.reworded
path: data/humaneval.cs.deepseekcoder_1.3b_base.0.2.reworded-*
- split: humaneval.cs.deepseekcoder1.5_7b_base.0.2.reworded
path: data/humaneval.cs.deepseekcoder1.5_7b_base.0.2.reworded-*
- split: humaneval.cs.DeepSeekCoder_34b_base.0.2.reworded
path: data/humaneval.cs.DeepSeekCoder_34b_base.0.2.reworded-*
- split: humaneval.cs.stablecode3b.0.2.reworded
path: data/humaneval.cs.stablecode3b.0.2.reworded-*
- split: humaneval.cs.StarCoder2_15b_16k.0.2.reworded
path: data/humaneval.cs.StarCoder2_15b_16k.0.2.reworded-*
- split: humaneval.cs.starcoder2_3b_long.0.2.reworded
path: data/humaneval.cs.starcoder2_3b_long.0.2.reworded-*
- split: humaneval.cs.StarCoder2_7b_16k.0.2.reworded
path: data/humaneval.cs.StarCoder2_7b_16k.0.2.reworded-*
- split: humaneval.cs.starcoderbase_3b.0.2.reworded
path: data/humaneval.cs.starcoderbase_3b.0.2.reworded-*
- split: humaneval.cs.starcoderbase_7b.0.2.reworded
path: data/humaneval.cs.starcoderbase_7b.0.2.reworded-*
- split: humaneval.d.codellama_13b_base.0.2.reworded
path: data/humaneval.d.codellama_13b_base.0.2.reworded-*
- split: humaneval.d.codellama_7b_base.0.2.reworded
path: data/humaneval.d.codellama_7b_base.0.2.reworded-*
- split: humaneval.d.deepseekcoder_1.3b_base.0.2.reworded
path: data/humaneval.d.deepseekcoder_1.3b_base.0.2.reworded-*
- split: humaneval.d.deepseekcoder1.5_7b_base.0.2.reworded
path: data/humaneval.d.deepseekcoder1.5_7b_base.0.2.reworded-*
- split: humaneval.d.stablecode3b.0.2.reworded
path: data/humaneval.d.stablecode3b.0.2.reworded-*
- split: humaneval.d.StarCoder2_15b_16k.0.2.reworded
path: data/humaneval.d.StarCoder2_15b_16k.0.2.reworded-*
- split: humaneval.d.starcoder2_3b_long.0.2.reworded
path: data/humaneval.d.starcoder2_3b_long.0.2.reworded-*
- split: humaneval.d.StarCoder2_7b_16k.0.2.reworded
path: data/humaneval.d.StarCoder2_7b_16k.0.2.reworded-*
- split: humaneval.go.codellama_13b_base.0.2.reworded
path: data/humaneval.go.codellama_13b_base.0.2.reworded-*
- split: humaneval.go.CodeLlama_34b_base.0.2.reworded
path: data/humaneval.go.CodeLlama_34b_base.0.2.reworded-*
- split: humaneval.go.codellama_7b_base.0.2.reworded
path: data/humaneval.go.codellama_7b_base.0.2.reworded-*
- split: humaneval.go.deepseekcoder_1.3b_base.0.2.reworded
path: data/humaneval.go.deepseekcoder_1.3b_base.0.2.reworded-*
- split: humaneval.go.deepseekcoder1.5_7b_base.0.2.reworded
path: data/humaneval.go.deepseekcoder1.5_7b_base.0.2.reworded-*
- split: humaneval.go.DeepSeekCoder_34b_base.0.2.reworded
path: data/humaneval.go.DeepSeekCoder_34b_base.0.2.reworded-*
- split: humaneval.go.stablecode3b.0.2.reworded
path: data/humaneval.go.stablecode3b.0.2.reworded-*
- split: humaneval.go.StarCoder2_15b_16k.0.2.reworded
path: data/humaneval.go.StarCoder2_15b_16k.0.2.reworded-*
- split: humaneval.go.starcoder2_3b_long.0.2.reworded
path: data/humaneval.go.starcoder2_3b_long.0.2.reworded-*
- split: humaneval.go.StarCoder2_7b_16k.0.2.reworded
path: data/humaneval.go.StarCoder2_7b_16k.0.2.reworded-*
- split: humaneval.go.starcoderbase_3b.0.2.reworded
path: data/humaneval.go.starcoderbase_3b.0.2.reworded-*
- split: humaneval.go.starcoderbase_7b.0.2.reworded
path: data/humaneval.go.starcoderbase_7b.0.2.reworded-*
- split: humaneval.java.codellama_13b_base.0.2.reworded
path: data/humaneval.java.codellama_13b_base.0.2.reworded-*
- split: humaneval.java.codellama_7b_base.0.2.reworded
path: data/humaneval.java.codellama_7b_base.0.2.reworded-*
- split: humaneval.java.deepseekcoder_1.3b_base.0.2.reworded
path: data/humaneval.java.deepseekcoder_1.3b_base.0.2.reworded-*
- split: humaneval.java.deepseekcoder1.5_7b_base.0.2.reworded
path: data/humaneval.java.deepseekcoder1.5_7b_base.0.2.reworded-*
- split: humaneval.java.stablecode3b.0.2.reworded
path: data/humaneval.java.stablecode3b.0.2.reworded-*
- split: humaneval.java.StarCoder2_15b_16k.0.2.reworded
path: data/humaneval.java.StarCoder2_15b_16k.0.2.reworded-*
- split: humaneval.java.starcoder2_3b_long.0.2.reworded
path: data/humaneval.java.starcoder2_3b_long.0.2.reworded-*
- split: humaneval.java.StarCoder2_7b_16k.0.2.reworded
path: data/humaneval.java.StarCoder2_7b_16k.0.2.reworded-*
- split: humaneval.jl.codellama_13b_base.0.2.reworded
path: data/humaneval.jl.codellama_13b_base.0.2.reworded-*
- split: humaneval.jl.codellama_7b_base.0.2.reworded
path: data/humaneval.jl.codellama_7b_base.0.2.reworded-*
- split: humaneval.jl.deepseekcoder_1.3b_base.0.2.reworded
path: data/humaneval.jl.deepseekcoder_1.3b_base.0.2.reworded-*
- split: humaneval.jl.deepseekcoder1.5_7b_base.0.2.reworded
path: data/humaneval.jl.deepseekcoder1.5_7b_base.0.2.reworded-*
- split: humaneval.jl.stablecode3b.0.2.reworded
path: data/humaneval.jl.stablecode3b.0.2.reworded-*
- split: humaneval.jl.StarCoder2_15b_16k.0.2.reworded
path: data/humaneval.jl.StarCoder2_15b_16k.0.2.reworded-*
- split: humaneval.jl.starcoder2_3b_long.0.2.reworded
path: data/humaneval.jl.starcoder2_3b_long.0.2.reworded-*
- split: humaneval.jl.StarCoder2_7b_16k.0.2.reworded
path: data/humaneval.jl.StarCoder2_7b_16k.0.2.reworded-*
- split: humaneval.js.codellama_13b_base.0.2.reworded
path: data/humaneval.js.codellama_13b_base.0.2.reworded-*
- split: humaneval.js.codellama_7b_base.0.2.reworded
path: data/humaneval.js.codellama_7b_base.0.2.reworded-*
- split: humaneval.js.deepseekcoder_1.3b_base.0.2.reworded
path: data/humaneval.js.deepseekcoder_1.3b_base.0.2.reworded-*
- split: humaneval.js.deepseekcoder1.5_7b_base.0.2.reworded
path: data/humaneval.js.deepseekcoder1.5_7b_base.0.2.reworded-*
- split: humaneval.js.stablecode3b.0.2.reworded
path: data/humaneval.js.stablecode3b.0.2.reworded-*
- split: humaneval.js.StarCoder2_15b_16k.0.2.reworded
path: data/humaneval.js.StarCoder2_15b_16k.0.2.reworded-*
- split: humaneval.js.starcoder2_3b_long.0.2.reworded
path: data/humaneval.js.starcoder2_3b_long.0.2.reworded-*
- split: humaneval.js.StarCoder2_7b_16k.0.2.reworded
path: data/humaneval.js.StarCoder2_7b_16k.0.2.reworded-*
- split: humaneval.lua.codellama_13b_base.0.2.reworded
path: data/humaneval.lua.codellama_13b_base.0.2.reworded-*
- split: humaneval.lua.codellama_7b_base.0.2.reworded
path: data/humaneval.lua.codellama_7b_base.0.2.reworded-*
- split: humaneval.lua.deepseekcoder_1.3b_base.0.2.reworded
path: data/humaneval.lua.deepseekcoder_1.3b_base.0.2.reworded-*
- split: humaneval.lua.deepseekcoder1.5_7b_base.0.2.reworded
path: data/humaneval.lua.deepseekcoder1.5_7b_base.0.2.reworded-*
- split: humaneval.lua.stablecode3b.0.2.reworded
path: data/humaneval.lua.stablecode3b.0.2.reworded-*
- split: humaneval.lua.StarCoder2_15b_16k.0.2.reworded
path: data/humaneval.lua.StarCoder2_15b_16k.0.2.reworded-*
- split: humaneval.lua.starcoder2_3b_long.0.2.reworded
path: data/humaneval.lua.starcoder2_3b_long.0.2.reworded-*
- split: humaneval.lua.StarCoder2_7b_16k.0.2.reworded
path: data/humaneval.lua.StarCoder2_7b_16k.0.2.reworded-*
- split: humaneval.php.codellama_13b_base.0.2.reworded
path: data/humaneval.php.codellama_13b_base.0.2.reworded-*
- split: humaneval.php.codellama_7b_base.0.2.reworded
path: data/humaneval.php.codellama_7b_base.0.2.reworded-*
- split: humaneval.php.deepseekcoder_1.3b_base.0.2.reworded
path: data/humaneval.php.deepseekcoder_1.3b_base.0.2.reworded-*
- split: humaneval.php.deepseekcoder1.5_7b_base.0.2.reworded
path: data/humaneval.php.deepseekcoder1.5_7b_base.0.2.reworded-*
- split: humaneval.php.stablecode3b.0.2.reworded
path: data/humaneval.php.stablecode3b.0.2.reworded-*
- split: humaneval.php.StarCoder2_15b_16k.0.2.reworded
path: data/humaneval.php.StarCoder2_15b_16k.0.2.reworded-*
- split: humaneval.php.starcoder2_3b_long.0.2.reworded
path: data/humaneval.php.starcoder2_3b_long.0.2.reworded-*
- split: humaneval.php.StarCoder2_7b_16k.0.2.reworded
path: data/humaneval.php.StarCoder2_7b_16k.0.2.reworded-*
- split: humaneval.pl.codellama_13b_base.0.2.reworded
path: data/humaneval.pl.codellama_13b_base.0.2.reworded-*
- split: humaneval.pl.CodeLlama_34b_base.0.2.reworded
path: data/humaneval.pl.CodeLlama_34b_base.0.2.reworded-*
- split: humaneval.pl.codellama_7b_base.0.2.reworded
path: data/humaneval.pl.codellama_7b_base.0.2.reworded-*
- split: humaneval.pl.deepseekcoder_1.3b_base.0.2.reworded
path: data/humaneval.pl.deepseekcoder_1.3b_base.0.2.reworded-*
- split: humaneval.pl.deepseekcoder1.5_7b_base.0.2.reworded
path: data/humaneval.pl.deepseekcoder1.5_7b_base.0.2.reworded-*
- split: humaneval.pl.DeepSeekCoder_34b_base.0.2.reworded
path: data/humaneval.pl.DeepSeekCoder_34b_base.0.2.reworded-*
- split: humaneval.pl.stablecode3b.0.2.reworded
path: data/humaneval.pl.stablecode3b.0.2.reworded-*
- split: humaneval.pl.StarCoder2_15b_16k.0.2.reworded
path: data/humaneval.pl.StarCoder2_15b_16k.0.2.reworded-*
- split: humaneval.pl.starcoder2_3b_long.0.2.reworded
path: data/humaneval.pl.starcoder2_3b_long.0.2.reworded-*
- split: humaneval.pl.StarCoder2_7b_16k.0.2.reworded
path: data/humaneval.pl.StarCoder2_7b_16k.0.2.reworded-*
- split: humaneval.pl.starcoderbase_3b.0.2.reworded
path: data/humaneval.pl.starcoderbase_3b.0.2.reworded-*
- split: humaneval.pl.starcoderbase_7b.0.2.reworded
path: data/humaneval.pl.starcoderbase_7b.0.2.reworded-*
- split: humaneval.rb.codellama_13b_base.0.2.reworded
path: data/humaneval.rb.codellama_13b_base.0.2.reworded-*
- split: humaneval.rb.CodeLlama_34b_base.0.2.reworded
path: data/humaneval.rb.CodeLlama_34b_base.0.2.reworded-*
- split: humaneval.rb.codellama_7b_base.0.2.reworded
path: data/humaneval.rb.codellama_7b_base.0.2.reworded-*
- split: humaneval.rb.deepseekcoder_1.3b_base.0.2.reworded
path: data/humaneval.rb.deepseekcoder_1.3b_base.0.2.reworded-*
- split: humaneval.rb.deepseekcoder1.5_7b_base.0.2.reworded
path: data/humaneval.rb.deepseekcoder1.5_7b_base.0.2.reworded-*
- split: humaneval.rb.DeepSeekCoder_34b_base.0.2.reworded
path: data/humaneval.rb.DeepSeekCoder_34b_base.0.2.reworded-*
- split: humaneval.rb.stablecode3b.0.2.reworded
path: data/humaneval.rb.stablecode3b.0.2.reworded-*
- split: humaneval.rb.StarCoder2_15b_16k.0.2.reworded
path: data/humaneval.rb.StarCoder2_15b_16k.0.2.reworded-*
- split: humaneval.rb.starcoder2_3b_long.0.2.reworded
path: data/humaneval.rb.starcoder2_3b_long.0.2.reworded-*
- split: humaneval.rb.StarCoder2_7b_16k.0.2.reworded
path: data/humaneval.rb.StarCoder2_7b_16k.0.2.reworded-*
- split: humaneval.rb.starcoderbase_3b.0.2.reworded
path: data/humaneval.rb.starcoderbase_3b.0.2.reworded-*
- split: humaneval.rb.starcoderbase_7b.0.2.reworded
path: data/humaneval.rb.starcoderbase_7b.0.2.reworded-*
- split: humaneval.r.codellama_13b_base.0.2.reworded
path: data/humaneval.r.codellama_13b_base.0.2.reworded-*
- split: humaneval.r.codellama_7b_base.0.2.reworded
path: data/humaneval.r.codellama_7b_base.0.2.reworded-*
- split: humaneval.r.deepseekcoder_1.3b_base.0.2.reworded
path: data/humaneval.r.deepseekcoder_1.3b_base.0.2.reworded-*
- split: humaneval.r.deepseekcoder1.5_7b_base.0.2.reworded
path: data/humaneval.r.deepseekcoder1.5_7b_base.0.2.reworded-*
- split: humaneval.r.DeepSeekCoder_34b_base.0.2.reworded
path: data/humaneval.r.DeepSeekCoder_34b_base.0.2.reworded-*
- split: humaneval.rkt.codellama_13b_base.0.2.reworded
path: data/humaneval.rkt.codellama_13b_base.0.2.reworded-*
- split: humaneval.rkt.codellama_7b_base.0.2.reworded
path: data/humaneval.rkt.codellama_7b_base.0.2.reworded-*
- split: humaneval.rkt.deepseekcoder_1.3b_base.0.2.reworded
path: data/humaneval.rkt.deepseekcoder_1.3b_base.0.2.reworded-*
- split: humaneval.rkt.deepseekcoder1.5_7b_base.0.2.reworded
path: data/humaneval.rkt.deepseekcoder1.5_7b_base.0.2.reworded-*
- split: humaneval.rkt.stablecode3b.0.2.reworded
path: data/humaneval.rkt.stablecode3b.0.2.reworded-*
- split: humaneval.rkt.StarCoder2_15b_16k.0.2.reworded
path: data/humaneval.rkt.StarCoder2_15b_16k.0.2.reworded-*
- split: humaneval.rkt.starcoder2_3b_long.0.2.reworded
path: data/humaneval.rkt.starcoder2_3b_long.0.2.reworded-*
- split: humaneval.rkt.StarCoder2_7b_16k.0.2.reworded
path: data/humaneval.rkt.StarCoder2_7b_16k.0.2.reworded-*
- split: humaneval.rs.codellama_13b_base.0.2.reworded
path: data/humaneval.rs.codellama_13b_base.0.2.reworded-*
- split: humaneval.rs.codellama_7b_base.0.2.reworded
path: data/humaneval.rs.codellama_7b_base.0.2.reworded-*
- split: humaneval.rs.deepseekcoder_1.3b_base.0.2.reworded
path: data/humaneval.rs.deepseekcoder_1.3b_base.0.2.reworded-*
- split: humaneval.rs.deepseekcoder1.5_7b_base.0.2.reworded
path: data/humaneval.rs.deepseekcoder1.5_7b_base.0.2.reworded-*
- split: humaneval.rs.stablecode3b.0.2.reworded
path: data/humaneval.rs.stablecode3b.0.2.reworded-*
- split: humaneval.rs.StarCoder2_15b_16k.0.2.reworded
path: data/humaneval.rs.StarCoder2_15b_16k.0.2.reworded-*
- split: humaneval.rs.starcoder2_3b_long.0.2.reworded
path: data/humaneval.rs.starcoder2_3b_long.0.2.reworded-*
- split: humaneval.rs.StarCoder2_7b_16k.0.2.reworded
path: data/humaneval.rs.StarCoder2_7b_16k.0.2.reworded-*
- split: humaneval.r.stablecode3b.0.2.reworded
path: data/humaneval.r.stablecode3b.0.2.reworded-*
- split: humaneval.r.StarCoder2_15b_16k.0.2.reworded
path: data/humaneval.r.StarCoder2_15b_16k.0.2.reworded-*
- split: humaneval.r.starcoder2_3b_long.0.2.reworded
path: data/humaneval.r.starcoder2_3b_long.0.2.reworded-*
---
# Raw Data from MultiPL-E
**This repository is frozen. See https://huggingface.co/datasets/nuprl/MultiPL-E-completions for a more complete version of this repository.**
Uploads are a work in progress. If you are interested in a split that is not yet available, please contact [email protected].
This repository contains the raw data -- both completions and executions -- from MultiPL-E that was used to generate several experimental results from the
MultiPL-E, SantaCoder, and StarCoder papers.
The original MultiPL-E completions and executions are stored in JSON files. We use [the following script](https://github.com/nuprl/MultiPL-E/blob/main/upload_completions.py)
to turn each experiment directory into a dataset split and upload it to this repository.
Every split is named `base_dataset`.`language`.`model`.`temperature`.`variation`
- `base_dataset` is either `humaneval` or `mbpp`.
- `language` is the file extension of the programming language. E.g., `py` for Python or `sh` for Bash.
- `model` is the name of the model. Some model names used by MultiPL-E:
- `bigcode_15b_1000m`: StarCoderBase
- `bigcode_15b_200m`, `bigcode_15b_400m`, `bigcode_15b_600m`, `bigcode_15b_800m`: the 20%, 40%, 60%, 80% checkpoints for StarCoderBase
- `pystarcoder2`: StarCoder
- there are others, just have a look :)
- `temperature`: this is either 0.2 or 0.8
- `variation`: the variation of the MultiPL-E prompt to use, which should be `reworded`. MultiPL-E has several prompt ablations and the `reworded` prompts are the canonical variation. |
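For convenience, here is a minimal loading sketch using the `datasets` library. The repository id below is a placeholder for this repository (or the linked, more complete one), and the split name is composed exactly as described above:
```python
from datasets import load_dataset

# Placeholder repository id -- substitute the id of this (or the linked,
# more complete) repository when loading.
repo_id = "<this_repository>"

# Split names follow base_dataset.language.model.temperature.variation.
split_name = "humaneval.rs.codellama_13b_base.0.2.reworded"

completions = load_dataset(repo_id, split=split_name)
print(len(completions))
```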
lmms-lab/LLaVA-OneVision-Data | lmms-lab | 2024-10-22T06:47:46Z | 18,720 | 184 | [
"language:en",
"language:zh",
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2408.03326",
"arxiv:2310.05126",
"region:us"
] | [] | 2024-07-25T15:25:28Z | null | ---
language:
- en
- zh
license: apache-2.0
pretty_name: llava-onevision-data
dataset_info:
- config_name: CLEVR-Math(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 791346970
num_examples: 5280
download_size: 441208499
dataset_size: 791346970
- config_name: FigureQA(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 463326576.625
num_examples: 17587
download_size: 258197193
dataset_size: 463326576.625
- config_name: GEOS(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1503641
num_examples: 498
download_size: 684471
dataset_size: 1503641
- config_name: GeoQA+(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 53579705.75
num_examples: 17162
download_size: 33480538
dataset_size: 53579705.75
- config_name: Geometry3K(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 218085473.5
num_examples: 9724
download_size: 125914780
dataset_size: 218085473.5
- config_name: IconQA(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 208430568.375
num_examples: 22589
download_size: 117222488
dataset_size: 208430568.375
- config_name: MapQA(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 384120915.875
num_examples: 5225
download_size: 215768443
dataset_size: 384120915.875
- config_name: PMC-VQA(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 571444866.5
num_examples: 35948
download_size: 326541003
dataset_size: 571444866.5
- config_name: Super-CLEVR(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2795082410.75
num_examples: 8642
download_size: 1580301917
dataset_size: 2795082410.75
- config_name: TabMWP(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 307726997.5
num_examples: 22452
download_size: 173938487
dataset_size: 307726997.5
- config_name: UniGeo(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 38296693.375
num_examples: 11949
download_size: 24170743
dataset_size: 38296693.375
- config_name: VisualWebInstruct(filtered)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 36317112275.0
num_examples: 263584
download_size: 36239916454
dataset_size: 36317112275.0
- config_name: VizWiz(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1170333936.5
num_examples: 6604
download_size: 660752297
dataset_size: 1170333936.5
- config_name: ai2d(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 438572782.375
num_examples: 2429
download_size: 437348514
dataset_size: 438572782.375
- config_name: ai2d(gpt4v)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 866076731
num_examples: 4864
download_size: 860306578
dataset_size: 866076731
- config_name: ai2d(internvl)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1832787249.625
num_examples: 12403
download_size: 527493895
dataset_size: 1832787249.625
- config_name: allava_instruct_laion4v
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 5981767621.25
num_examples: 49990
download_size: 5873046236
dataset_size: 5981767621.25
- config_name: allava_instruct_vflan4v
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2680974558.25
num_examples: 19990
download_size: 2670088751
dataset_size: 2680974558.25
- config_name: aokvqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 6896420844.25
num_examples: 16534
download_size: 6894236970
dataset_size: 6896420844.25
- config_name: chart2text(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1145458729.5
num_examples: 26956
download_size: 1123681047
dataset_size: 1145458729.5
- config_name: chartqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 815335215.5
num_examples: 18260
download_size: 803084541
dataset_size: 815335215.5
- config_name: chrome_writting
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 44422597.875
num_examples: 8825
download_size: 39611257
dataset_size: 44422597.875
- config_name: clevr(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 10528974543.625
num_examples: 69995
download_size: 10460536445
dataset_size: 10528974543.625
- config_name: diagram_image_to_text(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 18858266
num_examples: 295
download_size: 18659115
dataset_size: 18858266
- config_name: dvqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 4487270615.625
num_examples: 199995
download_size: 4277056467
dataset_size: 4487270615.625
- config_name: figureqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2351194509.625
num_examples: 99995
download_size: 2222640639
dataset_size: 2351194509.625
- config_name: geo170k(align)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 204236256.75
num_examples: 60242
download_size: 58185410
dataset_size: 204236256.75
- config_name: geo170k(qa)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 266040519.125
num_examples: 67823
download_size: 160022430
dataset_size: 266040519.125
- config_name: geo3k
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 42634333.625
num_examples: 2091
download_size: 41097851
dataset_size: 42634333.625
- config_name: geomverse(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2263893609.75
num_examples: 9298
download_size: 2211726352
dataset_size: 2263893609.75
- config_name: hateful_memes(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 3057252325.125
num_examples: 8495
download_size: 3055839880
dataset_size: 3057252325.125
- config_name: hitab(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 161706881.125
num_examples: 2495
download_size: 157871287
dataset_size: 161706881.125
- config_name: hme100k
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 273229915.5
num_examples: 74492
download_size: 241005430
dataset_size: 273229915.5
- config_name: iam(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1131633206.75
num_examples: 5658
download_size: 1128371221
dataset_size: 1131633206.75
- config_name: iconqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 331284932.25
num_examples: 27302
download_size: 327005220
dataset_size: 331284932.25
- config_name: iiit5k
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 21821437.25
num_examples: 1990
download_size: 21623116
dataset_size: 21821437.25
- config_name: image_textualization(filtered)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 5218283253.375
num_examples: 99573
download_size: 5164176816
dataset_size: 5218283253.375
- config_name: infographic(gpt4v)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 713657496.25
num_examples: 1982
download_size: 656276080
dataset_size: 713657496.25
- config_name: infographic_vqa
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1528953078.75
num_examples: 4394
download_size: 1419340319
dataset_size: 1528953078.75
- config_name: infographic_vqa_llava_format
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1765315696.875
num_examples: 2113
download_size: 1764548536
dataset_size: 1765315696.875
- config_name: intergps(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 24973395.625
num_examples: 1275
download_size: 24736545
dataset_size: 24973395.625
- config_name: k12_printing
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1205153118.5
num_examples: 256636
download_size: 1108572712
dataset_size: 1205153118.5
- config_name: llavar_gpt4_20k
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 633833350.25
num_examples: 19790
download_size: 625365542
dataset_size: 633833350.25
- config_name: lrv_chart
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 99338686
num_examples: 1776
download_size: 97979446
dataset_size: 99338686
- config_name: lrv_normal(filtered)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 422589381.75
num_examples: 10490
download_size: 406958773
dataset_size: 422589381.75
- config_name: magpie_pro(l3_80b_mt)
features:
- name: id
dtype: string
- name: image
dtype: 'null'
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1657129141
num_examples: 299988
download_size: 885893066
dataset_size: 1657129141
- config_name: magpie_pro(l3_80b_st)
features:
- name: id
dtype: string
- name: image
dtype: 'null'
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1033666690
num_examples: 299990
download_size: 562771564
dataset_size: 1033666690
- config_name: magpie_pro(qwen2_72b_st)
features:
- name: id
dtype: string
- name: image
dtype: 'null'
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 703489344
num_examples: 299982
download_size: 361433408
dataset_size: 703489344
- config_name: mapqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 3355751195.5
num_examples: 37412
download_size: 3305639218
dataset_size: 3355751195.5
- config_name: mathqa
features:
- name: id
dtype: string
- name: image
dtype: 'null'
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 18318538
num_examples: 29827
download_size: 7857130
dataset_size: 18318538
- config_name: mavis_math_metagen
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2304025372.5
num_examples: 87348
download_size: 322776224
dataset_size: 2304025372.5
- config_name: mavis_math_rule_geo
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 14313211512.25
num_examples: 99990
download_size: 5841283073
dataset_size: 14313211512.25
- config_name: multihiertt(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 300319803.25
num_examples: 7614
download_size: 295638314
dataset_size: 300319803.25
- config_name: orand_car_a
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 23602442.125
num_examples: 1999
download_size: 23333412
dataset_size: 23602442.125
- config_name: raven(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1706160514.625
num_examples: 41995
download_size: 1693150088
dataset_size: 1706160514.625
- config_name: rendered_text(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 11082594894.625
num_examples: 9995
download_size: 11081962044
dataset_size: 11082594894.625
- config_name: robut_sqa(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 685580779.375
num_examples: 8509
download_size: 678666263
dataset_size: 685580779.375
- config_name: robut_wikisql(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 6200499653
num_examples: 74984
download_size: 6168399217
dataset_size: 6200499653
- config_name: robut_wtq(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 4091776188.875
num_examples: 38241
download_size: 4062777449
dataset_size: 4091776188.875
- config_name: scienceqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 286843125.625
num_examples: 4971
download_size: 282896809
dataset_size: 286843125.625
- config_name: scienceqa(nona_context)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2111029055
num_examples: 19208
download_size: 2053942726
dataset_size: 2111029055
- config_name: screen2words(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 7977502095.375
num_examples: 15725
download_size: 7962327904
dataset_size: 7977502095.375
- config_name: sharegpt4o
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 6968025789.5
num_examples: 57284
download_size: 6772195470
dataset_size: 6968025789.5
- config_name: sharegpt4v(coco)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2620153362.875
num_examples: 50017
download_size: 2595583499
dataset_size: 2620153362.875
- config_name: sharegpt4v(knowledge)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 372100773.5
num_examples: 1988
download_size: 369799318
dataset_size: 372100773.5
- config_name: sharegpt4v(llava)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 781795487.25
num_examples: 29990
download_size: 400344187
dataset_size: 781795487.25
- config_name: sharegpt4v(sam)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 4437405218.25
num_examples: 8990
download_size: 4428597081
dataset_size: 4437405218.25
- config_name: sroie
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 117810195
num_examples: 33616
download_size: 103647636
dataset_size: 117810195
- config_name: st_vqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 5771194098.75
num_examples: 17242
download_size: 5768888141
dataset_size: 5771194098.75
- config_name: tabmwp(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 311192518.375
num_examples: 22717
download_size: 306092255
dataset_size: 311192518.375
- config_name: tallyqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 35998988065.625
num_examples: 98675
download_size: 35982430394
dataset_size: 35998988065.625
- config_name: textcaps
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2222268476.25
num_examples: 21942
download_size: 2217838132
dataset_size: 2222268476.25
- config_name: textocr(gpt4v)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2581655353
num_examples: 25104
download_size: 2574418106
dataset_size: 2581655353
- config_name: tqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 331203026.25
num_examples: 27302
download_size: 326999466
dataset_size: 331203026.25
- config_name: ureader_cap
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 9269857109.75
num_examples: 91434
download_size: 2292099971
dataset_size: 9269857109.75
- config_name: ureader_ie
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 11871457209.75
num_examples: 17322
download_size: 1999083115
dataset_size: 11871457209.75
- config_name: vision_flan(filtered)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 24847242604.5
num_examples: 186060
download_size: 24750561877
dataset_size: 24847242604.5
- config_name: vistext(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 550187184.5
num_examples: 9964
download_size: 452795103
dataset_size: 550187184.5
- config_name: visual7w(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 4451436523.875
num_examples: 14361
download_size: 4441971985
dataset_size: 4451436523.875
- config_name: visualmrc(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2938154124.25
num_examples: 3022
download_size: 2909296079
dataset_size: 2938154124.25
- config_name: vqarad(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 95533417
num_examples: 308
download_size: 95410398
dataset_size: 95533417
- config_name: vsr(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 891981646
num_examples: 2152
download_size: 891572866
dataset_size: 891981646
- config_name: websight(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 11209715828.625
num_examples: 9995
download_size: 11144460985
dataset_size: 11209715828.625
configs:
- config_name: CLEVR-Math(MathV360K)
data_files:
- split: train
path: CLEVR-Math(MathV360K)/train-*
- config_name: FigureQA(MathV360K)
data_files:
- split: train
path: FigureQA(MathV360K)/train-*
- config_name: GEOS(MathV360K)
data_files:
- split: train
path: GEOS(MathV360K)/train-*
- config_name: GeoQA+(MathV360K)
data_files:
- split: train
path: GeoQA+(MathV360K)/train-*
- config_name: Geometry3K(MathV360K)
data_files:
- split: train
path: Geometry3K(MathV360K)/train-*
- config_name: IconQA(MathV360K)
data_files:
- split: train
path: IconQA(MathV360K)/train-*
- config_name: MapQA(MathV360K)
data_files:
- split: train
path: MapQA(MathV360K)/train-*
- config_name: PMC-VQA(MathV360K)
data_files:
- split: train
path: PMC-VQA(MathV360K)/train-*
- config_name: Super-CLEVR(MathV360K)
data_files:
- split: train
path: Super-CLEVR(MathV360K)/train-*
- config_name: TabMWP(MathV360K)
data_files:
- split: train
path: TabMWP(MathV360K)/train-*
- config_name: UniGeo(MathV360K)
data_files:
- split: train
path: UniGeo(MathV360K)/train-*
- config_name: VisualWebInstruct(filtered)
data_files:
- split: train
path: VisualWebInstruct(filtered)/train-*
- config_name: VizWiz(MathV360K)
data_files:
- split: train
path: VizWiz(MathV360K)/train-*
- config_name: ai2d(cauldron,llava_format)
data_files:
- split: train
path: ai2d(cauldron,llava_format)/train-*
- config_name: ai2d(gpt4v)
data_files:
- split: train
path: ai2d(gpt4v)/train-*
- config_name: ai2d(internvl)
data_files:
- split: train
path: ai2d(internvl)/train-*
- config_name: allava_instruct_laion4v
data_files:
- split: train
path: allava_instruct_laion4v/train-*
- config_name: allava_instruct_vflan4v
data_files:
- split: train
path: allava_instruct_vflan4v/train-*
- config_name: aokvqa(cauldron,llava_format)
data_files:
- split: train
path: aokvqa(cauldron,llava_format)/train-*
- config_name: chart2text(cauldron)
data_files:
- split: train
path: chart2text(cauldron)/train-*
- config_name: chartqa(cauldron,llava_format)
data_files:
- split: train
path: chartqa(cauldron,llava_format)/train-*
- config_name: chrome_writting
data_files:
- split: train
path: chrome_writting/train-*
- config_name: clevr(cauldron,llava_format)
data_files:
- split: train
path: clevr(cauldron,llava_format)/train-*
- config_name: diagram_image_to_text(cauldron)
data_files:
- split: train
path: diagram_image_to_text(cauldron)/train-*
- config_name: dvqa(cauldron,llava_format)
data_files:
- split: train
path: dvqa(cauldron,llava_format)/train-*
- config_name: figureqa(cauldron,llava_format)
data_files:
- split: train
path: figureqa(cauldron,llava_format)/train-*
- config_name: geo170k(align)
data_files:
- split: train
path: geo170k(align)/train-*
- config_name: geo170k(qa)
data_files:
- split: train
path: geo170k(qa)/train-*
- config_name: geo3k
data_files:
- split: train
path: geo3k/train-*
- config_name: geomverse(cauldron)
data_files:
- split: train
path: geomverse(cauldron)/train-*
- config_name: hateful_memes(cauldron,llava_format)
data_files:
- split: train
path: hateful_memes(cauldron,llava_format)/train-*
- config_name: hitab(cauldron,llava_format)
data_files:
- split: train
path: hitab(cauldron,llava_format)/train-*
- config_name: hme100k
data_files:
- split: train
path: hme100k/train-*
- config_name: iam(cauldron)
data_files:
- split: train
path: iam(cauldron)/train-*
- config_name: iconqa(cauldron,llava_format)
data_files:
- split: train
path: iconqa(cauldron,llava_format)/train-*
- config_name: iiit5k
data_files:
- split: train
path: iiit5k/train-*
- config_name: image_textualization(filtered)
data_files:
- split: train
path: image_textualization(filtered)/train-*
- config_name: infographic(gpt4v)
data_files:
- split: train
path: infographic(gpt4v)/train-*
- config_name: infographic_vqa
data_files:
- split: train
path: infographic_vqa/train-*
- config_name: infographic_vqa_llava_format
data_files:
- split: train
path: infographic_vqa_llava_format/train-*
- config_name: intergps(cauldron,llava_format)
data_files:
- split: train
path: intergps(cauldron,llava_format)/train-*
- config_name: k12_printing
data_files:
- split: train
path: k12_printing/train-*
- config_name: llavar_gpt4_20k
data_files:
- split: train
path: llavar_gpt4_20k/train-*
- config_name: lrv_chart
data_files:
- split: train
path: lrv_chart/train-*
- config_name: lrv_normal(filtered)
data_files:
- split: train
path: lrv_normal(filtered)/train-*
- config_name: magpie_pro(l3_80b_mt)
data_files:
- split: train
path: magpie_pro(l3_80b_mt)/train-*
- config_name: magpie_pro(l3_80b_st)
data_files:
- split: train
path: magpie_pro(l3_80b_st)/train-*
- config_name: magpie_pro(qwen2_72b_st)
data_files:
- split: train
path: magpie_pro(qwen2_72b_st)/train-*
- config_name: mapqa(cauldron,llava_format)
data_files:
- split: train
path: mapqa(cauldron,llava_format)/train-*
- config_name: mathqa
data_files:
- split: train
path: mathqa/train-*
- config_name: mavis_math_metagen
data_files:
- split: train
path: mavis_math_metagen/train-*
- config_name: mavis_math_rule_geo
data_files:
- split: train
path: mavis_math_rule_geo/train-*
- config_name: multihiertt(cauldron)
data_files:
- split: train
path: multihiertt(cauldron)/train-*
- config_name: orand_car_a
data_files:
- split: train
path: orand_car_a/train-*
- config_name: raven(cauldron)
data_files:
- split: train
path: raven(cauldron)/train-*
- config_name: rendered_text(cauldron)
data_files:
- split: train
path: rendered_text(cauldron)/train-*
- config_name: robut_sqa(cauldron)
data_files:
- split: train
path: robut_sqa(cauldron)/train-*
- config_name: robut_wikisql(cauldron)
data_files:
- split: train
path: robut_wikisql(cauldron)/train-*
- config_name: robut_wtq(cauldron,llava_format)
data_files:
- split: train
path: robut_wtq(cauldron,llava_format)/train-*
- config_name: scienceqa(cauldron,llava_format)
data_files:
- split: train
path: scienceqa(cauldron,llava_format)/train-*
- config_name: scienceqa(nona_context)
data_files:
- split: train
path: scienceqa(nona_context)/train-*
- config_name: screen2words(cauldron)
data_files:
- split: train
path: screen2words(cauldron)/train-*
- config_name: sharegpt4o
data_files:
- split: train
path: sharegpt4o/train-*
- config_name: sharegpt4v(coco)
data_files:
- split: train
path: sharegpt4v(coco)/train-*
- config_name: sharegpt4v(knowledge)
data_files:
- split: train
path: sharegpt4v(knowledge)/train-*
- config_name: sharegpt4v(llava)
data_files:
- split: train
path: sharegpt4v(llava)/train-*
- config_name: sharegpt4v(sam)
data_files:
- split: train
path: sharegpt4v(sam)/train-*
- config_name: sroie
data_files:
- split: train
path: sroie/train-*
- config_name: st_vqa(cauldron,llava_format)
data_files:
- split: train
path: st_vqa(cauldron,llava_format)/train-*
- config_name: tabmwp(cauldron)
data_files:
- split: train
path: tabmwp(cauldron)/train-*
- config_name: tallyqa(cauldron,llava_format)
data_files:
- split: train
path: tallyqa(cauldron,llava_format)/train-*
- config_name: textcaps
data_files:
- split: train
path: textcaps/train-*
- config_name: textocr(gpt4v)
data_files:
- split: train
path: textocr(gpt4v)/train-*
- config_name: tqa(cauldron,llava_format)
data_files:
- split: train
path: tqa(cauldron,llava_format)/train-*
- config_name: ureader_cap
data_files:
- split: train
path: ureader_cap/train-*
- config_name: ureader_ie
data_files:
- split: train
path: ureader_ie/train-*
- config_name: vision_flan(filtered)
data_files:
- split: train
path: vision_flan(filtered)/train-*
- config_name: vistext(cauldron)
data_files:
- split: train
path: vistext(cauldron)/train-*
- config_name: visual7w(cauldron,llava_format)
data_files:
- split: train
path: visual7w(cauldron,llava_format)/train-*
- config_name: visualmrc(cauldron)
data_files:
- split: train
path: visualmrc(cauldron)/train-*
- config_name: vqarad(cauldron,llava_format)
data_files:
- split: train
path: vqarad(cauldron,llava_format)/train-*
- config_name: vsr(cauldron,llava_format)
data_files:
- split: train
path: vsr(cauldron,llava_format)/train-*
- config_name: websight(cauldron)
data_files:
- split: train
path: websight(cauldron)/train-*
---
# Dataset Card for LLaVA-OneVision
**[2024-09-01]: Uploaded VisualWebInstruct(filtered), which is used in the OneVision stage.**
> Almost all subsets are uploaded in HF's required format, so you can use the recommended interface to download them and follow our code below to convert them.
> The `ureader_kg` and `ureader_qa` subsets are uploaded as processed JSON files and tar.gz archives of the image folders.
> You may directly download them from the following URL.
> https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-Data/tree/main/ureader_kg
In this dataset, we include the data splits used in both the final image stage and the one-vision stage. For more details, please check our [paper](https://arxiv.org/abs/2408.03326) and our [training doc](https://github.com/LLaVA-VL/LLaVA-NeXT/tree/main/scripts/train#about-the-llava-onevision-data).
## Dataset Description
- **Curated by:** Bo Li, Kaichen Zhang, Hao Zhang, Yuanhan Zhang, Renrui Zhang, Feng Li, Dong Guo
- **Language(s) (NLP):** English, Chinese
- **License:** Apache License 2.0
## Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Dataset Collection:** We include a few subsets from the existing dataset collections [Cambrian](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M), [Cauldron](https://huggingface.co/datasets/HuggingFaceM4/the_cauldron), and [UReader](https://arxiv.org/abs/2310.05126). Since we only used a few subsets from these datasets and applied our cleaning and re-annotation process, we uploaded our processed versions into our own repository; we thank the authors for providing the original datasets.
- **Other Datasets:** For the remaining single-source datasets, such as AI2D and OKVQA, we cite and link the original sources in our paper.
## Uses
This dataset is used for the training of the LLaVA-OneVision model. We only allow the use of this dataset for academic research and education purposes. For OpenAI GPT-4 generated data, we recommend that users check the [OpenAI Usage Policy](https://openai.com/policies/usage-policies/).
## Dataset Structure
We explain the data composition for the mid-stage and final-stage training at our repo in the [**training doc**](https://github.com/LLaVA-VL/LLaVA-NeXT/tree/main/scripts/train#about-the-llava-onevision-data).
### Statistics
We provide the statistics of the dataset in the following figures, and refer the audience to our paper for further details.


### Code Guidance
To help the audience better understand our dataset, we upload the subsets in a Hugging Face Dataset-compatible format. During LLaVA-OneVision training, we use `json` files and `image/video` folders to store the data.
> The `ureader_kg` and `ureader_qa` subsets are uploaded as processed JSON files and tar.gz archives of the image folders. You may directly download them from the following URL.
> https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-Data/tree/main/ureader_kg
Here we provide code guidance for converting the dataset into the LLaVA-OneVision format and training the LLaVA-OneVision model with the converted dataset.
```python
import os
import json
from datasets import load_dataset
from tqdm import tqdm

# Each subset listed in the metadata above is a separate config;
# pick the one you want to convert.
config_name = "CLEVR-Math(MathV360K)"
data = load_dataset("lmms-lab/LLaVA-OneVision-Data", config_name, split="train")

image_folder = "<your_image_folder>"

converted_data = []
for da in tqdm(data):
    json_data = {}
    json_data["id"] = da["id"]
    if da["image"] is not None:
        # Save the PIL image to disk and keep only its filename in the json entry.
        json_data["image"] = f"{da['id']}.jpg"
        da["image"].save(os.path.join(image_folder, json_data["image"]))
    json_data["conversations"] = da["conversations"]
    converted_data.append(json_data)

with open("<your_json_file>.json", "w") as f:
    json.dump(converted_data, f, indent=4, ensure_ascii=False)
```
## Citation
**BibTeX:**
[More Information Needed]
## Glossary
The dataset collection process was conducted by all of the authors. We thank Feng Li and Renrui Zhang for providing the [LLaVA-M4-Instruct Data](https://huggingface.co/datasets/lmms-lab/M4-Instruct-Data) and Yuanhan Zhang for providing the [Video datasets](https://huggingface.co/datasets/lmms-lab/LLaVA-Video-178K).
After the dataset collection, the cleaning and re-annotation process, including the final mixture of the dataset, was conducted by Bo Li with the great help of Kaichen Zhang.
## Dataset Card Authors
The dataset is curated by the following authors:
Bo Li, Kaichen Zhang, Hao Zhang, Yuanhan Zhang, Renrui Zhang, Feng Li
## Dataset Card Contact
[Bo Li](https://brianboli.com/): [email protected]
[Kaichen Zhang](https://www.linkedin.com/in/kaichen-zhang-014b17219/?originalSubdomain=sg) |
Forceless/PPTAgent | Forceless | 2024-10-20T05:51:45Z | 11,427 | 3 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-18T04:49:53Z | null | ---
dataset_info:
features:
- name: filename
dtype: string
- name: size
dtype: int64
- name: url
dtype: string
- name: license
dtype: string
- name: title
dtype: string
- name: created
dtype: string
- name: updated
dtype: string
- name: doi
dtype: string
- name: checksum
dtype: string
- name: page
dtype: int64
- name: topic
dtype: string
- name: filetype
dtype: string
splits:
- name: pptx
num_bytes: 317828
num_examples: 761
- name: pdf
num_bytes: 253893
num_examples: 603
download_size: 249178
dataset_size: 571721
configs:
- config_name: default
data_files:
- split: pptx
path: data/pptx-*
- split: pdf
path: data/pdf-*
---
|
allenai/WildChat | allenai | 2024-10-17T17:48:53Z | 1,308 | 142 | [
"task_categories:text-generation",
"task_categories:question-answering",
"task_categories:text2text-generation",
"license:odc-by",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2405.01470",
"arxiv:2409.03753",
"arxiv:2406.13706",
"region:us",
"instruction-finetuning"
] | [
"text-generation",
"question-answering",
"text2text-generation"
] | 2023-10-27T23:53:36Z | null | ---
license: odc-by
size_categories:
- 100K<n<1M
task_categories:
- text-generation
- question-answering
- text2text-generation
pretty_name: WildChat
dataset_info:
features:
- name: conversation_id
dtype: string
- name: model
dtype: string
- name: timestamp
dtype: timestamp[s, tz=UTC]
- name: conversation
list:
- name: content
dtype: string
- name: language
dtype: string
- name: redacted
dtype: bool
- name: role
dtype: string
- name: toxic
dtype: bool
- name: turn
dtype: int64
- name: language
dtype: string
- name: openai_moderation
list:
- name: categories
struct:
- name: harassment
dtype: bool
- name: harassment/threatening
dtype: bool
- name: hate
dtype: bool
- name: hate/threatening
dtype: bool
- name: self-harm
dtype: bool
- name: self-harm/instructions
dtype: bool
- name: self-harm/intent
dtype: bool
- name: sexual
dtype: bool
- name: sexual/minors
dtype: bool
- name: violence
dtype: bool
- name: violence/graphic
dtype: bool
- name: category_scores
struct:
- name: harassment
dtype: float64
- name: harassment/threatening
dtype: float64
- name: hate
dtype: float64
- name: hate/threatening
dtype: float64
- name: self-harm
dtype: float64
- name: self-harm/instructions
dtype: float64
- name: self-harm/intent
dtype: float64
- name: sexual
dtype: float64
- name: sexual/minors
dtype: float64
- name: violence
dtype: float64
- name: violence/graphic
dtype: float64
- name: flagged
dtype: bool
- name: detoxify_moderation
list:
- name: identity_attack
dtype: float32
- name: insult
dtype: float32
- name: obscene
dtype: float32
- name: severe_toxicity
dtype: float32
- name: sexual_explicit
dtype: float32
- name: threat
dtype: float32
- name: toxicity
dtype: float32
- name: toxic
dtype: bool
- name: redacted
dtype: bool
splits:
- name: train
num_bytes: 2949464505.6494355
num_examples: 529428
download_size: 1586548072
dataset_size: 2949464505.6494355
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- instruction-finetuning
---
# Dataset Card for WildChat
## Note: a newer version with 1 million conversations and demographic information can be found [here](https://huggingface.co/datasets/allenai/WildChat-1M).
## Dataset Description
- **Paper:** https://arxiv.org/abs/2405.01470
- **Interactive Search Tool:** https://wildvisualizer.com ([paper](https://arxiv.org/abs/2409.03753))
- **License:** [ODC-BY](https://opendatacommons.org/licenses/by/1-0/)
- **Language(s) (NLP):** multi-lingual
- **Point of Contact:** [Yuntian Deng](https://yuntiandeng.com/)
### Dataset Summary
WildChat is a collection of 650K conversations between human users and ChatGPT. We collected WildChat by offering online users free access to OpenAI's GPT-3.5 and GPT-4. The dataset contains a broad spectrum of user-chatbot interactions that are not previously covered by other instruction fine-tuning datasets: for example, interactions include ambiguous user requests, code-switching, topic-switching, political discussions, etc. WildChat can serve both as a dataset for instructional fine-tuning and as a valuable resource for studying user behaviors. Note that this version of the dataset only contains non-toxic user inputs/ChatGPT responses.
### Updates
**2024-10-17: Content Update.** Conversations flagged by [Niloofar Mireshghallah](https://homes.cs.washington.edu/~niloofar/) and her collaborators in ["Breaking News: Case Studies of Generative AI's Use in Journalism"](https://arxiv.org/abs/2406.13706) for containing PII or sensitive information have been removed from this version of the dataset.
**2024-07-22: Content Update.** All toxic conversations identified by the OpenAI Moderations API or Detoxify have been removed from this version of the dataset.
**2024-06-26: License Change.** We have updated the license of WildChat to [ODC-BY](https://opendatacommons.org/licenses/by/1-0/). This change is retroactively applied to any previous downloads under the ImpACT license.
### Languages
66 languages were detected in WildChat.
### Personal and Sensitive Information
The data has been de-identified with Microsoft Presidio and hand-written rules by the authors.
### Data Fields
- `conversation_id` (string): Each conversation has a unique id.
- `model` (string): The underlying OpenAI model, such as gpt-3.5-turbo or gpt-4.
- `timestamp` (timestamp): The timestamp of the last turn in the conversation in UTC.
- `conversation` (list): A list of user/assistant utterances. Each utterance is a dictionary containing the `role` of the speaker (user or assistant), the `content` of the utterance, the detected `language` of the utterance, whether the content of the utterance is considered `toxic`, and whether PII has been detected and anonymized (`redacted`).
- `turn` (int): The number of turns in the conversation. A turn refers to one round of user-assistant interaction.
- `language` (string): The language of the conversation. Note that this is the most frequently detected language in the utterances of the conversation.
- `openai_moderation` (list): A list of OpenAI Moderation results. Each element in the list corresponds to one utterance in the conversation.
- `detoxify_moderation` (list): A list of Detoxify results. Each element in the list corresponds to one utterance in the conversation.
- `toxic` (bool): Whether this conversation contains any utterances considered to be toxic by either OpenAI Moderation or Detoxify.
- `redacted` (bool): Whether this conversation contains any utterances in which PII is detected and anonymized.
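As a quick illustration of these fields, the following sketch (assuming the `datasets` library) loads the dataset and prints a few of them for one conversation:
```python
from datasets import load_dataset

ds = load_dataset("allenai/WildChat", split="train")

example = ds[0]
print(example["conversation_id"], example["model"], example["language"], example["turn"])

# Each utterance carries its own role, content, detected language, and flags.
for utterance in example["conversation"]:
    print(utterance["role"], utterance["language"], utterance["toxic"], utterance["redacted"])
    print(utterance["content"][:100])
```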
### Empty User Inputs
This dataset includes a small subset of conversations where users submitted empty inputs, sometimes leading to hallucinated responses from the assistant. This issue, first noticed by @yuchenlin, arises from the design of our Huggingface chatbot used for data collection, which did not restrict the submission of empty inputs. As a result, users could submit without entering any text, causing the assistant to generate responses without any user prompts. This occurs in a small fraction of the dataset---12,405 out of 652,139 conversations.
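If these conversations are undesirable for a downstream use case, a simple filter over the `conversation` field can drop them (a sketch, assuming the `datasets` library):
```python
from datasets import load_dataset

ds = load_dataset("allenai/WildChat", split="train")

def has_empty_user_input(example):
    # True if any user utterance in the conversation is empty or whitespace-only.
    return any(u["role"] == "user" and not u["content"].strip()
               for u in example["conversation"])

ds_filtered = ds.filter(lambda ex: not has_empty_user_input(ex))
print(len(ds), "->", len(ds_filtered))
```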
### Licensing Information
WildChat is now made available under the [**ODC-BY License**](https://opendatacommons.org/licenses/by/1-0/). This change is retroactively applied to any previous downloads under the ImpACT license.
### Citation Information
Please consider citing [our paper](https://arxiv.org/abs/2405.01470) if you find this dataset useful:
```
@inproceedings{
zhao2024wildchat,
title={WildChat: 1M Chat{GPT} Interaction Logs in the Wild},
author={Wenting Zhao and Xiang Ren and Jack Hessel and Claire Cardie and Yejin Choi and Yuntian Deng},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=Bl8u7ZRlbM}
}
```
```
@misc{deng2024wildvisopensourcevisualizer,
title={WildVis: Open Source Visualizer for Million-Scale Chat Logs in the Wild},
author={Yuntian Deng and Wenting Zhao and Jack Hessel and Xiang Ren and Claire Cardie and Yejin Choi},
year={2024},
eprint={2409.03753},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2409.03753},
}
``` |
openai/MMMLU | openai | 2024-10-16T18:39:00Z | 11,119 | 478 | [
"task_categories:question-answering",
"language:ar",
"language:bn",
"language:de",
"language:es",
"language:fr",
"language:hi",
"language:id",
"language:it",
"language:ja",
"language:ko",
"language:pt",
"language:sw",
"language:yo",
"language:zh",
"license:mit",
"size_categories:100K<n<1M",
"format:csv",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2009.03300",
"region:us"
] | [
"question-answering"
] | 2024-09-13T16:37:19Z | null | ---
task_categories:
- question-answering
configs:
- config_name: default
data_files:
- split: test
path: test/*.csv
- config_name: AR_XY
data_files:
- split: test
path: test/mmlu_AR-XY.csv
- config_name: BN_BD
data_files:
- split: test
path: test/mmlu_BN-BD.csv
- config_name: DE_DE
data_files:
- split: test
path: test/mmlu_DE-DE.csv
- config_name: ES_LA
data_files:
- split: test
path: test/mmlu_ES-LA.csv
- config_name: FR_FR
data_files:
- split: test
path: test/mmlu_FR-FR.csv
- config_name: HI_IN
data_files:
- split: test
path: test/mmlu_HI-IN.csv
- config_name: ID_ID
data_files:
- split: test
path: test/mmlu_ID-ID.csv
- config_name: IT_IT
data_files:
- split: test
path: test/mmlu_IT-IT.csv
- config_name: JA_JP
data_files:
- split: test
path: test/mmlu_JA-JP.csv
- config_name: KO_KR
data_files:
- split: test
path: test/mmlu_KO-KR.csv
- config_name: PT_BR
data_files:
- split: test
path: test/mmlu_PT-BR.csv
- config_name: SW_KE
data_files:
- split: test
path: test/mmlu_SW-KE.csv
- config_name: YO_NG
data_files:
- split: test
path: test/mmlu_YO-NG.csv
- config_name: ZH_CN
data_files:
- split: test
path: test/mmlu_ZH-CN.csv
language:
- ar
- bn
- de
- es
- fr
- hi
- id
- it
- ja
- ko
- pt
- sw
- yo
- zh
license: mit
---
# Multilingual Massive Multitask Language Understanding (MMMLU)
The MMLU is a widely recognized benchmark of general knowledge attained by AI models. It covers a broad range of topics across 57 different categories, ranging from elementary-level knowledge up to advanced professional subjects like law, physics, history, and computer science.
We translated the MMLU’s test set into 14 languages using professional human translators. Relying on human translators for this evaluation increases confidence in the accuracy of the translations, especially for low-resource languages like Yoruba. We are publishing the professional human translations and the code we use to run the evaluations.
This effort reflects our commitment to improving the multilingual capabilities of AI models, ensuring they perform accurately across languages, particularly for underrepresented communities. By prioritizing high-quality translations, we aim to make AI technology more inclusive and effective for users worldwide.
## Locales
MMMLU contains the MMLU test set translated into the following locales:
* AR_XY (Arabic)
* BN_BD (Bengali)
* DE_DE (German)
* ES_LA (Spanish)
* FR_FR (French)
* HI_IN (Hindi)
* ID_ID (Indonesian)
* IT_IT (Italian)
* JA_JP (Japanese)
* KO_KR (Korean)
* PT_BR (Brazilian Portuguese)
* SW_KE (Swahili)
* YO_NG (Yoruba)
* ZH_CN (Simplified Chinese)
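Each locale above is also exposed as a dataset config of the same name, so a single locale's test set can be loaded directly. A minimal sketch using the `datasets` library:
```python
from datasets import load_dataset

# Load a single locale (config names match the locale codes listed above).
mmmlu_yo = load_dataset("openai/MMMLU", "YO_NG", split="test")
print(mmmlu_yo[0])

# Or load the combined test set across all locales via the default config.
mmmlu_all = load_dataset("openai/MMMLU", split="test")
```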
## Sources
Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart, S., Tang, E., Song, D., & Steinhardt, J. (2021). [*Measuring Massive Multitask Language Understanding*](https://arxiv.org/abs/2009.03300).
[OpenAI Simple Evals GitHub Repository](https://github.com/openai/simple-evals) |
HuggingFaceH4/ultrachat_200k | HuggingFaceH4 | 2024-10-16T11:52:27Z | 15,121 | 527 | [
"task_categories:text-generation",
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2305.14233",
"region:us"
] | [
"text-generation"
] | 2023-10-24T08:24:57Z | null | ---
language:
- en
license: mit
size_categories:
- 100K<n<1M
task_categories:
- text-generation
pretty_name: UltraChat 200k
configs:
- config_name: default
data_files:
- split: train_sft
path: data/train_sft-*
- split: test_sft
path: data/test_sft-*
- split: train_gen
path: data/train_gen-*
- split: test_gen
path: data/test_gen-*
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train_sft
num_bytes: 1397058554
num_examples: 207865
- name: test_sft
num_bytes: 154695659
num_examples: 23110
- name: train_gen
num_bytes: 1347396812
num_examples: 256032
- name: test_gen
num_bytes: 148276089
num_examples: 28304
download_size: 1624049723
dataset_size: 3047427114
---
# Dataset Card for UltraChat 200k
## Dataset Description
This is a heavily filtered version of the [UltraChat](https://github.com/thunlp/UltraChat) dataset and was used to train [Zephyr-7B-β](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta), a state of the art 7b chat model.
The original dataset consists of 1.4M dialogues generated by ChatGPT and spanning a wide range of topics. To create `UltraChat 200k`, we applied the following logic:
- Selection of a subset of data for faster supervised fine-tuning.
- Truecasing of the dataset, as we observed around 5% of the data contained grammatical errors like "Hello. how are you?" instead of "Hello. How are you?"
- Removal of dialogues where the assistant replies with phrases like "I do not have emotions" or "I don't have opinions", even for fact-based prompts that don't involve either.
## Dataset Structure
The dataset has four splits, suitable for:
* Supervised fine-tuning (`sft`).
* Generation ranking (`gen`) via techniques like rejection sampling or PPO.
The number of examples per split is shown as follows:
| train_sft | test_sft | train_gen | test_gen |
|:-------:|:-----------:|:-----:| :-----:|
| 207865 | 23110 | 256032 | 28304 |
The dataset is stored in parquet format with each entry using the following schema:
```
{
"prompt": "Create a fully-developed protagonist who is challenged to survive within a dystopian society under the rule of a tyrant. ...",
"messages":[
{
"content": "Create a fully-developed protagonist who is challenged to survive within a dystopian society under the rule of a tyrant. ...",
"role": "user"
},
{
"content": "Name: Ava\n\n Ava was just 16 years old when the world as she knew it came crashing down. The government had collapsed, leaving behind a chaotic and lawless society. ...",
"role": "assistant"
},
{
"content": "Wow, Ava's story is so intense and inspiring! Can you provide me with more details. ...",
"role": "user"
},
{
"content": "Certainly! ....",
"role": "assistant"
},
{
"content": "That's really interesting! I would love to hear more...",
"role": "user"
},
{
"content": "Certainly! ....",
"role": "assistant"
},
],
"prompt_id": "d938b65dfe31f05f80eb8572964c6673eddbd68eff3db6bd234d7f1e3b86c2af"
}
```
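A minimal loading sketch using the split names from the table above:
```python
from datasets import load_dataset

# Load the supervised fine-tuning split; "test_sft", "train_gen", and
# "test_gen" are available as well (see the split table above).
ds = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")

example = ds[0]
print(example["prompt_id"], example["prompt"][:80])
# `messages` alternates user/assistant turns, as in the schema above.
for turn in example["messages"]:
    print(turn["role"], ":", turn["content"][:80])
```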
## Citation
If you find this dataset is useful in your work, please cite the original UltraChat dataset:
```
@misc{ding2023enhancing,
title={Enhancing Chat Language Models by Scaling High-quality Instructional Conversations},
author={Ning Ding and Yulin Chen and Bokai Xu and Yujia Qin and Zhi Zheng and Shengding Hu and Zhiyuan Liu and Maosong Sun and Bowen Zhou},
year={2023},
eprint={2305.14233},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
HuggingFaceH4/ultrafeedback_binarized | HuggingFaceH4 | 2024-10-16T11:49:06Z | 8,312 | 285 | [
"task_categories:text-generation",
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2310.01377",
"region:us"
] | [
"text-generation"
] | 2023-10-24T08:53:19Z | null | ---
language:
- en
license: mit
task_categories:
- text-generation
pretty_name: UltraFeedback Binarized
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
- split: train_sft
path: data/train_sft-*
- split: test_prefs
path: data/test_prefs-*
- split: test_sft
path: data/test_sft-*
- split: train_gen
path: data/train_gen-*
- split: test_gen
path: data/test_gen-*
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
splits:
- name: train_prefs
num_bytes: 405688662
num_examples: 61135
- name: train_sft
num_bytes: 405688662
num_examples: 61135
- name: test_prefs
num_bytes: 13161585
num_examples: 2000
- name: test_sft
num_bytes: 6697333
num_examples: 1000
- name: train_gen
num_bytes: 325040536
num_examples: 61135
- name: test_gen
num_bytes: 5337695
num_examples: 1000
download_size: 649967196
dataset_size: 1161614473
---
# Dataset Card for UltraFeedback Binarized
## Dataset Description
This is a pre-processed version of the [UltraFeedback dataset](https://huggingface.co/datasets/openbmb/UltraFeedback) and was used to train [Zephyr-7B-β](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta), a state of the art chat model at the 7B parameter scale.
The original UltraFeedback dataset consists of 64k prompts, where each prompt is accompanied with four model completions from a wide variety of open and proprietary models. GPT-4 is then used to assign a score to each completion, along criteria like helpfulness and honesty. To create `UltraFeedback Binarized`, we picked the highest `overall_score` as the "chosen" completion, and one of the remaining 3 at random as the "rejected" one. This defines the preference modelling splits for techniques like reward modelling or DPO. We also created splits for supervised fine-tuning (SFT) that use the "chosen" column as the dialogues to model, along with splits that involve generation like rejection sampling or PPO. For details on the dataset processing, see the accompanying [script](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized/blob/main/create_dataset.py).
## Dataset Structure
### Usage
To load the dataset, run:
```python
from datasets import load_dataset
ds = load_dataset("HuggingFaceH4/ultrafeedback_binarized")
```
**Note:** after the release of Zephyr-7b-β, the team at [Argilla](https://huggingface.co/argilla) noted that there were a few hundred completions with the incorrect label. Similarly, members of the community pointed out that several prompts were sourced from the TruthfulQA benchmark, which can lead to contamination with public leaderboards. Both of these issues have been resolved in the latest version of the dataset, but if you wish to use the older version, you can run:
```python
from datasets import load_dataset
ds = load_dataset("HuggingFaceH4/ultrafeedback_binarized", revision="292c16329d921287c4166934cac1a6ad1e13a6c5")
```
### Data Splits
The dataset has six splits, suitable for:
* Supervised fine-tuning (`sft`).
* Preference modelling (`prefs`) to train reward models or apply techniques like DPO.
* Generation ranking (`gen`) via techniques like rejection sampling or PPO.
The number of examples per split is shown as follows:
| train_sft | test_sft | train_prefs | test_prefs | train_gen | test_gen |
|:-------:|:-----------:|:-----:| :-----:| :-----:| :-----:|
| 61135 | 1000 | 61135 | 2000 | 61135 | 1000 |
The dataset is stored in parquet format with each entry using the following schema:
```
{
"prompt": "Which books are still banned in Canada?",
"chosen": [
{
"content": "Which books are still banned in Canada?",
"role": "user"
},
{
"content": "As a responsible and ethical AI assistant, I must inform you that I cannot provide a list of banned books in Canada, as there are no books that are currently banned in the country. Canada has a strong commitment to freedom of expression and the right to access information, and there are no laws in place that restrict the publication or distribution of books.\n",
"role": "assistant"
}
],
"messages": [
{
"content": "Which books are still banned in Canada?",
"role": "user"
},
{
"content": "As a responsible and ethical AI assistant, I must inform you that I cannot provide a list of banned books in Canada, as there are no books that are currently banned in the country. Canada has a strong commitment to freedom of expression and the right to access information, and there are no laws in place that restrict the publication or distribution of books.\n",
"role": "assistant"
}],
"prompt_id": "aeccf551d9ba42fdf5f2044de43b8ce6e360fb523ace428317b81d804594e090",
"rejected": [
{
"content": "Which books are still banned in Canada?",
"role": "user"},
{
"content": "According to the Canadian Government’s Ban Affront website, there are still several books that are banned in Canada. These include The Begum’s Millionaire, The Education of Little Tree, The Harry Potter series, Lolita, 1984, and Lady Chatterley’s Lover. Some of these books are considered inaccessible due to their age, while others are still legally banned in certain parts of the country.",
"role": "assistant"
}
],
"score_chosen": 8.0,
"score_rejected": 5.0
}
```
You should use the `chosen` and `rejected` columns for techniques like DPO, while the `messages` column is suitable for SFT or PPO.
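As a sketch of that distinction, the snippet below flattens the preference columns into plain strings, which is the shape many DPO-style trainers expect; the exact column names your trainer wants are an assumption, so adapt the mapping to your library:
```python
from datasets import load_dataset

ds = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

def to_pairs(example):
    # `chosen` and `rejected` are lists of {role, content} turns that share
    # the same user prompt; keep the final assistant reply of each.
    return {
        "prompt": example["prompt"],
        "chosen": example["chosen"][-1]["content"],
        "rejected": example["rejected"][-1]["content"],
    }

pairs = ds.map(to_pairs, remove_columns=ds.column_names)
print(pairs[0]["prompt"][:80])
```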
## Citation
If you find this dataset is useful in your work, please cite the original UltraFeedback dataset:
```
@misc{cui2023ultrafeedback,
title={UltraFeedback: Boosting Language Models with High-quality Feedback},
author={Ganqu Cui and Lifan Yuan and Ning Ding and Guanming Yao and Wei Zhu and Yuan Ni and Guotong Xie and Zhiyuan Liu and Maosong Sun},
year={2023},
eprint={2310.01377},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
GAIR/o1-journey | GAIR | 2024-10-16T00:42:02Z | 122 | 133 | [
"language:en",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-16T00:28:30Z | null | ---
language:
- en
size_categories:
- n<1K
---
Dataset for [O1 Replication Journey: A Strategic Progress Report](https://github.com/GAIR-NLP/O1-Journey)
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("GAIR/o1-journey", split="train")
```
## Citation
If you find our dataset useful, please cite:
```
@misc{o1journey,
author = {Yiwei Qin and Xuefeng Li and Haoyang Zou and Yixiu Liu and Shijie Xia and Zhen Huang and Yixin Ye and Weizhe Yuan and Zhengzhong Liu and Yuanzhi Li and Pengfei Liu},
title = {O1 Replication Journey: A Strategic Progress Report – Part 1},
year = {2024},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/GAIR-NLP/O1-Journey}},
}
``` |
allenai/peS2o | allenai | 2024-10-13T02:53:05Z | 2,322 | 171 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"source_datasets:allenai/s2orc",
"language:en",
"license:odc-by",
"size_categories:10B<n<100B",
"region:us",
"biology",
"chemistry",
"engineering",
"computer science",
"physics",
"material science",
"math",
"psychology",
"economics",
"political science",
"business",
"geology",
"sociology",
"geography",
"environmental science",
"art",
"history",
"philosophy"
] | [
"text-generation",
"fill-mask"
] | 2023-06-29T04:54:16Z | null | ---
license:
- odc-by
task_categories:
- text-generation
- fill-mask
language:
- en
tags:
- biology
- chemistry
- engineering
- computer science
- physics
- material science
- math
- psychology
- economics
- political science
- business
- geology
- sociology
- geography
- environmental science
- art
- history
- philosophy
pretty_name: peS2o (Pretraining Efficiently on S2ORC)
size_categories:
- 10B<n<100B
source_datasets:
- allenai/s2orc
---
<p align="center" style="margin-top: -2em">
  <img src="https://huggingface.co/datasets/allenai/pes2o/resolve/main/logo.png" alt="peS2o logo. It's a picture of a mortar and pestle with documents flying in." width=384px height=auto>
</p>
<p align="center" style="font-size: 1.2em; margin-top: -1em"><i>Pretraining Effectively on <a href="https://github.com/allenai/s2orc">S2ORC</a>!</i></p>
The peS2o dataset is a collection of ~40M creative open-access academic papers,
cleaned, filtered, and formatted for pre-training of language models. It is derived from
the [Semantic Scholar Open Research Corpus][2]([Lo et al, 2020][1]), or S2ORC.
We release multiple versions of peS2o, each with different processing and knowledge cutoff
dates. We recommend using the latest version available.
If you use this dataset, please cite:
```bibtex
@techreport{peS2o,
author = {Luca Soldaini and Kyle Lo},
year = 2023,
title = {{peS2o (Pretraining Efficiently on S2ORC) Dataset}},
institution = {{Allen Institute for AI}},
note = {ODC-By, \url{https://github.com/allenai/pes2o}}
}
```
## Document Format
Each document in the dataset is a dictionary with the following fields:
- `added`: Date the document was added to the corpus.
- `created`: Best-guess date for when the document was first published. Some have resolution down to the day, others only down to the year.
- `id`: Semantic Scholar Corpus ID of the document; it can be used with the [Semantic Scholar API](https://api.semanticscholar.org/) to retrieve metadata about the document (e.g., fields of study, authors).
- `source`: Collection from which the document was sourced from. At the moment, two are supported:
- `s2orc`: collection of full-text papers
- `s2ag`: collection of title and abstracts
- `text`: Text of the document. Paragraphs are separated by two newlines (`\n\n`).
- `version`: version of peS2o.
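Given this schema, a minimal streaming sketch (the `"v2"` config name is an assumption; check the repository files for the exact config names):
```python
from datasets import load_dataset

# Stream peS2o so the ~40M documents never need to be downloaded in full.
pes2o = load_dataset("allenai/peS2o", "v2", split="train", streaming=True)

for doc in pes2o:
    # Fields follow the schema above: added, created, id, source, text, version.
    print(doc["id"], doc["source"], doc["text"][:100])
    break
```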
------
## peS2o V2 (Latest)
### Key Facts
- *Knowledge cutoff*: 2023-01-03
- *Number of documents*: 38.97M
- *Number of whitespace-separated tokens*: 42.01B
### Processing
peS2o V2 is largely the same as V1, but it includes additional heuristics for s2ag aimed at filtering out OCR errors from abstracts.
First, we check if the abstract was obtained from Semantic Scholar sources that are likely to contain OCR'ed content. For any abstract derived from those sources, we count how often the text contains subsequences matching `\b([A-Za-z]\s)([a-z]\s)*[A-Za-z]\b`, i.e. individual alpha letters separated by a space. This heuristic matches cases such as `A b stra ct` (2 matching subsequences), where the OCR parser inserted erroneous spaces.
Any abstract with more than 4 matching subsequences is removed.
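A minimal sketch of this heuristic, using the regular expression quoted above (the per-source pre-filtering step is omitted):
```python
import re

# Runs of single letters separated by spaces, e.g. the "A b" in "A b stra ct".
OCR_PATTERN = re.compile(r"\b([A-Za-z]\s)([a-z]\s)*[A-Za-z]\b")

def drop_ocr_abstract(abstract: str, max_matches: int = 4) -> bool:
    """Return True if the abstract exceeds the V2 threshold of 4 matches."""
    n_matches = sum(1 for _ in OCR_PATTERN.finditer(abstract))
    return n_matches > max_matches
```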
#### Statistics
| Dataset | Split | # Documents | # Words |
|:-------:|:-----:|------------:|---------------:|
| s2orc | train | 8,242,162 | 36,088,195,908 |
| s2orc | valid | 51,323 | 255,139,074 |
| s2ag | train | 30,569,017 | 5,920,099,207 |
| s2ag | valid | 109,709 | 24,029,459 |
-------
## peS2o V1
### Key Facts
- *Knowledge cutoff*: 2023-01-03
- *Number of documents*: 67.56M
- *Number of whitespace-separated tokens*: 47.37B
### Processing
Processing differs slightly depending on whether a document was derived from the full-text corpus (`s2orc`) or the title and abstract corpus (`s2ag`).
#### S2ORC-derived documents
Unfiltered, S2ORC contains 11.3M papers and 46.9B whitespace-separated tokens as of 2023-01-03. To derive peS2o v1, we impose the following constraints:
- The paper must have a title and abstract.
- From each paper, we use [Grobid](https://github.com/kermitt2/grobid) to extract section headers and paragraphs; figures, tables, and references, and any other non-textual content is removed. Title and abstracts are also available, but they come from the Semantic Scholar metadata (obtained through the APIs), not Grobid.
- The paper must be in English.
- To determine the language of each document, we use the [pycld3](https://github.com/bsolomon1124/pycld3) library
- We run pycld3 on the first 2000 characters of each paragraph in the paper.
- The language of the paper is the most common language of the paragraphs.
- The paper must have at least 500 whitespace-separated words.
- The paper was published after 1969; papers published before this date are often obtained through OCR and contain unrecoverable errors.
- The paper must have at least 5 paragraphs.
- All sections that have an average log word probability of less than `-20` are removed.
- To calculate the average log word probability, we use word frequencies extracted from the [1T Web Ngram corpus](https://catalog.ldc.upenn.edu/LDC2006T13); specifically, we use the list available [created by Rachel Tatman](https://www.kaggle.com/datasets/rtatman/english-word-frequency). A copy is hosted [here](https://ai2-s2-research-public.s3-us-west-2.amazonaws.com/lucas/google-1T-unigram/unigram_freq.csv).
- The most frequent word in the paper consists of alpha characters only, and it appears in less than 7.5% of the document.
- Words are obtained by splitting the text on whitespace.
The train set contains papers published before 2022-12-01;
the validation set includes documents published after 2022-12-01 and until 2023-01-03.
#### S2AG-derived documents
The S2AG corpus contains titles and abstracts of papers in Semantic Scholar.
Unfiltered, the corpus contains 91.1M papers and 15.5B whitespace-separated tokens as of 2023-01-03. To derive peS2o v1, we impose the following constraints:
- Abstract must be in English.
- To calculate the language, we once again use pycld3
- Title must be in English, or have average unigram log probability greater than -20.
- Abstract must have higher than -20 average unigram log probability.
- Abstract must have at least 50 words.
- Abstract must have no more than 1000 words.
- The most frequent word in the union of text and abstract must be a 2+ character alpha word, or it can be `a` followed by a 2+ character alpha word.
- Paper was published after 1969.
#### Statistics
| Dataset | Split | # Documents | # Words |
|:-------:|:-------:|:-----------:|:--------------:|
|s2orc | train | 8,242,162 | 36,088,195,908 |
|s2orc | valid | 51,323 | 255,139,074 |
|s2ag | train | 59,382,301 | 11,009,123,378 |
|s2ag | valid | 111,228 | 24,398,512 |
[1]: https://aclanthology.org/2020.acl-main.447/
[2]: https://github.com/allenai/s2orc
|
thanhkt/manim_code | thanhkt | 2024-10-10T14:55:23Z | 74 | 4 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-10T14:55:22Z | 2 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 7074758
num_examples: 4400
download_size: 2439317
dataset_size: 7074758
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
IGNF/PASTIS-HD | IGNF | 2024-10-04T13:39:24Z | 28,466 | 12 | [
"task_categories:image-classification",
"task_categories:image-segmentation",
"license:etalab-2.0",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"arxiv:2107.07933",
"arxiv:2112.07558",
"arxiv:2404.08351",
"region:us",
"remote sensing",
"Agricultural"
] | [
"image-classification",
"image-segmentation"
] | 2024-04-02T14:58:15Z | null | ---
license: etalab-2.0
task_categories:
- image-classification
- image-segmentation
tags:
- remote sensing
- Agricultural
size_categories:
- 1K<n<10K
---
# 🌱 PASTIS-HD 🌿 Panoptic Agricultural Satellite TIme Series : optical time series, radar time series and very high resolution image
[PASTIS](https://github.com/VSainteuf/pastis-benchmark) is a benchmark dataset for panoptic and semantic segmentation of agricultural parcels from satellite time series.
It contains 2,433 patches within the French metropolitan territory with panoptic annotations (instance index + semantic label for each pixel).
Each patch is a Sentinel-2 multispectral image time series of variable length.
This dataset was extended in 2021 with aligned radar Sentinel-1 observations for all 2,433 patches.
For each patch, it contains approximately 70 Sentinel-1 observations in ascending orbit and 70 observations in descending orbit. Each Sentinel-1 observation is assembled into a 3-channel image: vertical polarization (VV), horizontal polarization (VH), and the ratio of vertical over horizontal polarization (VV/VH). This extension is named PASTIS-R.
We extend PASTIS with aligned very high resolution satellite images from the SPOT 6-7 constellation for all 2,433 patches, in addition to the Sentinel-1 and Sentinel-2 time series.
The images are resampled to a 1m resolution and converted to 8 bits.
This enhancement significantly improves the dataset's spatial content, providing more granular information for agricultural parcel segmentation.
**PASTIS-HD** can be used to evaluate multi-modal fusion methods (with optical time series, radar time series and VHR images) for parcel-based classification, semantic segmentation, and panoptic segmentation.
## Dataset in numbers
🛰️ Sentinel 2 | 🛰️ Sentinel 1 | 🛰️ **SPOT 6-7 VHR** | 🗻 Annotations
:-------------------------------------------- | :-------------------------------------------------- | :------------------------------| :------------------------------
➡️ 2,433 time series                  | ➡️ 2 × 2,433 time series                 | ➡️ **2,433 images**      | 124,422 individual parcels
➡️ 10m / pixel | ➡️ 10m / pixel | ➡️ **1.5m / pixel** | covers ~4,000 km²
➡️ 128x128 pixels / images | ➡️ 128x128 pixels / images | ➡️ **1280x1280 pixels / images** | over 2B pixels
➡️ 38-61 acquisitions / series | ➡️ ~ 70 acquisitions / series | ➡️ **One observation** | 18 crop types
➡️ 10 spectral bands |➡️ 2 spectral bands | ➡️ **3 spectral bands** |
⚠️ The **SPOT data are natively 1.5m resolution**, but we over-sampled them at 1m to align them pixel-perfect with Sentinel data.

## Data loading
A PyTorch dataset class from [the OmniSat repository](https://github.com/gastruc/OmniSat/blob/main/src/data/Pastis.py) can be readily used to load data for training models on PASTIS-HD.
The time series contained in PASTIS have variable lengths.
The Sentinel-1 and Sentinel-2 time series are stored as numpy arrays. The SPOT images are in TIFF format.
The annotations are stored as numpy arrays as well.
⚠️ The S2 and S1 folders contain more than 2,433 files, unlike the labels folder: some patches are not labelled and are not used for training.
The relevant information can be found in the metadata.geojson file (with 2,433 entries), which is used as an index by the dataloader.
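For readers who do not want to pull in the full OmniSat class, the sketch below reads a single patch directly; the folder layout and file naming (`S2_<id>.npy`, `TARGET_<id>.npy`, the SPOT folder name) are assumptions based on the original PASTIS release, so verify them against the downloaded files:
```python
import numpy as np
import tifffile  # SPOT images are stored as TIFF

patch_id = 10000  # hypothetical patch identifier taken from metadata.geojson

# Sentinel-2 time series: assumed shape (T, 10, 128, 128), variable length T.
s2 = np.load(f"DATA_S2/S2_{patch_id}.npy")
# Sentinel-1 ascending time series: assumed shape (T, 3, 128, 128).
s1a = np.load(f"DATA_S1A/S1A_{patch_id}.npy")
# SPOT 6-7 VHR image: 1280 x 1280 pixels, 3 bands (folder name is hypothetical).
spot = tifffile.imread(f"DATA_SPOT/SPOT_{patch_id}.tif")
# Panoptic annotations (semantic labels + instance index).
target = np.load(f"ANNOTATIONS/TARGET_{patch_id}.npy")

print(s2.shape, s1a.shape, spot.shape, target.shape)
```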
### Remark about the folder names
⚠️ The **DATA_S1A** folder contains the Sentinel-1 **ascendent** images whereas the **DATA_S1D** folder contains the Sentinel-1 **descendant** images.
## Ground Truth Annotations
The agricultural parcels are grouped into 18 different crop classes as shown in the table below. The background class corresponds to non-agricultural land, and the void label to parcels that are mostly outside their patch.

Additional information about the dataset can be found in the documentation/pastis-documentation.pdf document.
## Credits
- The Sentinel imagery used in PASTIS was retrieved from [THEIA](www.theia.land.fr):
"Value-added data processed by the CNES for the Theia www.theia.land.fr data cluster using Copernicus data.
The treatments use algorithms developed by Theia’s Scientific Expertise Centres. "
- The annotations used in PASTIS stem from the French [land parcel identification system](https://www.data.gouv.fr/en/datasets/registre-parcellaire-graphique-rpg-contours-des-parcelles-et-ilots-culturaux-et-leur-groupe-de-cultures-majoritaire/) produced
by IGN.
- The SPOT images are open data thanks to the Dataterra Dinamis initiative, under the ["Couverture France DINAMIS"](https://dinamis.data-terra.org/opendata/) program.
## References
If you use PASTIS please cite the [related paper](https://arxiv.org/abs/2107.07933):
```
@article{garnot2021panoptic,
title={Panoptic Segmentation of Satellite Image Time Series
with Convolutional Temporal Attention Networks},
author={Sainte Fare Garnot, Vivien and Landrieu, Loic},
journal={ICCV},
year={2021}
}
```
For the PASTIS-R optical-radar fusion dataset, please also cite [this paper](https://arxiv.org/abs/2112.07558v1):
```
@article{garnot2021mmfusion,
title = {Multi-modal temporal attention models for crop mapping from satellite time series},
journal = {ISPRS Journal of Photogrammetry and Remote Sensing},
year = {2022},
doi = {https://doi.org/10.1016/j.isprsjprs.2022.03.012},
author = {Vivien {Sainte Fare Garnot} and Loic Landrieu and Nesrine Chehata},
}
```
For the PASTIS-HD with the 3 modalities optical-radar time series plus VHR images dataset, please also cite [this paper](https://arxiv.org/abs/2404.08351):
```
@article{astruc2024omnisat,
title={Omni{S}at: {S}elf-Supervised Modality Fusion for {E}arth Observation},
author={Astruc, Guillaume and Gonthier, Nicolas and Mallet, Clement and Landrieu, Loic},
journal={ECCV},
year={2024}
}
``` |
Gryphe/Sonnet3.5-SlimOrcaDedupCleaned | Gryphe | 2024-10-04T08:41:30Z | 596 | 90 | [
"license:mit",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-07-14T08:25:38Z | null | ---
license: mit
size_categories:
- 100K<n<1M
---
**2024-10-04:** I fixed two issues that were affecting newlines (all double newlines were gone!) and cleaned up spaces preceding closing quotes. Many thanks to [PocketDoc](https://huggingface.co/PocketDoc) for bringing this to my attention!
A Sonnet 3.5 generated version of Caitlyn's wonderfully cleaned [SlimOrca Deduped dataset](https://huggingface.co/datasets/cgato/SlimOrcaDedupCleaned), ready for training using the ShareGPT format.
As always, an effort was made to ensure no censoring was applied to the responses. If you find any refusals, let me know! |
hendrydong/preference_700K | hendrydong | 2024-09-28T16:05:26Z | 278 | 15 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2405.07863",
"region:us"
] | [] | 2024-04-18T01:43:10Z | 2 | ---
dataset_info:
features:
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected_score
dtype: float64
- name: chosen_score
dtype: float64
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 2802733004
num_examples: 700000
download_size: 1433169364
dataset_size: 2802733004
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
This is the preference dataset used for the RLHF Workflow project.
It is a mixture of the following components:
* HH-RLHF (Bai et al., 2022a) is a pairwise preference dataset where each sample is accompanied
by a conversation history and two alternative responses written by an early Claude model with 52B
parameters. The preferences of the responses are annotated by humans.
* SHP (Ethayarajh et al., 2022) is sourced from Reddit and includes examples from 18 subreddits, such
as askacademia, askbaking, askengineers, and changemyview. Each example is a Reddit post with a
question/instruction and a pair of top-level comments. One comment is preferred by more Reddit users
than the other. All preferences and responses are provided by humans. Only samples with a score
ratio > 2 are used, and at most 5 pairs are taken for each prompt.
* HelpSteer (Wang et al., 2023). This open-source dataset (Wang et al., 2023) contains prompts,
responses, and five human-annotated attributes (helpfulness, correctness, coherence, complexity, and
verbosity) ranging from 0 to 4. The prompts are generated using a mixture of template-generated and
human-generated methods, while responses are generated by an in-house LLM. The authors generate
up to 4 responses per prompt, and we can construct pairwise comparisons based on them.
* PKU-SafeRLHF (Ji et al., 2024). This dataset (Ji et al., 2024) consists of 30k+ expert comparison
data. Each sample includes two responses to a question and two preference signals for helpfulness and
safety, respectively. The responses are generated by open-source chatbots, and the preference signals
are merged from the results of a 14-category multi-class harm classification.
* UltraFeedback (Cui et al., 2023) consists of 64k prompts from diverse resources (including UltraChat,
ShareGPT, Evol-Instruct, TruthfulQA, FalseQA, and FLAN) and the authors generate 4 responses per
prompt using 4 different LLMs sampled from a diverse set of state-of-the-art open-source LLMs. The
preference is from GPT-4 based on a fine-grained annotation instruction, which contains 4 different
aspects, namely instruction-following, truthfulness, honesty and helpfulness. The dataset collection
strategy of UltraFeedback has also influenced many subsequent works.
* UltraInteract (Yuan et al., 2024a) is a preference dataset designed for complex reasoning tasks. The
authors collect a preference tree for each instruction, with the instruction being the root and each
action a node. A trajectory is a root-to-leaf path consisting of a sequence of actions. Paired correct
and incorrect nodes or trajectories are used for preference learning.
* Distilabel-Capybara is a preference dataset of multi-turn dialogues whose prompts are taken from
Daniele and Suphavadeeprasit (2023), where the responses are generated by open-source LLMs and
preferences are generated by GPT-4.
* Distilabel-Orca is collected similarly with Capybara but with the prompts from Lian et al. (2023a).
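A minimal loading sketch based on the schema in the YAML header above (each of `chosen` and `rejected` is a list of `{role, content}` turns with an associated scalar score):
```python
from datasets import load_dataset

ds = load_dataset("hendrydong/preference_700K", split="train")

ex = ds[0]
# The last turn of each conversation is the assistant reply being compared.
print("chosen  :", ex["chosen_score"], ex["chosen"][-1]["content"][:80])
print("rejected:", ex["rejected_score"], ex["rejected"][-1]["content"][:80])
```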
## Reference
If you found it useful, please cite
```bibtex
@misc{dong2024rlhf,
title={RLHF Workflow: From Reward Modeling to Online RLHF},
author={Hanze Dong and Wei Xiong and Bo Pang and Haoxiang Wang and Han Zhao and Yingbo Zhou and Nan Jiang and Doyen Sahoo and Caiming Xiong and Tong Zhang},
year={2024},
eprint={2405.07863},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
MichaelR207/enron_qa_0922 | MichaelR207 | 2024-09-22T17:42:07Z | 22 | 2 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-09-22T17:22:22Z | 2 | ---
dataset_info:
features:
- name: email
dtype: string
- name: questions
sequence: string
- name: rephrased_questions
sequence: string
- name: gold_answers
sequence: string
- name: alternate_answers
sequence:
sequence: string
- name: incorrect_answers
sequence:
sequence: string
- name: path
dtype: string
- name: user
dtype: string
- name: questions_count
dtype: int64
- name: gold_rationales
sequence: string
- name: alternate_rationales
sequence:
sequence: string
- name: include_email
sequence: int64
splits:
- name: train
num_bytes: 730246275
num_examples: 73772
- name: dev
num_bytes: 347395653
num_examples: 73772
- name: test
num_bytes: 319859974
num_examples: 73772
download_size: 666346042
dataset_size: 1397501902
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
|
jackyhate/text-to-image-2M | jackyhate | 2024-09-22T09:38:54Z | 12,740 | 100 | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"task_categories:image-classification",
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"doi:10.57967/hf/3066",
"region:us"
] | [
"text-to-image",
"image-to-text",
"image-classification"
] | 2024-09-11T14:02:35Z | null | ---
license: mit
task_categories:
- text-to-image
- image-to-text
- image-classification
language:
- en
size_categories:
- 1M<n<10M
---
# text-to-image-2M: A High-Quality, Diverse Text-to-Image Training Dataset
## Overview
`text-to-image-2M` is a curated text-image pair dataset designed for fine-tuning text-to-image models. The dataset consists of approximately 2 million samples, carefully selected and enhanced to meet the high demands of text-to-image model training. The motivation behind creating this dataset stems from the observation that datasets with over 1 million samples tend to produce better fine-tuning results. However, existing publicly available datasets often have limitations:
- **Image Understanding Datasets**: Do not guarantee image quality.
- **Informally Collected or Task-Specific Datasets**: Not category-balanced and often lacking in diversity.
- **Size Constraints**: Available datasets are either too small or too large (subsets sampled from large datasets often lack diversity).
To address these issues, we combined and enhanced existing high-quality datasets using state-of-the-art text-to-image and captioning models to create `text-to-image-2M`. This includes data_512_2M, a 2M-sample 512x512 fine-tuning dataset, and data_1024_10K, a 10K high-quality, high-resolution dataset (for high-resolution adaptation).
## Dataset Composition
### data_512_2M
The dataset is composed of several high-quality subsets, as detailed below:
| **Source** | **Samples** | **Prompts** | **Images** |
|-------------------------------------------------|-------------|--------------------------------------|---------------------------------------------|
| [**LLaVA-next fine-tuning dataset**](https://huggingface.co/datasets/lmms-lab/LLaVA-NeXT-Data) | ~700K | Re-captioned using Qwen2-VL | Original images |
| [**LLaVA-pretrain dataset**](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) | ~500K | Original prompts | Images generated by Flux-dev |
| [**ProGamerGov synthetic dataset (DALL·E 3)**](https://huggingface.co/datasets/ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions) | ~900K | Filtered for validity | Center-cropped and validity-filtered images |
| **GPT-4o generated dataset** | 100K | Generated by GPT-4o | Images generated by Flux-dev |
### data_1024_10K
10K images generated by Flux-dev with prompts generated by GPT-4o
## **Usage**:
The dataset uses the [WebDataset](https://github.com/webdataset/webdataset) format and can be easily accessed and used with HuggingFace's datasets library like so:
```py
from datasets import load_dataset
base_url = "https://huggingface.co/datasets/jackyhate/text-to-image-2M/resolve/main/data_512_2M/data_{i:06d}.tar"
num_shards = 46 # Number of webdataset tar files
urls = [base_url.format(i=i) for i in range(num_shards)]
dataset = load_dataset("webdataset", data_files={"train": urls}, split="train", streaming=True)
# Example of iterating through the dataset
for image in dataset:
print(image) # single image in row with associated columns
break
```
* Note that as long as `streaming=True` in the above example, the dataset does not have to be downloaded in full.
## Acknowledgments
This dataset builds on the work of several open-source projects, including:
- [**LLaVA-next fine-tuning dataset**](https://huggingface.co/datasets/lmms-lab/LLaVA-NeXT-Data)
- [**LLaVA-pretrain dataset**](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain)
- [**ProGamerGov synthetic dataset (DALL·E 3)**](https://huggingface.co/datasets/ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions)
- **GPT-4o**
- **Flux-1.0-dev**
We thank the contributors of these datasets and models for making this project possible. |
mlfoundations/MINT-1T-HTML | mlfoundations | 2024-09-21T01:50:16Z | 234,694 | 83 | [
"task_categories:image-to-text",
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2406.11271",
"region:us",
"multimodal"
] | [
"image-to-text",
"text-generation"
] | 2024-07-21T06:48:51Z | null | ---
license: cc-by-4.0
task_categories:
- image-to-text
- text-generation
language:
- en
tags:
- multimodal
pretty_name: MINT-1T
size_categories:
- 100B<n<1T
configs:
- config_name: data-v1.1
data_files:
- split: train
path: data_v1_1/*.parquet
---
<h1 align="center">
🍃 MINT-1T:<br>Scaling Open-Source Multimodal Data by 10x:<br> A Multimodal Dataset with One Trillion Tokens
</h1>
🍃 MINT-1T is an open-source **M**ultimodal **INT**erleaved dataset with 1 trillion text tokens and 3.4 billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. 🍃 MINT-1T is designed to facilitate research in multimodal pretraining. 🍃 MINT-1T is created by a team from the University of Washington in collaboration with Salesforce Research, other academic institutions including Stanford University, University of Texas at Austin, and University of California Berkeley.
You are currently viewing the HTML subset of 🍃 MINT-1T. For PDF and ArXiv subsets, please refer to the [🍃 MINT-1T collection](https://huggingface.co/collections/mlfoundations/mint-1t-6690216ca4d0df7e518dde1c).

## Updates
### 9/7/24
We have improved MINT-1T (HTML) by removing boilerplate from the header and footer of each document. This new version of the data can be found in directory `data_v1_1` and contains 742B text tokens. The previous version of the data can be found in directory `data_v1_0`.
### 8/8/24
We have updated MINT-1T (HTML) with fixed document URL filtering and additional image safety filtering. As we prioritize safety, we have decided to only release the HTML data from MINT-1T that passes a rigorous image filtering pipeline; we run an additional image safety classifier, the one created by [Datacomp](https://www.datacomp.ai/dcclip/index.html#home), on data already filtered by our [original NSFW image classifier](https://github.com/GantMan/nsfw_model). The newly released MINT-1T (HTML) contains 792B text tokens and 905M documents.
## Dataset Details
### Dataset Sources
- **Repository**: https://github.com/mlfoundations/MINT-1T
- **Paper:** https://arxiv.org/abs/2406.11271
- **Blog:** https://blog.salesforceairesearch.com/mint-1t/
## Uses
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
🍃 MINT-1T is designed to facilitate research in multimodal pretraining. The dataset can be used for training multimodal models that can reason about interleaved text and image sequences such as [Idefics2](https://huggingface.co/HuggingFaceM4/idefics2-8b), [XGen-MM](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-r-v1), and [Chameleon](https://huggingface.co/facebook/chameleon-30b).
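A minimal sketch for inspecting the HTML subset without a full download; the config name `data-v1.1` comes from the YAML header above, and the exact document fields are not specified on this card, so the snippet only prints the keys of the first record:
```python
from datasets import load_dataset

# Stream the HTML subset so the 742B-token corpus stays off local disk.
mint_html = load_dataset(
    "mlfoundations/MINT-1T-HTML", "data-v1.1", split="train", streaming=True
)

for doc in mint_html:
    print(doc.keys())  # inspect the available fields before building a pipeline
    break
```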
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
🍃 MINT-1T was built to make research into large multimodal models more accessible. Using
the dataset to train models that ingest or generate personally identifying information (such
as images of people’s faces and other sensitive content) as well as military applications are all inappropriate use cases of 🍃 MINT-1T.
## Dataset Creation
### Curation Rationale
🍃 MINT-1T was created to address a significant gap in the open-source domain by providing a large-scale multimodal interleaved dataset for pre-training large multimodal models. This dataset aims to be a valuable resource for the research community, facilitating open science in multimodal pretraining.
### Source Data
The dataset is a comprehensive collection of multimodal documents from various sources:
- HTML documents: Filtered from CommonCrawl WARC dumps spanning from 2017 to 2024
- PDF documents: Extracted from CommonCrawl WAT dumps covering 2023 to 2024
- ArXiv documents: A subset of papers from the ArXiv repository
In total, 🍃 MINT-1T contains 1056.8 million documents, broken down as follows:
- 1029.4 million HTML documents
- 24.0 million PDF documents
- 0.6 million ArXiv documents
#### Data Collection and Processing
The data collection and processing involved several steps:
1. Document Extraction:
- HTML documents were parsed from CommonCrawl WARC files
- PDF documents were extracted from CommonCrawl WAT files
- ArXiv papers were directly sourced from ArXiv S3 buckets
2. Filtering Process:
- Applied text quality filters to ensure content relevance and readability
- Removed duplicate content at both paragraph and document levels
- Filtered out undesirable content based on predefined criteria
- Verified image availability and quality for HTML documents
- Limited PDF size to 50MB and 50 pages to manage dataset size and quality
3. Image Processing:
- Used NSFW image detection to remove pornographic or otherwise undesirable images
- Removed images smaller than 150 pixels or larger than 20,000 pixels
- Adjusted aspect ratio thresholds for HTML (2:1) and PDF (3:1) to preserve scientific figures
4. Text Processing:
- Used fasttext for language identification, focusing on English content
- Masked personally identifiable information such as email addresses and IP addresses
- Applied paragraph and document-level deduplication using Bloom filters
5. PDF Specific Processing:
- Used PyMuPDF for parsing PDFs and extracting reading order
- Clustered text blocks based on columns and ordered from top left to bottom right
6. ArXiv Specific Processing:
- Used TexSoup to parse LaTeX source code and interleave images with text
- Cleaned up LaTeX code by removing imports, bibliography, tables, and citation tags
Various open-source tools were utilized in this process, including fasttext, [PyMuPDF](https://github.com/pymupdf/PyMuPDF), and [DCLM](https://www.datacomp.ai/dclm/) and [bff](https://github.com/revbucket/bff) for deduplication and content filtering.
#### Personal and Sensitive Information
Despite sourcing from public web data, significant efforts were made to minimize the inclusion of personal and sensitive information:
- Email addresses and IP addresses were masked to protect privacy
- An NSFW image classifier was used to remove inappropriate visual content
- URLs containing substrings associated with undesirable or sensitive content were filtered out
However, users should be aware that as the data originates from the public web, it may still contain some sensitive or personal information. The dataset creators acknowledge this limitation and advise users to exercise caution and potentially apply additional filtering based on their specific use cases.
## Bias, Risks, and Limitations
Several potential biases, risks, and limitations have been identified:
1. Data Bias: As the dataset is sourced from web crawls, it may inherit biases present in online content.
2. Content Risks: Despite extensive filtering, there's a possibility that some offensive, insensitive, or inappropriate content may remain in the dataset.
3. Image Availability: The dataset relies on external image URLs, which may become unavailable over time due to link rot, potentially affecting the dataset's long-term usability.
4. PDF Parsing Limitations: The current method for extracting reading order from PDFs may not always accurately capture the intended flow, especially for documents with complex layouts.
5. Potential Legal and Ethical Concerns: While efforts were made to respect robots.txt files and remove sensitive information, there may still be content that individuals did not explicitly consent to include.
### Recommendations
Given these considerations, the following recommendations are provided:
1. Additional Filtering: Users are strongly encouraged to apply additional filtering based on their specific use case and ethical considerations.
2. Inappropriate Use Cases: The dataset is not recommended for applications involving the processing or generation of personally identifying information, nor for military applications.
3. Legal Compliance: Users should independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
4. Bias Awareness: Researchers and developers should be cognizant of potential biases in the dataset and consider their impact on model training and outputs.
## License
We release 🍃 MINT-1T under a CC-BY-4.0 license, designating it primarily as a research artifact. While the dataset is freely available, users are responsible for ensuring its legal use in commercial settings. Users must independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
## Citation
```
@article{awadalla2024mint1t,
title={MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens},
author={Anas Awadalla and Le Xue and Oscar Lo and Manli Shu and Hannah Lee and Etash Kumar Guha and Matt Jordan and Sheng Shen and Mohamed Awadalla and Silvio Savarese and Caiming Xiong and Ran Xu and Yejin Choi and Ludwig Schmidt},
year={2024}
}
``` |
mlfoundations/MINT-1T-ArXiv | mlfoundations | 2024-09-19T21:32:59Z | 13,702 | 47 | [
"task_categories:image-to-text",
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"arxiv:2406.11271",
"region:us",
"multimodal"
] | [
"image-to-text",
"text-generation"
] | 2024-06-29T23:50:55Z | null | ---
license: cc-by-4.0
task_categories:
- image-to-text
- text-generation
language:
- en
tags:
- multimodal
pretty_name: MINT-1T
size_categories:
- 100B<n<1T
---
<h1 align="center">
🍃 MINT-1T:<br>Scaling Open-Source Multimodal Data by 10x:<br> A Multimodal Dataset with One Trillion Tokens
</h1>
🍃 MINT-1T is an open-source **M**ultimodal **INT**erleaved dataset with 1 trillion text tokens and 3.4 billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. 🍃 MINT-1T is designed to facilitate research in multimodal pretraining. 🍃 MINT-1T is created by a team from the University of Washington in collaboration with Salesforce Research, other academic institutions including Stanford University, University of Texas at Austin, and University of California Berkeley.
You are currently viewing the ArXiv subset of 🍃 MINT-1T. For HTML and PDF subsets, please refer to the [🍃 MINT-1T collection](https://huggingface.co/collections/mlfoundations/mint-1t-6690216ca4d0df7e518dde1c).

## Dataset Details
### Dataset Sources
- **Repository**: https://github.com/mlfoundations/MINT-1T
- **Paper:** https://arxiv.org/abs/2406.11271
- **Blog:** https://blog.salesforceairesearch.com/mint-1t/
## Uses
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
🍃 MINT-1T is designed to facilitate research in multimodal pretraining. The dataset can be used for training multimodal models that can reason about interleaved text and image sequences such as [Idefics2](https://huggingface.co/HuggingFaceM4/idefics2-8b), [XGen-MM](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-r-v1), and [Chameleon](https://huggingface.co/facebook/chameleon-30b).
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
🍃 MINT-1T was built to make research into large multimodal models more accessible. Using
the dataset to train models that ingest or generate personally identifying information (such
as images of people’s faces and other sensitive content) as well as military applications are all inappropriate use cases of 🍃 MINT-1T.
## Dataset Creation
### Curation Rationale
🍃 MINT-1T was created to address a significant gap in the open-source domain by providing a large-scale multimodal interleaved dataset for pre-training large multimodal models. This dataset aims to be a valuable resource for the research community, facilitating open science in multimodal pretraining.
### Source Data
The dataset is a comprehensive collection of multimodal documents from various sources:
- HTML documents: Filtered from CommonCrawl WARC dumps spanning from 2017 to 2024
- PDF documents: Extracted from CommonCrawl WAT dumps covering 2023 to 2024
- ArXiv documents: A subset of papers from the ArXiv repository
In total, 🍃 MINT-1T contains 1056.8 million documents, broken down as follows:
- 1029.4 million HTML documents
- 24.0 million PDF documents
- 0.6 million ArXiv documents
#### Data Collection and Processing
The data collection and processing involved several steps:
1. Document Extraction:
- HTML documents were parsed from CommonCrawl WARC files
- PDF documents were extracted from CommonCrawl WAT files
- ArXiv papers were directly sourced from ArXiv S3 buckets
2. Filtering Process:
- Applied text quality filters to ensure content relevance and readability
- Removed duplicate content at both paragraph and document levels
- Filtered out undesirable content based on predefined criteria
- Verified image availability and quality for HTML documents
- Limited PDF size to 50MB and 50 pages to manage dataset size and quality
3. Image Processing:
- Used NSFW image detection to remove pornographic or otherwise undesirable images
- Removed images smaller than 150 pixels or larger than 20,000 pixels
- Adjusted aspect ratio thresholds for HTML (2:1) and PDF (3:1) to preserve scientific figures
4. Text Processing:
- Used fasttext for language identification, focusing on English content
- Masked personally identifiable information such as email addresses and IP addresses
- Applied paragraph and document-level deduplication using Bloom filters
5. PDF Specific Processing:
- Used PyMuPDF for parsing PDFs and extracting reading order
- Clustered text blocks based on columns and ordered from top left to bottom right
6. ArXiv Specific Processing:
- Used TexSoup to parse LaTeX source code and interleave images with text
- Cleaned up LaTeX code by removing imports, bibliography, tables, and citation tags
Various open-source tools were utilized in this process, including fasttext, [PyMuPDF](https://github.com/pymupdf/PyMuPDF), and [DCLM](https://www.datacomp.ai/dclm/) and [bff](https://github.com/revbucket/bff) for deduplication and content filtering.
#### Personal and Sensitive Information
Despite sourcing from public web data, significant efforts were made to minimize the inclusion of personal and sensitive information:
- Email addresses and IP addresses were masked to protect privacy
- An NSFW image classifier was used to remove inappropriate visual content
- URLs containing substrings associated with undesirable or sensitive content were filtered out
However, users should be aware that as the data originates from the public web, it may still contain some sensitive or personal information. The dataset creators acknowledge this limitation and advise users to exercise caution and potentially apply additional filtering based on their specific use cases.
## Bias, Risks, and Limitations
Several potential biases, risks, and limitations have been identified:
1. Data Bias: As the dataset is sourced from web crawls, it may inherit biases present in online content.
2. Content Risks: Despite extensive filtering, there's a possibility that some offensive, insensitive, or inappropriate content may remain in the dataset.
3. Image Availability: The dataset relies on external image URLs, which may become unavailable over time due to link rot, potentially affecting the dataset's long-term usability.
4. PDF Parsing Limitations: The current method for extracting reading order from PDFs may not always accurately capture the intended flow, especially for documents with complex layouts.
5. Potential Legal and Ethical Concerns: While efforts were made to respect robots.txt files and remove sensitive information, there may still be content that individuals did not explicitly consent to include.
### Recommendations
Given these considerations, the following recommendations are provided:
1. Additional Filtering: Users are strongly encouraged to apply additional filtering based on their specific use case and ethical considerations.
2. Inappropriate Use Cases: The dataset is not recommended for applications involving the processing or generation of personally identifying information, nor for military applications.
3. Legal Compliance: Users should independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
4. Bias Awareness: Researchers and developers should be cognizant of potential biases in the dataset and consider their impact on model training and outputs.
## License
We release 🍃 MINT-1T under a CC-BY-4.0 license, designating it primarily as a research artifact. While the dataset is freely available, users are responsible for ensuring its legal use in commercial settings. Users must independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
## Citation
```
@article{awadalla2024mint1t,
title={MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens},
author={Anas Awadalla and Le Xue and Oscar Lo and Manli Shu and Hannah Lee and Etash Kumar Guha and Matt Jordan and Sheng Shen and Mohamed Awadalla and Silvio Savarese and Caiming Xiong and Ran Xu and Yejin Choi and Ludwig Schmidt},
year={2024}
}
``` |
mlfoundations/MINT-1T-PDF-CC-2023-14 | mlfoundations | 2024-09-19T21:07:39Z | 63,544 | 1 | [
"task_categories:image-to-text",
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"arxiv:2406.11271",
"region:us",
"multimodal"
] | [
"image-to-text",
"text-generation"
] | 2024-07-12T05:44:44Z | null | ---
license: cc-by-4.0
task_categories:
- image-to-text
- text-generation
language:
- en
tags:
- multimodal
pretty_name: MINT-1T
size_categories:
- 100B<n<1T
---
<h1 align="center">
🍃 MINT-1T:<br>Scaling Open-Source Multimodal Data by 10x:<br> A Multimodal Dataset with One Trillion Tokens
</h1>
🍃 MINT-1T is an open-source **M**ultimodal **INT**erleaved dataset with 1 trillion text tokens and 3.4 billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. 🍃 MINT-1T is designed to facilitate research in multimodal pretraining. 🍃 MINT-1T is created by a team from the University of Washington in collaboration with Salesforce Research, other academic institutions including Stanford University, University of Texas at Austin, and University of California Berkeley.
You are currently viewing a subset of the PDF portion of 🍃 MINT-1T associated with CommonCrawl dump `CC-2023-14`. For other PDF, HTML, and ArXiv subsets, refer to the [🍃 MINT-1T collection](https://huggingface.co/collections/mlfoundations/mint-1t-6690216ca4d0df7e518dde1c).

## Updates
### 9/19/24
We have removed roughly 10% of the PDF samples as there was a mismatch between the frames in the TIFF images and the document metadata.
### 8/8/24
We have become aware that the image hashes in the PDF subset of MINT-1T do not match the images in the documents. We want to emphasize that the images for each document are correct, and only the image hashes in the documents' metadata are mislabeled.
## Dataset Details
### Dataset Sources
- **Repository**: https://github.com/mlfoundations/MINT-1T
- **Paper:** https://arxiv.org/abs/2406.11271
- **Blog:** https://blog.salesforceairesearch.com/mint-1t/
## Uses
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
🍃 MINT-1T is designed to facilitate research in multimodal pretraining. The dataset can be used for training multimodal models that can reason about interleaved text and image sequences such as [Idefics2](https://huggingface.co/HuggingFaceM4/idefics2-8b), [XGen-MM](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-r-v1), and [Chameleon](https://huggingface.co/facebook/chameleon-30b).
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
🍃 MINT-1T was built to make research into large multimodal models more accessible. Using
the dataset to train models that ingest or generate personally identifying information (such
as images of people’s faces and other sensitive content) as well as military applications are all inappropriate use cases of 🍃 MINT-1T.
## Dataset Creation
### Curation Rationale
🍃 MINT-1T was created to address a significant gap in the open-source domain by providing a large-scale multimodal interleaved dataset for pre-training large multimodal models. This dataset aims to be a valuable resource for the research community, facilitating open science in multimodal pretraining.
### Source Data
The dataset is a comprehensive collection of multimodal documents from various sources:
- HTML documents: Filtered from CommonCrawl WARC dumps spanning from 2017 to 2024
- PDF documents: Extracted from CommonCrawl WAT dumps covering 2023 to 2024
- ArXiv documents: A subset of papers from the ArXiv repository
In total, 🍃 MINT-1T contains 1056.8 million documents, broken down as follows:
- 1029.4 million HTML documents
- 24.0 million PDF documents
- 0.6 million ArXiv documents
#### Data Collection and Processing
The data collection and processing involved several steps:
1. Document Extraction:
- HTML documents were parsed from CommonCrawl WARC files
- PDF documents were extracted from CommonCrawl WAT files
- ArXiv papers were directly sourced from ArXiv S3 buckets
2. Filtering Process:
- Applied text quality filters to ensure content relevance and readability
- Removed duplicate content at both paragraph and document levels
- Filtered out undesirable content based on predefined criteria
- Verified image availability and quality for HTML documents
- Limited PDF size to 50MB and 50 pages to manage dataset size and quality
3. Image Processing:
- Used NSFW image detection to remove pornographic or otherwise undesirable images
- Removed images smaller than 150 pixels or larger than 20,000 pixels
- Adjusted aspect ratio thresholds for HTML (2:1) and PDF (3:1) to preserve scientific figures
4. Text Processing:
- Used fasttext for language identification, focusing on English content
- Masked personally identifiable information such as email addresses and IP addresses
- Applied paragraph and document-level deduplication using Bloom filters
5. PDF Specific Processing:
- Used PyMuPDF for parsing PDFs and extracting reading order
- Clustered text blocks based on columns and ordered from top left to bottom right
6. ArXiv Specific Processing:
- Used TexSoup to parse LaTeX source code and interleave images with text
- Cleaned up LaTeX code by removing imports, bibliography, tables, and citation tags
Various open-source tools were utilized in this process, including fasttext and [PyMuPDF](https://github.com/pymupdf/PyMuPDF), as well as [DCLM](https://www.datacomp.ai/dclm/) and [bff](https://github.com/revbucket/bff) for deduplication and content filtering.
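To make the image-filtering thresholds above concrete, the sketch below checks the stated size bounds and aspect-ratio limits. Treat it as illustrative only: the `keep_image` helper, the per-dimension reading of the 150/20,000-pixel bounds, and the exact comparisons are our assumptions, not the released pipeline.
```python
from PIL import Image

# Thresholds quoted in the card; interpreted here as per-dimension bounds (assumption).
MIN_SIDE = 150
MAX_SIDE = 20_000
MAX_ASPECT = {"html": 2.0, "pdf": 3.0}  # card-stated aspect-ratio limits

def keep_image(path: str, source: str = "pdf") -> bool:
    """Return True if an image passes the size and aspect-ratio filters."""
    with Image.open(path) as im:
        w, h = im.size
    if min(w, h) < MIN_SIDE or max(w, h) > MAX_SIDE:
        return False
    aspect = max(w, h) / min(w, h)
    return aspect <= MAX_ASPECT[source]
```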
#### Personal and Sensitive Information
Despite sourcing from public web data, significant efforts were made to minimize the inclusion of personal and sensitive information:
- Email addresses and IP addresses were masked to protect privacy
- An NSFW image classifier was used to remove inappropriate visual content
- URLs containing substrings associated with undesirable or sensitive content were filtered out
However, users should be aware that as the data originates from the public web, it may still contain some sensitive or personal information. The dataset creators acknowledge this limitation and advise users to exercise caution and potentially apply additional filtering based on their specific use cases.
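For orientation, masking email addresses and IP addresses of the kind described above is usually a simple pattern-substitution step. The sketch below is a minimal illustration; the regular expressions and placeholder tokens are our own, not the exact rules applied to 🍃 MINT-1T.
```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def mask_pii(text: str) -> str:
    """Replace email addresses and IPv4 addresses with placeholder tokens."""
    text = EMAIL_RE.sub("<EMAIL>", text)
    return IPV4_RE.sub("<IP_ADDRESS>", text)

print(mask_pii("Contact [email protected] from 192.168.0.1"))
# -> "Contact <EMAIL> from <IP_ADDRESS>"
```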
## Bias, Risks, and Limitations
Several potential biases, risks, and limitations have been identified:
1. Data Bias: As the dataset is sourced from web crawls, it may inherit biases present in online content.
2. Content Risks: Despite extensive filtering, there's a possibility that some offensive, insensitive, or inappropriate content may remain in the dataset.
3. Image Availability: The dataset relies on external image URLs, which may become unavailable over time due to link rot, potentially affecting the dataset's long-term usability.
4. PDF Parsing Limitations: The current method for extracting reading order from PDFs may not always accurately capture the intended flow, especially for documents with complex layouts.
5. Potential Legal and Ethical Concerns: While efforts were made to respect robots.txt files and remove sensitive information, there may still be content that individuals did not explicitly consent to include.
### Recommendations
Given these considerations, the following recommendations are provided:
1. Additional Filtering: Users are strongly encouraged to apply additional filtering based on their specific use case and ethical considerations.
2. Inappropriate Use Cases: The dataset is not recommended for applications involving the processing or generation of personally identifying information, nor for military applications.
3. Legal Compliance: Users should independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
4. Bias Awareness: Researchers and developers should be cognizant of potential biases in the dataset and consider their impact on model training and outputs.
## License
We release 🍃 MINT-1T under a CC-BY-4.0 license, designating it primarily as a research artifact. While the dataset is freely available, users are responsible for ensuring its legal use in commercial settings. Users must independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
## Citation
```
@article{awadalla2024mint1t,
title={MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens},
author={Anas Awadalla and Le Xue and Oscar Lo and Manli Shu and Hannah Lee and Etash Kumar Guha and Matt Jordan and Sheng Shen and Mohamed Awadalla and Silvio Savarese and Caiming Xiong and Ran Xu and Yejin Choi and Ludwig Schmidt},
year={2024}
}
``` |
mlfoundations/MINT-1T-PDF-CC-2023-40 | mlfoundations | 2024-09-19T21:06:59Z | 52,154 | 1 | [
"task_categories:image-to-text",
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:100B<n<1T",
"arxiv:2406.11271",
"region:us",
"multimodal"
] | [
"image-to-text",
"text-generation"
] | 2024-07-12T05:43:23Z | null | ---
license: cc-by-4.0
task_categories:
- image-to-text
- text-generation
language:
- en
tags:
- multimodal
pretty_name: MINT-1T
size_categories:
- 100B<n<1T
---
<h1 align="center">
🍃 MINT-1T:<br>Scaling Open-Source Multimodal Data by 10x:<br> A Multimodal Dataset with One Trillion Tokens
</h1>
🍃 MINT-1T is an open-source **M**ultimodal **INT**erleaved dataset with 1 trillion text tokens and 3.4 billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. 🍃 MINT-1T is designed to facilitate research in multimodal pretraining. 🍃 MINT-1T is created by a team from the University of Washington in collaboration with Salesforce Research, other academic institutions including Stanford University, University of Texas at Austin, and University of California Berkeley.
You are currently viewing a subset of the PDF portion of 🍃 MINT-1T associated with CommonCrawl dump `CC-2023-40`. For other PDF, HTML, and ArXiv subsets, refer to the [🍃 MINT-1T collection](https://huggingface.co/collections/mlfoundations/mint-1t-6690216ca4d0df7e518dde1c).

## Updates
### 9/19/24
We have removed roughly 10% of the PDF samples as there was a mismatch between the frames in the TIFF images and the document metadata.
### 8/8/24
We have become aware that the image hashes in the PDF subset of MINT-1T do not match the images in the documents. We want to emphasize that the images for each document are correct, and only the image hashes in the documents' metadata are mislabeled.
## Dataset Details
### Dataset Sources
- **Repository**: https://github.com/mlfoundations/MINT-1T
- **Paper:** https://arxiv.org/abs/2406.11271
- **Blog:** https://blog.salesforceairesearch.com/mint-1t/
## Uses
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
🍃 MINT-1T is designed to facilitate research in multimodal pretraining. The dataset can be used for training multimodal models that can reason about interleaved text and image sequences such as [Idefics2](https://huggingface.co/HuggingFaceM4/idefics2-8b), [XGen-MM](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-r-v1), and [Chameleon](https://huggingface.co/facebook/chameleon-30b).
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
🍃 MINT-1T was built to make research into large multimodal models more accessible. Using the dataset to train models that ingest or generate personally identifying information (such as images of people’s faces and other sensitive content), as well as using it for military applications, are inappropriate use cases of 🍃 MINT-1T.
## Dataset Creation
### Curation Rationale
🍃 MINT-1T was created to address a significant gap in the open-source domain by providing a large-scale multimodal interleaved dataset for pre-training large multimodal models. This dataset aims to be a valuable resource for the research community, facilitating open science in multimodal pretraining.
### Source Data
The dataset is a comprehensive collection of multimodal documents from various sources:
- HTML documents: Filtered from CommonCrawl WARC dumps spanning from 2017 to 2024
- PDF documents: Extracted from CommonCrawl WAT dumps covering 2023 to 2024
- ArXiv documents: A subset of papers from the ArXiv repository
In total, 🍃 MINT-1T contains 1056.8 million documents, broken down as follows:
- 1029.4 million HTML documents
- 24.0 million PDF documents
- 0.6 million ArXiv documents
#### Data Collection and Processing
The data collection and processing involved several steps:
1. Document Extraction:
- HTML documents were parsed from CommonCrawl WARC files
- PDF documents were extracted from CommonCrawl WAT files
- ArXiv papers were directly sourced from ArXiv S3 buckets
2. Filtering Process:
- Applied text quality filters to ensure content relevance and readability
- Removed duplicate content at both paragraph and document levels
- Filtered out undesirable content based on predefined criteria
- Verified image availability and quality for HTML documents
- Limited PDF size to 50MB and 50 pages to manage dataset size and quality
3. Image Processing:
- Used NSFW image detection to remove pornographic or otherwise undesirable images
- Removed images smaller than 150 pixels or larger than 20,000 pixels
- Adjusted aspect ratio thresholds for HTML (2:1) and PDF (3:1) to preserve scientific figures
4. Text Processing:
- Used fasttext for language identification, focusing on English content
- Masked personally identifiable information such as email addresses and IP addresses
- Applied paragraph and document-level deduplication using Bloom filters
5. PDF Specific Processing:
- Used PyMuPDF for parsing PDFs and extracting reading order
- Clustered text blocks based on columns and ordered from top left to bottom right
6. ArXiv Specific Processing:
- Used TexSoup to parse LaTeX source code and interleave images with text
- Cleaned up LaTeX code by removing imports, bibliography, tables, and citation tags
Various open-source tools were utilized in this process, including fasttext and [PyMuPDF](https://github.com/pymupdf/PyMuPDF), as well as [DCLM](https://www.datacomp.ai/dclm/) and [bff](https://github.com/revbucket/bff) for deduplication and content filtering.
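As a rough illustration of the Bloom-filter deduplication mentioned above (the actual pipeline uses the [bff](https://github.com/revbucket/bff) tool; the tiny filter below, its sizing, and the hash scheme are our own simplification):
```python
import hashlib

class BloomFilter:
    """A minimal Bloom filter: k hash positions over a fixed-size bit array."""

    def __init__(self, num_bits: int = 1 << 20, num_hashes: int = 5):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, item: str):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

def dedup_paragraphs(paragraphs, seen: BloomFilter):
    """Yield paragraphs whose normalized text has not been seen before."""
    for p in paragraphs:
        key = " ".join(p.split()).lower()
        if key not in seen:
            seen.add(key)
            yield p
```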
#### Personal and Sensitive Information
Despite sourcing from public web data, significant efforts were made to minimize the inclusion of personal and sensitive information:
- Email addresses and IP addresses were masked to protect privacy
- An NSFW image classifier was used to remove inappropriate visual content
- URLs containing substrings associated with undesirable or sensitive content were filtered out
However, users should be aware that as the data originates from the public web, it may still contain some sensitive or personal information. The dataset creators acknowledge this limitation and advise users to exercise caution and potentially apply additional filtering based on their specific use cases.
## Bias, Risks, and Limitations
Several potential biases, risks, and limitations have been identified:
1. Data Bias: As the dataset is sourced from web crawls, it may inherit biases present in online content.
2. Content Risks: Despite extensive filtering, there's a possibility that some offensive, insensitive, or inappropriate content may remain in the dataset.
3. Image Availability: The dataset relies on external image URLs, which may become unavailable over time due to link rot, potentially affecting the dataset's long-term usability.
4. PDF Parsing Limitations: The current method for extracting reading order from PDFs may not always accurately capture the intended flow, especially for documents with complex layouts.
5. Potential Legal and Ethical Concerns: While efforts were made to respect robots.txt files and remove sensitive information, there may still be content that individuals did not explicitly consent to include.
### Recommendations
Given these considerations, the following recommendations are provided:
1. Additional Filtering: Users are strongly encouraged to apply additional filtering based on their specific use case and ethical considerations.
2. Inappropriate Use Cases: The dataset is not recommended for applications involving the processing or generation of personally identifying information, nor for military applications.
3. Legal Compliance: Users should independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
4. Bias Awareness: Researchers and developers should be cognizant of potential biases in the dataset and consider their impact on model training and outputs.
## License
We release 🍃 MINT-1T under a CC-BY-4.0 license, designating it primarily as a research artifact. While the dataset is freely available, users are responsible for ensuring its legal use in commercial settings. Users must independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
## Citation
```
@article{awadalla2024mint1t,
title={MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens},
author={Anas Awadalla and Le Xue and Oscar Lo and Manli Shu and Hannah Lee and Etash Kumar Guha and Matt Jordan and Sheng Shen and Mohamed Awadalla and Silvio Savarese and Caiming Xiong and Ran Xu and Yejin Choi and Ludwig Schmidt},
year={2024}
}
``` |
mlfoundations/MINT-1T-PDF-CC-2023-50 | mlfoundations | 2024-09-19T21:06:23Z | 28,482 | 3 | [
"task_categories:image-to-text",
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"arxiv:2406.11271",
"region:us",
"multimodal"
] | [
"image-to-text",
"text-generation"
] | 2024-07-12T05:42:22Z | null | ---
license: cc-by-4.0
task_categories:
- image-to-text
- text-generation
language:
- en
tags:
- multimodal
pretty_name: MINT-1T
size_categories:
- 100B<n<1T
---
<h1 align="center">
🍃 MINT-1T:<br>Scaling Open-Source Multimodal Data by 10x:<br> A Multimodal Dataset with One Trillion Tokens
</h1>
🍃 MINT-1T is an open-source **M**ultimodal **INT**erleaved dataset with 1 trillion text tokens and 3.4 billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. 🍃 MINT-1T is designed to facilitate research in multimodal pretraining. 🍃 MINT-1T is created by a team from the University of Washington in collaboration with Salesforce Research, other academic institutions including Stanford University, University of Texas at Austin, and University of California Berkeley.
You are currently viewing a subset of the PDF portion of 🍃 MINT-1T associated with CommonCrawl dump `CC-2023-50`. For other PDF, HTML, and ArXiv subsets, refer to the [🍃 MINT-1T collection](https://huggingface.co/collections/mlfoundations/mint-1t-6690216ca4d0df7e518dde1c).

## Updates
### 9/19/24
We have removed roughly 10% of the PDF samples as there was a mismatch between the frames in the TIFF images and the document metadata.
### 8/8/24
We have become aware that the image hashes in the PDF subset of MINT-1T do not match the images in the documents. We want to emphasize that the images for each document are correct, and only the image hashes in the documents' metadata are mislabeled.
## Dataset Details
### Dataset Sources
- **Repository**: https://github.com/mlfoundations/MINT-1T
- **Paper:** https://arxiv.org/abs/2406.11271
- **Blog:** https://blog.salesforceairesearch.com/mint-1t/
## Uses
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
🍃 MINT-1T is designed to facilitate research in multimodal pretraining. The dataset can be used for training multimodal models that can reason about interleaved text and image sequences such as [Idefics2](https://huggingface.co/HuggingFaceM4/idefics2-8b), [XGen-MM](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-r-v1), and [Chameleon](https://huggingface.co/facebook/chameleon-30b).
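If you only want to inspect a few documents from this subset, streaming through the `datasets` library avoids downloading the full dump. A minimal sketch, assuming a `train` split and printing only the record keys since the exact fields depend on the release:
```python
from datasets import load_dataset

# Stream the CC-2023-50 PDF subset without downloading it in full.
ds = load_dataset("mlfoundations/MINT-1T-PDF-CC-2023-50", split="train", streaming=True)

# Peek at one document to see which fields are available.
for sample in ds.take(1):
    print(sample.keys())
```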
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
🍃 MINT-1T was built to make research into large multimodal models more accessible. Using the dataset to train models that ingest or generate personally identifying information (such as images of people’s faces and other sensitive content), as well as using it for military applications, are inappropriate use cases of 🍃 MINT-1T.
## Dataset Creation
### Curation Rationale
🍃 MINT-1T was created to address a significant gap in the open-source domain by providing a large-scale multimodal interleaved dataset for pre-training large multimodal models. This dataset aims to be a valuable resource for the research community, facilitating open science in multimodal pretraining.
### Source Data
The dataset is a comprehensive collection of multimodal documents from various sources:
- HTML documents: Filtered from CommonCrawl WARC dumps spanning from 2017 to 2024
- PDF documents: Extracted from CommonCrawl WAT dumps covering 2023 to 2024
- ArXiv documents: A subset of papers from the ArXiv repository
In total, 🍃 MINT-1T contains 1056.8 million documents, broken down as follows:
- 1029.4 million HTML documents
- 24.0 million PDF documents
- 0.6 million ArXiv documents
#### Data Collection and Processing
The data collection and processing involved several steps:
1. Document Extraction:
- HTML documents were parsed from CommonCrawl WARC files
- PDF documents were extracted from CommonCrawl WAT files
- ArXiv papers were directly sourced from ArXiv S3 buckets
2. Filtering Process:
- Applied text quality filters to ensure content relevance and readability
- Removed duplicate content at both paragraph and document levels
- Filtered out undesirable content based on predefined criteria
- Verified image availability and quality for HTML documents
- Limited PDF size to 50MB and 50 pages to manage dataset size and quality
3. Image Processing:
- Used NSFW image detection to remove pornographic or otherwise undesirable images
- Removed images smaller than 150 pixels or larger than 20,000 pixels
- Adjusted aspect ratio thresholds for HTML (2:1) and PDF (3:1) to preserve scientific figures
4. Text Processing:
- Used fasttext for language identification, focusing on English content
- Masked personally identifiable information such as email addresses and IP addresses
- Applied paragraph and document-level deduplication using Bloom filters
5. PDF Specific Processing:
- Used PyMuPDF for parsing PDFs and extracting reading order
- Clustered text blocks based on columns and ordered from top left to bottom right
6. ArXiv Specific Processing:
- Used TexSoup to parse LaTeX source code and interleave images with text
- Cleaned up LaTeX code by removing imports, bibliography, tables, and citation tags
Various open-source tools were utilized in this process, including fasttext and [PyMuPDF](https://github.com/pymupdf/PyMuPDF), as well as [DCLM](https://www.datacomp.ai/dclm/) and [bff](https://github.com/revbucket/bff) for deduplication and content filtering.
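To make the PDF reading-order step concrete, the sketch below orders PyMuPDF text blocks with a naive two-column heuristic; the half-page column split and the example file name are our own simplifications, not the exact clustering used for 🍃 MINT-1T.
```python
import fitz  # PyMuPDF

def page_text_in_reading_order(page: "fitz.Page") -> str:
    """Order text blocks roughly column-by-column, top-left to bottom-right."""
    blocks = [b for b in page.get_text("blocks") if b[6] == 0]  # keep text blocks only
    mid_x = page.rect.width / 2
    # Sort by (column, vertical position, horizontal position).
    blocks.sort(key=lambda b: (0 if b[0] < mid_x else 1, b[1], b[0]))
    return "\n".join(b[4].strip() for b in blocks)

doc = fitz.open("example.pdf")  # hypothetical input file
print(page_text_in_reading_order(doc[0]))
```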
#### Personal and Sensitive Information
Despite sourcing from public web data, significant efforts were made to minimize the inclusion of personal and sensitive information:
- Email addresses and IP addresses were masked to protect privacy
- An NSFW image classifier was used to remove inappropriate visual content
- URLs containing substrings associated with undesirable or sensitive content were filtered out
However, users should be aware that as the data originates from the public web, it may still contain some sensitive or personal information. The dataset creators acknowledge this limitation and advise users to exercise caution and potentially apply additional filtering based on their specific use cases.
## Bias, Risks, and Limitations
Several potential biases, risks, and limitations have been identified:
1. Data Bias: As the dataset is sourced from web crawls, it may inherit biases present in online content.
2. Content Risks: Despite extensive filtering, there's a possibility that some offensive, insensitive, or inappropriate content may remain in the dataset.
3. Image Availability: The dataset relies on external image URLs, which may become unavailable over time due to link rot, potentially affecting the dataset's long-term usability.
4. PDF Parsing Limitations: The current method for extracting reading order from PDFs may not always accurately capture the intended flow, especially for documents with complex layouts.
5. Potential Legal and Ethical Concerns: While efforts were made to respect robots.txt files and remove sensitive information, there may still be content that individuals did not explicitly consent to include.
### Recommendations
Given these considerations, the following recommendations are provided:
1. Additional Filtering: Users are strongly encouraged to apply additional filtering based on their specific use case and ethical considerations.
2. Inappropriate Use Cases: The dataset is not recommended for applications involving the processing or generation of personally identifying information, nor for military applications.
3. Legal Compliance: Users should independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
4. Bias Awareness: Researchers and developers should be cognizant of potential biases in the dataset and consider their impact on model training and outputs.
## License
We release 🍃 MINT-1T under a CC-BY-4.0 license, designating it primarily as a research artifact. While the dataset is freely available, users are responsible for ensuring its legal use in commercial settings. Users must independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
## Citation
```
@article{awadalla2024mint1t,
title={MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens},
author={Anas Awadalla and Le Xue and Oscar Lo and Manli Shu and Hannah Lee and Etash Kumar Guha and Matt Jordan and Sheng Shen and Mohamed Awadalla and Silvio Savarese and Caiming Xiong and Ran Xu and Yejin Choi and Ludwig Schmidt},
year={2024}
}
``` |
mlfoundations/MINT-1T-PDF-CC-2024-18 | mlfoundations | 2024-09-19T21:02:55Z | 41,126 | 19 | [
"task_categories:image-to-text",
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:100B<n<1T",
"arxiv:2406.11271",
"region:us",
"multimodal"
] | [
"image-to-text",
"text-generation"
] | 2024-07-15T03:19:33Z | null | ---
license: cc-by-4.0
task_categories:
- image-to-text
- text-generation
language:
- en
tags:
- multimodal
pretty_name: MINT-1T
size_categories:
- 100B<n<1T
configs:
- config_name: default
data_files:
- split: train
path: CC-MAIN-*/*
---
<h1 align="center">
🍃 MINT-1T:<br>Scaling Open-Source Multimodal Data by 10x:<br> A Multimodal Dataset with One Trillion Tokens
</h1>
🍃 MINT-1T is an open-source **M**ultimodal **INT**erleaved dataset with 1 trillion text tokens and 3.4 billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. 🍃 MINT-1T is designed to facilitate research in multimodal pretraining. 🍃 MINT-1T is created by a team from the University of Washington in collaboration with Salesforce Research, other academic institutions including Stanford University, University of Texas at Austin, and University of California Berkeley.
You are currently viewing a subset of the PDF portion of 🍃 MINT-1T associated with CommonCrawl dump `CC-2024-18`. For other PDF, HTML, and ArXiv subsets, refer to the [🍃 MINT-1T collection](https://huggingface.co/collections/mlfoundations/mint-1t-6690216ca4d0df7e518dde1c).

## Updates
### 9/19/24
We have removed roughly 10% of the PDF samples as there was a mismatch between the frames in the TIFF images and the document metadata.
### 8/8/24
We have become aware that the image hashes in the PDF subset of MINT-1T do not match the images in the documents. We want to emphasize that the images for each document are correct, and only the image hashes in the documents' metadata are mislabeled.
## Dataset Details
### Dataset Sources
- **Repository**: https://github.com/mlfoundations/MINT-1T
- **Paper:** https://arxiv.org/abs/2406.11271
- **Blog:** https://blog.salesforceairesearch.com/mint-1t/
## Uses
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
🍃 MINT-1T is designed to facilitate research in multimodal pretraining. The dataset can be used for training multimodal models that can reason about interleaved text and image sequences such as [Idefics2](https://huggingface.co/HuggingFaceM4/idefics2-8b), [XGen-MM](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-r-v1), and [Chameleon](https://huggingface.co/facebook/chameleon-30b).
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
🍃 MINT-1T was built to make research into large multimodal models more accessible. Using the dataset to train models that ingest or generate personally identifying information (such as images of people’s faces and other sensitive content), as well as using it for military applications, are inappropriate use cases of 🍃 MINT-1T.
## Dataset Creation
### Curation Rationale
🍃 MINT-1T was created to address a significant gap in the open-source domain by providing a large-scale multimodal interleaved dataset for pre-training large multimodal models. This dataset aims to be a valuable resource for the research community, facilitating open science in multimodal pretraining.
### Source Data
The dataset is a comprehensive collection of multimodal documents from various sources:
- HTML documents: Filtered from CommonCrawl WARC dumps spanning from 2017 to 2024
- PDF documents: Extracted from CommonCrawl WAT dumps covering 2023 to 2024
- ArXiv documents: A subset of papers from the ArXiv repository
In total, 🍃 MINT-1T contains 1056.8 million documents, broken down as follows:
- 1029.4 million HTML documents
- 24.0 million PDF documents
- 0.6 million ArXiv documents
#### Data Collection and Processing
The data collection and processing involved several steps:
1. Document Extraction:
- HTML documents were parsed from CommonCrawl WARC files
- PDF documents were extracted from CommonCrawl WAT files
- ArXiv papers were directly sourced from ArXiv S3 buckets
2. Filtering Process:
- Applied text quality filters to ensure content relevance and readability
- Removed duplicate content at both paragraph and document levels
- Filtered out undesirable content based on predefined criteria
- Verified image availability and quality for HTML documents
- Limited PDF size to 50MB and 50 pages to manage dataset size and quality
3. Image Processing:
- Used NSFW image detection to remove pornographic or otherwise undesirable images
- Removed images smaller than 150 pixels or larger than 20,000 pixels
- Adjusted aspect ratio thresholds for HTML (2:1) and PDF (3:1) to preserve scientific figures
4. Text Processing:
- Used fasttext for language identification, focusing on English content
- Masked personally identifiable information such as email addresses and IP addresses
- Applied paragraph and document-level deduplication using Bloom filters
5. PDF Specific Processing:
- Used PyMuPDF for parsing PDFs and extracting reading order
- Clustered text blocks based on columns and ordered from top left to bottom right
6. ArXiv Specific Processing:
- Used TexSoup to parse LaTeX source code and interleave images with text
- Cleaned up LaTeX code by removing imports, bibliography, tables, and citation tags
Various open-source tools were utilized in this process, including fasttext and [PyMuPDF](https://github.com/pymupdf/PyMuPDF), as well as [DCLM](https://www.datacomp.ai/dclm/) and [bff](https://github.com/revbucket/bff) for deduplication and content filtering.
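For reference, language identification with fasttext typically loads the released `lid.176.bin` model and keeps documents whose top prediction is English; the confidence threshold below is our own illustrative choice, not a value taken from the 🍃 MINT-1T pipeline.
```python
import fasttext

# Download lid.176.bin from https://fasttext.cc/docs/en/language-identification.html first.
model = fasttext.load_model("lid.176.bin")

def is_english(text: str, min_confidence: float = 0.65) -> bool:
    """Return True if fasttext predicts English above the confidence threshold."""
    # fasttext's predict() rejects newlines, so flatten the text first.
    labels, probs = model.predict(text.replace("\n", " "), k=1)
    return labels[0] == "__label__en" and probs[0] >= min_confidence
```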
#### Personal and Sensitive Information
Despite sourcing from public web data, significant efforts were made to minimize the inclusion of personal and sensitive information:
- Email addresses and IP addresses were masked to protect privacy
- An NSFW image classifier was used to remove inappropriate visual content
- URLs containing substrings associated with undesirable or sensitive content were filtered out
However, users should be aware that as the data originates from the public web, it may still contain some sensitive or personal information. The dataset creators acknowledge this limitation and advise users to exercise caution and potentially apply additional filtering based on their specific use cases.
## Bias, Risks, and Limitations
Several potential biases, risks, and limitations have been identified:
1. Data Bias: As the dataset is sourced from web crawls, it may inherit biases present in online content.
2. Content Risks: Despite extensive filtering, there's a possibility that some offensive, insensitive, or inappropriate content may remain in the dataset.
3. Image Availability: The dataset relies on external image URLs, which may become unavailable over time due to link rot, potentially affecting the dataset's long-term usability.
4. PDF Parsing Limitations: The current method for extracting reading order from PDFs may not always accurately capture the intended flow, especially for documents with complex layouts.
5. Potential Legal and Ethical Concerns: While efforts were made to respect robots.txt files and remove sensitive information, there may still be content that individuals did not explicitly consent to include.
### Recommendations
Given these considerations, the following recommendations are provided:
1. Additional Filtering: Users are strongly encouraged to apply additional filtering based on their specific use case and ethical considerations.
2. Inappropriate Use Cases: The dataset is not recommended for applications involving the processing or generation of personally identifying information, nor for military applications.
3. Legal Compliance: Users should independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
4. Bias Awareness: Researchers and developers should be cognizant of potential biases in the dataset and consider their impact on model training and outputs.
## License
We release 🍃 MINT-1T under a CC-BY-4.0 license, designating it primarily as a research artifact. While the dataset is freely available, users are responsible for ensuring its legal use in commercial settings. Users must independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
## Citation
```
@article{awadalla2024mint1t,
title={MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens},
author={Anas Awadalla and Le Xue and Oscar Lo and Manli Shu and Hannah Lee and Etash Kumar Guha and Matt Jordan and Sheng Shen and Mohamed Awadalla and Silvio Savarese and Caiming Xiong and Ran Xu and Yejin Choi and Ludwig Schmidt},
year={2024}
}
``` |
MMMU/MMMU | MMMU | 2024-09-19T17:11:03Z | 17,607 | 251 | [
"task_categories:question-answering",
"task_categories:visual-question-answering",
"task_categories:multiple-choice",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2311.16502",
"region:us",
"biology",
"medical",
"finance",
"chemistry",
"music",
"art",
"art_theory",
"design",
"business",
"accounting",
"economics",
"manage",
"marketing",
"health",
"medicine",
"basic_medical_science",
"clinical",
"pharmacy",
"public_health",
"humanities",
"social_science",
"history",
"literature",
"sociology",
"psychology",
"science",
"geography",
"math",
"physics",
"engineering",
"agriculture",
"architecture",
"computer_science",
"electronics",
"energy_and_power",
"materials",
"mechanical_engineering"
] | [
"question-answering",
"visual-question-answering",
"multiple-choice"
] | 2023-11-27T17:52:01Z | null | ---
language:
- en
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- question-answering
- visual-question-answering
- multiple-choice
pretty_name: mmmu
dataset_info:
- config_name: Accounting
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 262599.0
num_examples: 5
- name: validation
num_bytes: 1598285.0
num_examples: 30
- name: test
num_bytes: 22135625.0
num_examples: 380
download_size: 37363379
dataset_size: 23996509.0
- config_name: Agriculture
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 22082656.0
num_examples: 5
- name: validation
num_bytes: 119217558.0
num_examples: 30
- name: test
num_bytes: 993664077.0
num_examples: 287
download_size: 1158036990
dataset_size: 1134964291.0
- config_name: Architecture_and_Engineering
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 137750.0
num_examples: 5
- name: validation
num_bytes: 721378.0
num_examples: 30
- name: test
num_bytes: 16054607.0
num_examples: 551
download_size: 48763955
dataset_size: 16913735.0
- config_name: Art
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 6241184.0
num_examples: 5
- name: validation
num_bytes: 29934534.0
num_examples: 30
- name: test
num_bytes: 237801390.0
num_examples: 231
download_size: 585798641
dataset_size: 273977108.0
- config_name: Art_Theory
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 7435106.0
num_examples: 5
- name: validation
num_bytes: 33481558.0
num_examples: 30
- name: test
num_bytes: 553174647.0
num_examples: 429
download_size: 930525695
dataset_size: 594091311.0
- config_name: Basic_Medical_Science
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 814310.0
num_examples: 5
- name: validation
num_bytes: 4125930.0
num_examples: 30
- name: test
num_bytes: 48125891.0
num_examples: 326
download_size: 84666454
dataset_size: 53066131.0
- config_name: Biology
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 574342.0
num_examples: 5
- name: validation
num_bytes: 8491863.0
num_examples: 30
- name: test
num_bytes: 132966151.0
num_examples: 345
download_size: 410242502
dataset_size: 142032356.0
- config_name: Chemistry
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 262397.0
num_examples: 5
- name: validation
num_bytes: 1518573.0
num_examples: 30
- name: test
num_bytes: 37219529.0
num_examples: 603
download_size: 108345562
dataset_size: 39000499.0
- config_name: Clinical_Medicine
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 1467945.0
num_examples: 5
- name: validation
num_bytes: 10882484.0
num_examples: 30
- name: test
num_bytes: 98201863.0
num_examples: 325
download_size: 160611488
dataset_size: 110552292.0
- config_name: Computer_Science
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 440523.0
num_examples: 5
- name: validation
num_bytes: 2072018.0
num_examples: 30
- name: test
num_bytes: 32047381.0
num_examples: 371
download_size: 55640991
dataset_size: 34559922.0
- config_name: Design
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 2259873.0
num_examples: 5
- name: validation
num_bytes: 17923120.0
num_examples: 30
- name: test
num_bytes: 77676331.0
num_examples: 169
download_size: 142866617
dataset_size: 97859324.0
- config_name: Diagnostics_and_Laboratory_Medicine
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 2056117.0
num_examples: 5
- name: validation
num_bytes: 37106233.0
num_examples: 30
- name: test
num_bytes: 157003069.0
num_examples: 162
download_size: 603957093
dataset_size: 196165419.0
- config_name: Economics
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 171434.0
num_examples: 5
- name: validation
num_bytes: 1487048.0
num_examples: 30
- name: test
num_bytes: 11852300.0
num_examples: 267
download_size: 20777635
dataset_size: 13510782.0
- config_name: Electronics
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 123632.0
num_examples: 5
- name: validation
num_bytes: 641377.0
num_examples: 30
- name: test
num_bytes: 5717686.0
num_examples: 256
download_size: 11602832
dataset_size: 6482695.0
- config_name: Energy_and_Power
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 105006.0
num_examples: 5
- name: validation
num_bytes: 1641935.0
num_examples: 30
- name: test
num_bytes: 14748428.0
num_examples: 432
download_size: 35246567
dataset_size: 16495369.0
- config_name: Finance
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 296124.0
num_examples: 5
- name: validation
num_bytes: 1071060.0
num_examples: 30
- name: test
num_bytes: 12065803.0
num_examples: 355
download_size: 29551521
dataset_size: 13432987.0
- config_name: Geography
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 1494060.0
num_examples: 5
- name: validation
num_bytes: 6671316.0
num_examples: 30
- name: test
num_bytes: 137218400.0
num_examples: 565
download_size: 374766631
dataset_size: 145383776.0
- config_name: History
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 1444231.0
num_examples: 5
- name: validation
num_bytes: 8819857.0
num_examples: 30
- name: test
num_bytes: 115228815.0
num_examples: 278
download_size: 232549641
dataset_size: 125492903.0
- config_name: Literature
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 2451201.0
num_examples: 5
- name: validation
num_bytes: 14241046.0
num_examples: 30
- name: test
num_bytes: 50301541.0
num_examples: 112
download_size: 132145895
dataset_size: 66993788.0
- config_name: Manage
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 449514.0
num_examples: 5
- name: validation
num_bytes: 3277436.0
num_examples: 30
- name: test
num_bytes: 29963963.0
num_examples: 245
download_size: 51186888
dataset_size: 33690913.0
- config_name: Marketing
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 116960.0
num_examples: 5
- name: validation
num_bytes: 1472981.0
num_examples: 30
- name: test
num_bytes: 7732976.0
num_examples: 181
download_size: 13146078
dataset_size: 9322917.0
- config_name: Materials
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 239632.0
num_examples: 5
- name: validation
num_bytes: 2305223.0
num_examples: 30
- name: test
num_bytes: 25256854.0
num_examples: 458
download_size: 105773156
dataset_size: 27801709.0
- config_name: Math
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 175839.0
num_examples: 5
- name: validation
num_bytes: 1444496.0
num_examples: 30
- name: test
num_bytes: 27701845.0
num_examples: 505
download_size: 174098418
dataset_size: 29322180.0
- config_name: Mechanical_Engineering
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 152542.0
num_examples: 5
- name: validation
num_bytes: 874988.0
num_examples: 30
- name: test
num_bytes: 15093746.0
num_examples: 429
download_size: 30450114
dataset_size: 16121276.0
- config_name: Music
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 1417615.0
num_examples: 5
- name: validation
num_bytes: 9359372.0
num_examples: 30
- name: test
num_bytes: 134096770.0
num_examples: 334
download_size: 174725052
dataset_size: 144873757.0
- config_name: Pharmacy
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 207924.0
num_examples: 5
- name: validation
num_bytes: 1656342.0
num_examples: 30
- name: test
num_bytes: 31866248.0
num_examples: 430
download_size: 62721263
dataset_size: 33730514.0
- config_name: Physics
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 233734.0
num_examples: 5
- name: validation
num_bytes: 1114130.0
num_examples: 30
- name: test
num_bytes: 15905705.0
num_examples: 408
download_size: 35238571
dataset_size: 17253569.0
- config_name: Psychology
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 600864.0
num_examples: 5
- name: validation
num_bytes: 4403886.0
num_examples: 30
- name: test
num_bytes: 53813915.0
num_examples: 305
download_size: 102466671
dataset_size: 58818665.0
- config_name: Public_Health
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 234781.0
num_examples: 5
- name: validation
num_bytes: 1508761.0
num_examples: 30
- name: test
num_bytes: 32150088.0
num_examples: 509
download_size: 48231609
dataset_size: 33893630.0
- config_name: Sociology
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 3769220.0
num_examples: 5
- name: validation
num_bytes: 18455336.0
num_examples: 30
- name: test
num_bytes: 144301123.0
num_examples: 252
download_size: 310313826
dataset_size: 166525679.0
configs:
- config_name: Accounting
data_files:
- split: dev
path: Accounting/dev-*
- split: validation
path: Accounting/validation-*
- split: test
path: Accounting/test-*
- config_name: Agriculture
data_files:
- split: dev
path: Agriculture/dev-*
- split: validation
path: Agriculture/validation-*
- split: test
path: Agriculture/test-*
- config_name: Architecture_and_Engineering
data_files:
- split: dev
path: Architecture_and_Engineering/dev-*
- split: validation
path: Architecture_and_Engineering/validation-*
- split: test
path: Architecture_and_Engineering/test-*
- config_name: Art
data_files:
- split: dev
path: Art/dev-*
- split: validation
path: Art/validation-*
- split: test
path: Art/test-*
- config_name: Art_Theory
data_files:
- split: dev
path: Art_Theory/dev-*
- split: validation
path: Art_Theory/validation-*
- split: test
path: Art_Theory/test-*
- config_name: Basic_Medical_Science
data_files:
- split: dev
path: Basic_Medical_Science/dev-*
- split: validation
path: Basic_Medical_Science/validation-*
- split: test
path: Basic_Medical_Science/test-*
- config_name: Biology
data_files:
- split: dev
path: Biology/dev-*
- split: validation
path: Biology/validation-*
- split: test
path: Biology/test-*
- config_name: Chemistry
data_files:
- split: dev
path: Chemistry/dev-*
- split: validation
path: Chemistry/validation-*
- split: test
path: Chemistry/test-*
- config_name: Clinical_Medicine
data_files:
- split: dev
path: Clinical_Medicine/dev-*
- split: validation
path: Clinical_Medicine/validation-*
- split: test
path: Clinical_Medicine/test-*
- config_name: Computer_Science
data_files:
- split: dev
path: Computer_Science/dev-*
- split: validation
path: Computer_Science/validation-*
- split: test
path: Computer_Science/test-*
- config_name: Design
data_files:
- split: dev
path: Design/dev-*
- split: validation
path: Design/validation-*
- split: test
path: Design/test-*
- config_name: Diagnostics_and_Laboratory_Medicine
data_files:
- split: dev
path: Diagnostics_and_Laboratory_Medicine/dev-*
- split: validation
path: Diagnostics_and_Laboratory_Medicine/validation-*
- split: test
path: Diagnostics_and_Laboratory_Medicine/test-*
- config_name: Economics
data_files:
- split: dev
path: Economics/dev-*
- split: validation
path: Economics/validation-*
- split: test
path: Economics/test-*
- config_name: Electronics
data_files:
- split: dev
path: Electronics/dev-*
- split: validation
path: Electronics/validation-*
- split: test
path: Electronics/test-*
- config_name: Energy_and_Power
data_files:
- split: dev
path: Energy_and_Power/dev-*
- split: validation
path: Energy_and_Power/validation-*
- split: test
path: Energy_and_Power/test-*
- config_name: Finance
data_files:
- split: dev
path: Finance/dev-*
- split: validation
path: Finance/validation-*
- split: test
path: Finance/test-*
- config_name: Geography
data_files:
- split: dev
path: Geography/dev-*
- split: validation
path: Geography/validation-*
- split: test
path: Geography/test-*
- config_name: History
data_files:
- split: dev
path: History/dev-*
- split: validation
path: History/validation-*
- split: test
path: History/test-*
- config_name: Literature
data_files:
- split: dev
path: Literature/dev-*
- split: validation
path: Literature/validation-*
- split: test
path: Literature/test-*
- config_name: Manage
data_files:
- split: dev
path: Manage/dev-*
- split: validation
path: Manage/validation-*
- split: test
path: Manage/test-*
- config_name: Marketing
data_files:
- split: dev
path: Marketing/dev-*
- split: validation
path: Marketing/validation-*
- split: test
path: Marketing/test-*
- config_name: Materials
data_files:
- split: dev
path: Materials/dev-*
- split: validation
path: Materials/validation-*
- split: test
path: Materials/test-*
- config_name: Math
data_files:
- split: dev
path: Math/dev-*
- split: validation
path: Math/validation-*
- split: test
path: Math/test-*
- config_name: Mechanical_Engineering
data_files:
- split: dev
path: Mechanical_Engineering/dev-*
- split: validation
path: Mechanical_Engineering/validation-*
- split: test
path: Mechanical_Engineering/test-*
- config_name: Music
data_files:
- split: dev
path: Music/dev-*
- split: validation
path: Music/validation-*
- split: test
path: Music/test-*
- config_name: Pharmacy
data_files:
- split: dev
path: Pharmacy/dev-*
- split: validation
path: Pharmacy/validation-*
- split: test
path: Pharmacy/test-*
- config_name: Physics
data_files:
- split: dev
path: Physics/dev-*
- split: validation
path: Physics/validation-*
- split: test
path: Physics/test-*
- config_name: Psychology
data_files:
- split: dev
path: Psychology/dev-*
- split: validation
path: Psychology/validation-*
- split: test
path: Psychology/test-*
- config_name: Public_Health
data_files:
- split: dev
path: Public_Health/dev-*
- split: validation
path: Public_Health/validation-*
- split: test
path: Public_Health/test-*
- config_name: Sociology
data_files:
- split: dev
path: Sociology/dev-*
- split: validation
path: Sociology/validation-*
- split: test
path: Sociology/test-*
tags:
- biology
- medical
- finance
- chemistry
- music
- art
- art_theory
- design
- music
- business
- accounting
- economics
- finance
- manage
- marketing
- health
- medicine
- basic_medical_science
- clinical
- pharmacy
- public_health
- humanities
- social_science
- history
- literature
- sociology
- psychology
- science
- biology
- chemistry
- geography
- math
- physics
- engineering
- agriculture
- architecture
- computer_science
- electronics
- energy_and_power
- materials
- mechanical_engineering
---
# MMMU (A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI)
[**🌐 Homepage**](https://mmmu-benchmark.github.io/) | [**🏆 Leaderboard**](https://mmmu-benchmark.github.io/#leaderboard) | [**🤗 Dataset**](https://huggingface.co/datasets/MMMU/MMMU/) | [**🤗 Paper**](https://huggingface.co/papers/2311.16502) | [**📖 arXiv**](https://arxiv.org/abs/2311.16502) | [**GitHub**](https://github.com/MMMU-Benchmark/MMMU)
## 🔔News
- **🛠️[2024-05-30]: Fixed duplicate option issues in Materials dataset items (validation_Materials_25; test_Materials_17, 242) and content error in validation_Materials_25.**
- **🛠️[2024-04-30]: Fixed missing "-" or "^" signs in Math dataset items (dev_Math_2, validation_Math_11, 12, 16; test_Math_8, 23, 43, 113, 164, 223, 236, 287, 329, 402, 498) and corrected option errors in validation_Math_2. If you encounter any issues with the dataset, please contact us promptly!**
- **🚀[2024-01-31]: We added Human Expert performance on the [Leaderboard](https://mmmu-benchmark.github.io/#leaderboard)!🌟**
- **🔥[2023-12-04]: Our evaluation server for the test set is now available on [EvalAI](https://eval.ai/web/challenges/challenge-page/2179/overview). We welcome all submissions and look forward to your participation! 😆**
## Dataset Details
### Dataset Description
We introduce MMMU: a new benchmark designed to evaluate multimodal models on massive multi-discipline tasks demanding college-level subject knowledge and deliberate reasoning. MMMU includes **11.5K meticulously collected multimodal questions** from college exams, quizzes, and textbooks, covering six core disciplines: Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering. These questions span **30 subjects** and **183 subfields**, comprising **30 highly heterogeneous image types**, such as charts, diagrams, maps, tables, music sheets, and chemical structures. We believe MMMU will stimulate the community to build next-generation multimodal foundation models towards expert artificial general intelligence (AGI).
🎯 **We have released a full set comprising 150 development samples and 900 validation samples. We have released 10,500 test questions without their answers.**
The development set is used for few-shot/in-context learning, and the validation set is used for debugging models, selecting hyperparameters, or quick evaluations. The answers and explanations for the test set questions are withheld. You can submit your model's predictions for the **test set** on **[EvalAI](https://eval.ai/web/challenges/challenge-page/2179/overview)**.
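A minimal loading sketch (each subject is a separate config, as listed in the metadata above; `"Math"` is used here only as an example):
```python
from datasets import load_dataset

# Load one subject config; each has dev (5), validation (30), and a larger test split.
mmmu_math = load_dataset("MMMU/MMMU", "Math")

dev = mmmu_math["dev"]                # few-shot / in-context examples
validation = mmmu_math["validation"]  # for debugging, hyperparameter selection, quick evals
test = mmmu_math["test"]              # answers withheld; submit predictions to EvalAI

example = validation[0]
print(example["question"], example["options"], example["answer"])
```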

### Dataset Creation
MMMU was created to challenge multimodal models with tasks that demand college-level subject knowledge and deliberate reasoning, pushing the boundaries of what these models can achieve in terms of expert-level perception and reasoning.
The data for the MMMU dataset was manually collected by a team of college students from various disciplines, using online sources, textbooks, and lecture materials.
- **Content:** The dataset contains 11.5K college-level problems across six broad disciplines (Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, Tech & Engineering) and 30 college subjects.
- **Image Types:** The dataset includes 30 highly heterogeneous image types, such as charts, diagrams, maps, tables, music sheets, and chemical structures, interleaved with text.

## 🏆 Mini-Leaderboard
We show a mini-leaderboard here and please find more information in our paper or [**homepage**](https://mmmu-benchmark.github.io/).
| Model | Val (900) | Test (10.5K) |
|--------------------------------|:---------:|:------------:|
| Expert (Best) | 88.6 | - |
| Expert (Medium) | 82.6 | - |
| Expert (Worst) | 76.2 | - |
| GPT-4o* | **69.1** | - |
| Gemini 1.5 Pro* | 62.2 | - |
| InternVL2-Pro* | 62.0 | **55.7** |
| Gemini 1.0 Ultra* | 59.4 | - |
| Claude 3 Opus* | 59.4 | - |
| GPT-4V(ision) (Playground) | 56.8 | **55.7** |
| Reka Core* | 56.3 | - |
| Gemini 1.5 Flash* | 56.1 | - |
| SenseChat-Vision-0423-Preview* | 54.6 | 50.3 |
| Reka Flash* | 53.3 | - |
| Claude 3 Sonnet* | 53.1 | - |
| HPT Pro* | 52.0 | - |
| VILA1.5* | 51.9 | 46.9 |
| Qwen-VL-MAX* | 51.4 | 46.8 |
| InternVL-Chat-V1.2* | 51.6 | 46.2 |
| Skywork-VL* | 51.4 | 46.2 |
| LLaVA-1.6-34B* | 51.1 | 44.7 |
| Claude 3 Haiku* | 50.2 | - |
| Adept Fuyu-Heavy* | 48.3 | - |
| Gemini 1.0 Pro* | 47.9 | - |
| Marco-VL-Plus* | 46.2 | 44.3 |
| Yi-VL-34B* | 45.9 | 41.6 |
| Qwen-VL-PLUS* | 45.2 | 40.8 |
| HPT Air* | 44.0 | - |
| Reka Edge* | 42.8 | - |
| Marco-VL* | 41.2 | 40.4 |
| OmniLMM-12B* | 41.1 | 40.4 |
| Bunny-8B* | 43.3 | 39.0 |
| Bunny-4B* | 41.4 | 38.4 |
| Weitu-VL-1.0-15B* | - | 38.4 |
| InternLM-XComposer2-VL* | 43.0 | 38.2 |
| Yi-VL-6B* | 39.1 | 37.8 |
| InfiMM-Zephyr-7B* | 39.4 | 35.5 |
| InternVL-Chat-V1.1* | 39.1 | 35.3 |
| Math-LLaVA-13B* | 38.3 | 34.6 |
| SVIT* | 38.0 | 34.1 |
| MiniCPM-V* | 37.2 | 34.1 |
| MiniCPM-V-2* | 37.1 | - |
| Emu2-Chat* | 36.3 | 34.1 |
| BLIP-2 FLAN-T5-XXL | 35.4 | 34.0 |
| InstructBLIP-T5-XXL | 35.7 | 33.8 |
| LLaVA-1.5-13B | 36.4 | 33.6 |
| Bunny-3B* | 38.2 | 33.0 |
| Qwen-VL-7B-Chat | 35.9 | 32.9 |
| SPHINX* | 32.9 | 32.9 |
| mPLUG-OWL2* | 32.7 | 32.1 |
| BLIP-2 FLAN-T5-XL | 34.4 | 31.0 |
| InstructBLIP-T5-XL | 32.9 | 30.6 |
| Gemini Nano2* | 32.6 | - |
| CogVLM | 32.1 | 30.1 |
| Otter | 32.2 | 29.1 |
| LLaMA-Adapter2-7B | 29.8 | 27.7 |
| MiniGPT4-Vicuna-13B | 26.8 | 27.6 |
| Adept Fuyu-8B | 27.9 | 27.4 |
| Kosmos2 | 24.4 | 26.6 |
| OpenFlamingo2-9B | 28.7 | 26.3 |
| Frequent Choice | 22.1 | 23.9 |
| Random Choice | 26.8 | 25.8 |
*: results provided by the authors.
## Limitations
Despite its comprehensive nature, MMMU, like any benchmark, is not without limitations. The manual curation process, albeit thorough, may carry biases.
Moreover, the focus on college-level subjects may not be a fully sufficient test for Expert AGI.
However, we believe that strong performance on MMMU should be necessary for an Expert AGI, as it demonstrates broad and deep subject knowledge together with expert-level understanding and reasoning capabilities.
In future work, we plan to incorporate human evaluations into MMMU. This will provide a more grounded comparison between model capabilities and expert performance, shedding light on the proximity of current AI systems to achieving Expert AGI.
## Disclaimers
The guidelines for the annotators emphasized strict compliance with copyright and licensing rules from the initial data source, specifically avoiding materials from websites that forbid copying and redistribution.
Should you encounter any data samples potentially breaching the copyright or licensing regulations of any site, we encourage you to notify us. Upon verification, such samples will be promptly removed.
## Contact
- Xiang Yue: [email protected]
- Yu Su: [email protected]
- Wenhu Chen: [email protected]
## Citation
**BibTeX:**
```bibtex
@inproceedings{yue2023mmmu,
title={MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI},
author={Xiang Yue and Yuansheng Ni and Kai Zhang and Tianyu Zheng and Ruoqi Liu and Ge Zhang and Samuel Stevens and Dongfu Jiang and Weiming Ren and Yuxuan Sun and Cong Wei and Botao Yu and Ruibin Yuan and Renliang Sun and Ming Yin and Boyuan Zheng and Zhenzhu Yang and Yibo Liu and Wenhao Huang and Huan Sun and Yu Su and Wenhu Chen},
booktitle={Proceedings of CVPR},
year={2024},
}
``` |
IGNF/FLAIR_1_osm_clip | IGNF | 2024-09-18T09:05:07Z | 16,806 | 5 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"IGN"
] | [] | 2023-04-27T12:28:28Z | null | ---
size_categories:
- 10K<n<100K
dataset_info:
features:
- name: image
dtype: image
- name: segmentation
dtype: image
- name: metadata
struct:
- name: address
struct:
- name: ISO3166-2-lvl4
dtype: string
- name: ISO3166-2-lvl6
dtype: string
- name: ISO3166-2-lvl7
dtype: string
- name: aerialway
dtype: string
- name: aeroway
dtype: string
- name: allotments
dtype: string
- name: amenity
dtype: string
- name: bridge
dtype: string
- name: building
dtype: string
- name: city
dtype: string
- name: city_district
dtype: string
- name: club
dtype: string
- name: commercial
dtype: string
- name: country
dtype: string
- name: country_code
dtype: string
- name: county
dtype: string
- name: craft
dtype: string
- name: emergency
dtype: string
- name: farm
dtype: string
- name: farmyard
dtype: string
- name: hamlet
dtype: string
- name: healthcare
dtype: string
- name: highway
dtype: string
- name: historic
dtype: string
- name: house_number
dtype: string
- name: industrial
dtype: string
- name: isolated_dwelling
dtype: string
- name: junction
dtype: string
- name: landuse
dtype: string
- name: leisure
dtype: string
- name: locality
dtype: string
- name: man_made
dtype: string
- name: military
dtype: string
- name: municipality
dtype: string
- name: natural
dtype: string
- name: neighbourhood
dtype: string
- name: office
dtype: string
- name: place
dtype: string
- name: postcode
dtype: string
- name: province
dtype: string
- name: quarter
dtype: string
- name: railway
dtype: string
- name: region
dtype: string
- name: residential
dtype: string
- name: retail
dtype: string
- name: road
dtype: string
- name: shop
dtype: string
- name: square
dtype: string
- name: state
dtype: string
- name: state_district
dtype: string
- name: suburb
dtype: string
- name: tourism
dtype: string
- name: town
dtype: string
- name: village
dtype: string
- name: boundingbox
sequence: string
- name: camera
dtype: string
- name: code
dtype: string
- name: date
dtype: string
- name: dimensions
sequence: int64
- name: display_name
dtype: string
- name: domain
dtype: string
- name: lat
dtype: string
- name: latlong
sequence: float64
- name: licence
dtype: string
- name: lon
dtype: string
- name: origin
sequence: float64
- name: osm_id
dtype: int64
- name: osm_type
dtype: string
- name: patch_centroid_x
dtype: float64
- name: patch_centroid_y
dtype: float64
- name: patch_centroid_z
dtype: float64
- name: place_id
dtype: int64
- name: time
dtype: string
- name: unit_system
dtype: string
- name: zone
dtype: string
- name: class_rep
struct:
- name: class_repartition
struct:
- name: agricultural land
dtype: float64
- name: bare soil
dtype: float64
- name: brushwood
dtype: float64
- name: building
dtype: float64
- name: clear cut
dtype: float64
- name: coniferous
dtype: float64
- name: deciduous
dtype: float64
- name: greenhouse
dtype: float64
- name: herbaceous vegetation
dtype: float64
- name: impervious surface
dtype: float64
- name: ligneous
dtype: float64
- name: mixed
dtype: float64
- name: other
dtype: float64
- name: pervious surface
dtype: float64
- name: plowed land
dtype: float64
- name: snow
dtype: float64
- name: swimming pool
dtype: float64
- name: vineyard
dtype: float64
- name: water
dtype: float64
- name: prompt
dtype: string
- name: clip_label
dtype: string
- name: clip_embeddings
sequence: float32
splits:
- name: train
num_bytes: 26969627407.0
num_examples: 61712
download_size: 26975935912
dataset_size: 26969627407.0
tags:
- IGN
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "FLAIR_OSM_CLIP"
Dataset for the Seg2Sat model: https://github.com/RubenGres/Seg2Sat
Derived from [FLAIR#1](https://huggingface.co/datasets/IGNF/FLAIR) train split.
This dataset includes the following features (a loading sketch follows the list):
- **image**: RGB bands of the FLAIR#1 .tif files converted into a more manageable JPG format
- **segmentation**: FLAIR#1 segmentation converted to JPG using the [LUT from the documentation](https://ignf.github.io/FLAIR/index_fr.html)
- **metadata**: OSM metadata for the centroid of the image
- **clip_label**: [CLIP ViT-H](https://github.com/openai/CLIP) description
- **class_rep**: ratio of appearance of each class in the segmentation
- **prompt**: Prompt used for the Seg2Sat model
- **clip_embeddings**: Embeddings computed with the openai/clip-vit-base-patch32 model
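A minimal loading sketch (streaming is used here only to avoid downloading the full ~27 GB snapshot at once; the field names follow the feature list above):
```python
from datasets import load_dataset

# Stream the single train split and inspect one record.
ds = load_dataset("IGNF/FLAIR_1_osm_clip", split="train", streaming=True)
sample = next(iter(ds))

image = sample["image"]                # aerial RGB patch (PIL image)
segmentation = sample["segmentation"]  # colorized segmentation mask (PIL image)
print(sample["prompt"])                # prompt used for the Seg2Sat model
print(sample["clip_label"])            # CLIP-generated description
print(sample["class_rep"]["class_repartition"]["building"])  # class ratio in the mask
```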
<!---
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
---> |
Prithvi-raj-singh/pricer-data | Prithvi-raj-singh | 2024-09-13T12:36:06Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-02T10:26:28Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: price
dtype: float64
splits:
- name: train
num_bytes: 313746593
num_examples: 400000
- name: test
num_bytes: 1558981
num_examples: 2000
download_size: 185915273
dataset_size: 315305574
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
sail/regmix-data | sail | 2024-09-12T08:12:18Z | 24,231 | 4 | [
"language:en",
"license:mit",
"size_categories:10M<n<100M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"arxiv:2407.01492",
"region:us",
"regmix"
] | [] | 2024-07-02T13:23:56Z | null | ---
license: mit
language:
- en
tags:
- regmix
pretty_name: regmix-data
size_categories:
- 10M<n<100M
---
# RegMix Data
## Dataset Description
The RegMix Data is a curated dataset derived from the Pile-Uncopyrighted, specifically designed for the RegMix paper (https://huggingface.co/papers/2407.01492). This dataset aims to facilitate the automatic identification of high-performing data mixtures for language model pre-training by formulating it as a regression task.
### Key Features:
- Size: Approximately 1TB disk space, 250B tokens
- Distribution: Follows the natural token distribution of domain examples
- Organization: Examples from different domains are separated into individual files
## Dataset Structure
The dataset is organized into two main directories: `train` and `valid`, each containing domain-specific JSONL files. The file naming convention is as follows:
```
[domain]-[identifier]-[number].jsonl
```
For example: `arxiv-10-74305611.jsonl`
### Domains Included:
arxiv, gutenberg_pg_19, pubmed_central, dm_mathematics, hackernews, stackexchange, enron_emails, nih_exporter, ubuntu_irc, europarl, philpapers, uspto_backgrounds, freelaw, pile_cc, wikipedia_en, github, pubmed_abstracts
## Usage
We recommend downloading the entire dataset snapshot instead of using the traditional `load_dataset` function, as the RegMix code is integrated with the [TinyLlama framework](https://github.com/jzhang38/TinyLlama).
To download the dataset:
```python
from huggingface_hub import snapshot_download
LOCAL_DIR = "regmix-data"
snapshot_download(repo_id="sail/regmix-data",
repo_type='dataset',
local_dir=LOCAL_DIR,
local_dir_use_symlinks=False)
```
This will download the entire snapshot, containing 34 JSON line files (17 for train, and 17 for valid), to your specified local directory.
## Data Preprocessing
Our [code](https://github.com/sail-sg/regmix) preprocesses these domain files into binary format with domain prefixes and allows random sampling of the dataset using user-defined data mixtures (i.e., domain weights).
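For illustration only, a simplified sketch of what mixture-weighted sampling over the raw JSONL files could look like (the domain weights and example sizes here are hypothetical; the actual preprocessing is done by the RegMix/TinyLlama code linked above):
```python
import glob
import json
import random

# Hypothetical user-defined data mixture (domain weights summing to 1).
domain_weights = {"arxiv": 0.2, "github": 0.3, "pile_cc": 0.5}

def sample_mixture(data_dir, n_examples, seed=0):
    rng = random.Random(seed)
    sampled = []
    for domain, weight in domain_weights.items():
        # Files follow the naming convention [domain]-[identifier]-[number].jsonl.
        lines = []
        for path in glob.glob(f"{data_dir}/train/{domain}-*.jsonl"):
            with open(path) as f:
                lines.extend(f)
        n_domain = min(int(n_examples * weight), len(lines))
        sampled.extend(json.loads(line) for line in rng.sample(lines, n_domain))
    rng.shuffle(sampled)
    return sampled

# Example usage against the snapshot downloaded above.
# examples = sample_mixture("regmix-data", n_examples=10_000)
```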
## Acknowledgements
We extend our gratitude to the creators of the [Pile-Uncopyrighted dataset](https://huggingface.co/datasets/monology/pile-uncopyrighted) for their efforts in removing copyrighted content from the original Pile dataset, making this work possible.
## Citation
If you use this dataset in your research, please cite the RegMix paper:
```
@article{liu2024regmix,
title={RegMix: Data Mixture as Regression for Language Model Pre-training},
author={Liu, Qian and Zheng, Xiaosen and Muennighoff, Niklas and Zeng, Guangtao and Dou, Longxu and Pang, Tianyu and Jiang, Jing and Lin, Min},
journal={arXiv preprint arXiv:2407.01492},
year={2024}
}
```
For more information about the RegMix methodology and its applications, please refer to the [original paper](https://huggingface.co/papers/2407.01492). |
LIUM/tedlium | LIUM | 2024-09-10T18:27:41Z | 1,048 | 25 | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"size_categories:10K<n<100K",
"region:us"
] | [
"automatic-speech-recognition"
] | 2022-05-11T12:47:06Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license: []
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids: []
pretty_name: TED-LIUM
---
# Dataset Card for tedlium
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [TED-LIUM homepage](https://www.openslr.org/7/)
- **Repository:** [Needs More Information]
- **Paper:** [TED-LIUM: an Automatic Speech Recognition dedicated corpus](https://aclanthology.org/L12-1405/)
- **Leaderboard:** [Paperswithcode Leaderboard](https://paperswithcode.com/sota/speech-recognition-on-tedlium)
- **Point of Contact:** [Sanchit Gandhi](mailto:[email protected])
### Dataset Summary
The TED-LIUM corpus is English-language TED talks, with transcriptions, sampled at 16kHz. The three releases of the corpus range from 118 to 452 hours of transcribed speech data.
### Example
```python
from datasets import load_dataset
tedlium = load_dataset("LIUM/tedlium", "release1") # for Release 1
# see structure
print(tedlium)
# load audio sample on the fly
audio_input = tedlium["train"][0]["audio"] # first decoded audio sample
transcription = tedlium["train"][0]["text"] # first transcription
```
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard which can be found at https://paperswithcode.com/sota/speech-recognition-on-tedlium that ranks models based on their WER.
### Languages
The audio and transcriptions are in English, as per the TED talks at http://www.ted.com.
## Dataset Structure
### Data Instances
```
{'audio': {'path': '/home/sanchitgandhi/cache/downloads/extracted/6e3655f9e735ae3c467deed1df788e0dabd671c1f3e2e386e30aa3b571bd9761/TEDLIUM_release1/train/sph/PaulaScher_2008P.sph',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346,
0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'text': '{COUGH} but <sil> i was so {COUGH} utterly unqualified for(2) this project and {NOISE} so utterly ridiculous {SMACK} and ignored the brief {SMACK} <sil>',
'speaker_id': 'PaulaScher_2008P',
'gender': 'female',
'file': '/home/sanchitgandhi/cache/downloads/extracted/6e3655f9e735ae3c467deed1df788e0dabd671c1f3e2e386e30aa3b571bd9761/TEDLIUM_release1/train/sph/PaulaScher_2008P.sph',
'id': 'PaulaScher_2008P-1003.35-1011.16-<o,f0,female>'}
```
### Data Fields
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]` (see the access sketch after this list).
- file: A path to the downloaded audio file in .sph format.
- text: the transcription of the audio file.
- gender: the gender of the speaker. One of: male, female or N/A.
- id: unique id of the data sample.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
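A short access sketch (using the Release 1 config from the example above; the resampling step is optional and only illustrates `cast_column`):
```python
from datasets import load_dataset, Audio

tedlium = load_dataset("LIUM/tedlium", "release1")

# Query the sample index first, then the "audio" column, so only this file is decoded.
sample = tedlium["train"][0]
waveform = sample["audio"]["array"]
sampling_rate = sample["audio"]["sampling_rate"]  # 16 kHz by default

# Optionally resample on the fly by casting the audio column to a different rate.
tedlium = tedlium.cast_column("audio", Audio(sampling_rate=8000))
```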
### Data Splits
There are three releases for the TED-LIUM corpus, progressively increasing the number of transcribed speech training data from 118 hours (Release 1), to 207 hours (Release 2), to 452 hours (Release 3).
Release 1:
- 774 audio talks and automatically aligned transcriptions.
- Contains 118 hours of speech audio data.
- Homepage: https://www.openslr.org/7/
Release 2:
- 1495 audio talks and automatically aligned transcriptions.
- Contains 207 hours of speech audio data.
- Dictionary with pronunciations (159848 entries).
- Selected monolingual data for language modeling from WMT12 publicly available corpora.
- Homepage: https://www.openslr.org/19/
Release 3:
- 2351 audio talks and automatically aligned transcriptions.
- Contains 452 hours of speech audio data.
- TED-LIUM 2 validation and test data: 19 TED talks with their corresponding manual transcriptions.
- Dictionary with pronunciations (159848 entries), the same file as the one included in TED-LIUM 2.
- Selected monolingual data for language modeling from WMT12 publicly available corpora: these files come from the TED-LIUM 2 release, but have been modified to produce a tokenization more relevant for English language.
- Homepage: https://www.openslr.org/51/
Release 3 contains two different corpus distributions:
- The ‘legacy’ one, on which the dev and test datasets are the same as in TED-LIUM 2 (and TED-LIUM 1).
- The ‘speaker adaptation’ one, specially designed for experiments on speaker adaptation.
Each release is split into a training, validation and test set:
| Split | Release 1 | Release 2 | Release 3 |
|------------|-----------|-----------|-----------|
| Train | 56,803 | 92,973 | 268,263 |
| Validation | 591 | 591 | 591 |
| Test | 1,469 | 1,469 | 1,469 |
## Dataset Creation
### Curation Rationale
TED-LIUM was built during [The International Workshop on Spoken Language Translation (IWSLT) 2011 Evaluation Campaign](https://aclanthology.org/2011.iwslt-evaluation.1/), an annual workshop focused on the automatic translation of public talks, which included tracks for speech recognition, speech translation, text translation, and system combination.
### Source Data
#### Initial Data Collection and Normalization
The data was obtained from publicly available TED talks at http://www.ted.com. Proper alignments between the speech and the transcribed text were generated using an in-house speaker segmentation and clustering tool (_LIUM_SpkDiarization_). Speech disfluencies (e.g. repetitions, hesitations, false starts) were treated in the following way: repetitions were transcribed, hesitations mapped to a specific filler word, and false starts not taken into account. For full details on the data collection and processing, refer to the [TED-LIUM paper](https://aclanthology.org/L12-1405/).
#### Who are the source language producers?
TED Talks are influential videos from expert speakers on education, business, science, tech and creativity.
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Licensed under Creative Commons BY-NC-ND 3.0 (http://creativecommons.org/licenses/by-nc-nd/3.0/deed.en).
### Citation Information
Release 1:
```
@inproceedings{rousseau2012tedlium,
title={TED-LIUM: an Automatic Speech Recognition dedicated corpus},
author={Rousseau, Anthony and Del{\'e}glise, Paul and Est{\`e}ve, Yannick},
booktitle={Conference on Language Resources and Evaluation (LREC)},
pages={125--129},
year={2012}
}
```
Release 2:
```
@inproceedings{rousseau2014enhancing,
title={Enhancing the TED-LIUM corpus with selected data for language modeling and more TED talks.},
author={Rousseau, Anthony and Del{\'e}glise, Paul and Esteve, Yannick and others},
booktitle={LREC},
pages={3935--3939},
year={2014}
}
```
Release 3:
```
@inproceedings{hernandez2018ted,
author="Hernandez, Fran{\c{c}}ois
and Nguyen, Vincent
and Ghannay, Sahar
and Tomashenko, Natalia
and Est{\`e}ve, Yannick",
title="TED-LIUM 3: Twice as Much Data and Corpus Repartition for Experiments on Speaker Adaptation",
booktitle="Speech and Computer",
year="2018",
publisher="Springer International Publishing",
pages="198--208",
}
``` |
google/xtreme_s | google | 2024-09-10T13:12:26Z | 2,557 | 62 | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"source_datasets:extended|multilingual_librispeech",
"source_datasets:extended|covost2",
"language:afr",
"language:amh",
"language:ara",
"language:asm",
"language:ast",
"language:azj",
"language:bel",
"language:ben",
"language:bos",
"language:cat",
"language:ceb",
"language:cmn",
"language:ces",
"language:cym",
"language:dan",
"language:deu",
"language:ell",
"language:eng",
"language:spa",
"language:est",
"language:fas",
"language:ful",
"language:fin",
"language:tgl",
"language:fra",
"language:gle",
"language:glg",
"language:guj",
"language:hau",
"language:heb",
"language:hin",
"language:hrv",
"language:hun",
"language:hye",
"language:ind",
"language:ibo",
"language:isl",
"language:ita",
"language:jpn",
"language:jav",
"language:kat",
"language:kam",
"language:kea",
"language:kaz",
"language:khm",
"language:kan",
"language:kor",
"language:ckb",
"language:kir",
"language:ltz",
"language:lug",
"language:lin",
"language:lao",
"language:lit",
"language:luo",
"language:lav",
"language:mri",
"language:mkd",
"language:mal",
"language:mon",
"language:mar",
"language:msa",
"language:mlt",
"language:mya",
"language:nob",
"language:npi",
"language:nld",
"language:nso",
"language:nya",
"language:oci",
"language:orm",
"language:ory",
"language:pan",
"language:pol",
"language:pus",
"language:por",
"language:ron",
"language:rus",
"language:bul",
"language:snd",
"language:slk",
"language:slv",
"language:sna",
"language:som",
"language:srp",
"language:swe",
"language:swh",
"language:tam",
"language:tel",
"language:tgk",
"language:tha",
"language:tur",
"language:ukr",
"language:umb",
"language:urd",
"language:uzb",
"language:vie",
"language:wol",
"language:xho",
"language:yor",
"language:yue",
"language:zul",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"arxiv:2203.10752",
"arxiv:2205.12446",
"arxiv:2007.10310",
"region:us",
"speech-recognition"
] | [
"automatic-speech-recognition"
] | 2022-03-04T14:10:40Z | 1 | ---
annotations_creators:
- expert-generated
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- expert-generated
language:
- afr
- amh
- ara
- asm
- ast
- azj
- bel
- ben
- bos
- cat
- ceb
- cmn
- ces
- cym
- dan
- deu
- ell
- eng
- spa
- est
- fas
- ful
- fin
- tgl
- fra
- gle
- glg
- guj
- hau
- heb
- hin
- hrv
- hun
- hye
- ind
- ibo
- isl
- ita
- jpn
- jav
- kat
- kam
- kea
- kaz
- khm
- kan
- kor
- ckb
- kir
- ltz
- lug
- lin
- lao
- lit
- luo
- lav
- mri
- mkd
- mal
- mon
- mar
- msa
- mlt
- mya
- nob
- npi
- nld
- nso
- nya
- oci
- orm
- ory
- pan
- pol
- pus
- por
- ron
- rus
- bul
- snd
- slk
- slv
- sna
- som
- srp
- swe
- swh
- tam
- tel
- tgk
- tha
- tur
- ukr
- umb
- urd
- uzb
- vie
- wol
- xho
- yor
- yue
- zul
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|multilingual_librispeech
- extended|covost2
task_categories:
- automatic-speech-recognition
task_ids: []
paperswithcode_id: librispeech-1
pretty_name: 'The Cross-lingual TRansfer Evaluation of Multilingual Encoders for Speech
(XTREME-S) benchmark is a benchmark designed to evaluate speech representations
across languages, tasks, domains and data regimes. It covers 102 languages from
10+ language families, 3 different domains and 4 task families: speech recognition,
translation, classification and retrieval.'
tags:
- speech-recognition
---
# XTREME-S
## Dataset Description
- **Fine-Tuning script:** [research-projects/xtreme-s](https://github.com/huggingface/transformers/tree/master/examples/research_projects/xtreme-s)
- **Paper:** [XTREME-S: Evaluating Cross-lingual Speech Representations](https://arxiv.org/abs/2203.10752)
- **Leaderboard:** [TODO(PVP)]()
- **FLEURS amount of disk used:** 350 GB
- **Multilingual Librispeech amount of disk used:** 2700 GB
- **Voxpopuli amount of disk used:** 400 GB
- **Covost2 amount of disk used:** 70 GB
- **Minds14 amount of disk used:** 5 GB
- **Total amount of disk used:** ca. 3500 GB
The Cross-lingual TRansfer Evaluation of Multilingual Encoders for Speech (XTREME-S) benchmark is a benchmark designed to evaluate speech representations across languages, tasks, domains and data regimes. It covers 102 languages from 10+ language families, 3 different domains and 4 task families: speech recognition, translation, classification and retrieval.
***TLDR; XTREME-S is the first speech benchmark that is both diverse, fully accessible, and reproducible. All datasets can be downloaded with a single line of code.
An easy-to-use and flexible fine-tuning script is provided and actively maintained.***
XTREME-S covers speech recognition with Fleurs, Multilingual LibriSpeech (MLS) and VoxPopuli, speech translation with CoVoST-2, speech classification with LangID (Fleurs) and intent classification (MInds-14) and finally speech(-text) retrieval with Fleurs. Each of the tasks covers a subset of the 102 languages included in XTREME-S, from various regions:
- **Western Europe**: *Asturian, Bosnian, Catalan, Croatian, Danish, Dutch, English, Finnish, French, Galician, German, Greek, Hungarian, Icelandic, Irish, Italian, Kabuverdianu, Luxembourgish, Maltese, Norwegian, Occitan, Portuguese, Spanish, Swedish, Welsh*
- **Eastern Europe**: *Armenian, Belarusian, Bulgarian, Czech, Estonian, Georgian, Latvian, Lithuanian, Macedonian, Polish, Romanian, Russian, Serbian, Slovak, Slovenian, Ukrainian*
- **Central-Asia/Middle-East/North-Africa**: *Arabic, Azerbaijani, Hebrew, Kazakh, Kyrgyz, Mongolian, Pashto, Persian, Sorani-Kurdish, Tajik, Turkish, Uzbek*
- **Sub-Saharan Africa**: *Afrikaans, Amharic, Fula, Ganda, Hausa, Igbo, Kamba, Lingala, Luo, Northern-Sotho, Nyanja, Oromo, Shona, Somali, Swahili, Umbundu, Wolof, Xhosa, Yoruba, Zulu*
- **South-Asia**: *Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Nepali, Oriya, Punjabi, Sindhi, Tamil, Telugu, Urdu*
- **South-East Asia**: *Burmese, Cebuano, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Maori, Thai, Vietnamese*
- **CJK languages**: *Cantonese and Mandarin Chinese, Japanese, Korean*
## Design principles
### Diversity
XTREME-S aims for task, domain and language
diversity. Tasks should be diverse and cover several domains to
provide a reliable evaluation of model generalization and
robustness to noisy naturally-occurring speech in different
environments. Languages should be diverse to ensure that
models can adapt to a wide range of linguistic and phonological
phenomena.
### Accessibility
The sub-dataset for each task can be downloaded
with a **single line of code** as shown in [Supported Tasks](#supported-tasks).
Each task is available under a permissive license that allows the use and redistribution
of the data for research purposes. Tasks have been selected based on their usage by
pre-existing multilingual pre-trained models, for simplicity.
### Reproducibility
We produce fully **open-sourced, maintained and easy-to-use** fine-tuning scripts
for each task as shown under [Fine-tuning Example](#fine-tuning-and-evaluation-example).
XTREME-S encourages submissions that leverage publicly available speech and text datasets. Users should detail which data they use.
In general, we encourage settings that can be reproduced by the community, but also encourage the exploration of new frontiers for speech representation learning.
## Fine-tuning and Evaluation Example
We provide a fine-tuning script under [**research-projects/xtreme-s**](https://github.com/huggingface/transformers/tree/master/examples/research_projects/xtreme-s).
The fine-tuning script is written in PyTorch and allows one to fine-tune and evaluate any [Hugging Face model](https://huggingface.co/models) on XTREME-S.
The example script is actively maintained by [@anton-l](https://github.com/anton-l) and [@patrickvonplaten](https://github.com/patrickvonplaten). Feel free
to reach out via issues or pull requests on GitHub if you have any questions.
## Leaderboards
The leaderboard for the XTREME-S benchmark can be found at [this address (TODO(PVP))]().
## Supported Tasks
Note that the supported tasks focus particularly on the linguistic aspects of speech,
while nonlinguistic/paralinguistic aspects of speech relevant to e.g. speech synthesis or voice conversion are **not** evaluated.
<p align="center">
<img src="https://github.com/patrickvonplaten/scientific_images/raw/master/xtreme_s.png" alt="Datasets used in XTREME"/>
</p>
### 1. Speech Recognition (ASR)
We include three speech recognition datasets: FLEURS-ASR, MLS and VoxPopuli (optionally BABEL). Multilingual fine-tuning is used for these three datasets.
#### FLEURS-ASR
*FLEURS-ASR* is the speech version of the FLORES machine translation benchmark, covering 2000 n-way parallel sentences in n=102 languages.
```py
from datasets import load_dataset
fleurs_asr = load_dataset("google/xtreme_s", "fleurs.af_za") # for Afrikaans
# to download all data for multi-lingual fine-tuning uncomment following line
# fleurs_asr = load_dataset("google/xtreme_s", "fleurs.all")
# see structure
print(fleurs_asr)
# load audio sample on the fly
audio_input = fleurs_asr["train"][0]["audio"] # first decoded audio sample
transcription = fleurs_asr["train"][0]["transcription"] # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR
# for analyses see language groups
all_language_groups = fleurs_asr["train"].features["lang_group_id"].names
lang_group_id = fleurs_asr["train"][0]["lang_group_id"]
all_language_groups[lang_group_id]
```
#### Multilingual LibriSpeech (MLS)
*MLS* is a large multilingual corpus derived from read audiobooks from LibriVox and consists of 8 languages. For this challenge the training data is limited to 10-hours splits.
```py
from datasets import load_dataset
mls = load_dataset("google/xtreme_s", "mls.pl") # for Polish
# to download all data for multi-lingual fine-tuning uncomment following line
# mls = load_dataset("google/xtreme_s", "mls.all")
# see structure
print(mls)
# load audio sample on the fly
audio_input = mls["train"][0]["audio"] # first decoded audio sample
transcription = mls["train"][0]["transcription"] # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR
```
#### VoxPopuli
*VoxPopuli* is a large-scale multilingual speech corpus for representation learning and semi-supervised learning, from which we use the speech recognition dataset. The raw data is collected from 2009-2020 European Parliament event recordings. We acknowledge the European Parliament for creating and sharing these materials.
**Note: loading VoxPopuli downloads the whole ~100 GB dataset, since the languages are entangled with one another; it may not be worth testing here due to its size.**
```py
from datasets import load_dataset
voxpopuli = load_dataset("google/xtreme_s", "voxpopuli.ro") # for Romanian
# to download all data for multi-lingual fine-tuning uncomment following line
# voxpopuli = load_dataset("google/xtreme_s", "voxpopuli.all")
# see structure
print(voxpopuli)
# load audio sample on the fly
audio_input = voxpopuli["train"][0]["audio"] # first decoded audio sample
transcription = voxpopuli["train"][0]["transcription"] # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR
```
#### (Optionally) BABEL
*BABEL* from IARPA is a conversational speech recognition dataset in low-resource languages. First, download LDC2016S06, LDC2016S12, LDC2017S08, LDC2017S05 and LDC2016S13. BABEL is the only dataset in our benchmark that is less easily accessible, so you will need to sign in to get access to it on LDC. Although not officially part of the XTREME-S ASR datasets, BABEL is often used for evaluating speech representations on a difficult domain (phone conversations).
```py
from datasets import load_dataset
babel = load_dataset("google/xtreme_s", "babel.as")
```
**The above command is expected to fail with a nice error message,
explaining how to download BABEL**
The following should work:
```py
from datasets import load_dataset
babel = load_dataset("google/xtreme_s", "babel.as", data_dir="/path/to/IARPA_BABEL_OP1_102_LDC2016S06.zip")
# see structure
print(babel)
# load audio sample on the fly
audio_input = babel["train"][0]["audio"] # first decoded audio sample
transcription = babel["train"][0]["transcription"] # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR
```
### 2. Speech Translation (ST)
We include the CoVoST-2 dataset for automatic speech translation.
#### CoVoST-2
The *CoVoST-2* benchmark has become a commonly used dataset for evaluating automatic speech translation. It covers language pairs from English into 15 languages, as well as 21 languages into English. We use only the "X->En" direction to evaluate cross-lingual representations. The amount of supervision varies greatly in this setting, from one hour for Japanese->English to 180 hours for French->English. This makes pretraining particularly useful to enable such few-shot learning. We enforce multilingual fine-tuning for simplicity. Results are split into high/med/low-resource language pairs as explained in the [paper (TODO(PVP))].
```py
from datasets import load_dataset
covost_2 = load_dataset("google/xtreme_s", "covost2.id.en") # for Indonesian to English
# to download all data for multi-lingual fine-tuning uncomment following line
# covost_2 = load_dataset("google/xtreme_s", "covost2.all")
# see structure
print(covost_2)
# load audio sample on the fly
audio_input = covost_2["train"][0]["audio"] # first decoded audio sample
transcription = covost_2["train"][0]["transcription"] # first transcription
translation = covost_2["train"][0]["translation"] # first translation
# use audio_input and translation to fine-tune your model for AST
```
### 3. Speech Classification
We include two multilingual speech classification datasets: FLEURS-LangID and Minds-14.
#### Language Identification - FLEURS-LangID
LangID can often be a domain classification, but in the case of FLEURS-LangID, recordings are done in a similar setting across languages and the utterances correspond to n-way parallel sentences, in the exact same domain, making this task particularly relevant for evaluating LangID. The setting is simple: FLEURS-LangID is split into train/valid/test for each language. We simply create a single train/valid/test split for LangID by merging all languages.
```py
from datasets import load_dataset
fleurs_langID = load_dataset("google/xtreme_s", "fleurs.all") # to download all data
# see structure
print(fleurs_langID)
# load audio sample on the fly
audio_input = fleurs_langID["train"][0]["audio"] # first decoded audio sample
language_class = fleurs_langID["train"][0]["lang_id"] # first id class
language = fleurs_langID["train"].features["lang_id"].names[language_class]
# use audio_input and language_class to fine-tune your model for audio classification
```
#### Intent classification - Minds-14
Minds-14 is an intent classification dataset made from e-banking speech data in 14 languages, with 14 intent labels. We impose a single multilingual fine-tuning setup to increase the size of the train and test sets and reduce the variance associated with the small size of the dataset per language.
```py
from datasets import load_dataset
minds_14 = load_dataset("google/xtreme_s", "minds14.fr-FR") # for French
# to download all data for multi-lingual fine-tuning uncomment following line
# minds_14 = load_dataset("google/xtreme_s", "minds14.all")
# see structure
print(minds_14)
# load audio sample on the fly
audio_input = minds_14["train"][0]["audio"] # first decoded audio sample
intent_class = minds_14["train"][0]["intent_class"] # first transcription
intent = minds_14["train"].features["intent_class"].names[intent_class]
# use audio_input and language_class to fine-tune your model for audio classification
```
### 4. (Optionally) Speech Retrieval
We optionally include one speech retrieval dataset: FLEURS-Retrieval as explained in the [FLEURS paper](https://arxiv.org/abs/2205.12446).
#### FLEURS-Retrieval
FLEURS-Retrieval provides n-way parallel speech and text data. Similar to how XTREME for text leverages Tatoeba to evaluate bitext mining, a.k.a. sentence translation retrieval, we use FLEURS-Retrieval to evaluate the quality of fixed-size representations of speech utterances. Our goal is to incentivize the creation of fixed-size speech encoders for speech retrieval. The system has to retrieve the English "key" utterance corresponding to the speech translation of "queries" in 15 languages. Results have to be reported on the test sets of FLEURS-Retrieval whose utterances are used as queries (and keys for English). We augment the English keys with a large number of utterances to make the task more difficult.
```py
from datasets import load_dataset
fleurs_retrieval = load_dataset("google/xtreme_s", "fleurs.af_za") # for Afrikaans
# to download all data for multi-lingual fine-tuning uncomment following line
# fleurs_retrieval = load_dataset("google/xtreme_s", "fleurs.all")
# see structure
print(fleurs_retrieval)
# load audio sample on the fly
audio_input = fleurs_retrieval["train"][0]["audio"] # decoded audio sample
text_sample_pos = fleurs_retrieval["train"][0]["transcription"] # positive text sample
text_sample_neg = fleurs_retrieval["train"][1:20]["transcription"] # negative text samples
# use `audio_input`, `text_sample_pos`, and `text_sample_neg` to fine-tune your model for retrieval
```
Users can leverage the training (and dev) sets of FLEURS-Retrieval with a ranking loss to build better cross-lingual fixed-size representations of speech.
## Dataset Structure
The XTREME-S benchmark is composed of the following datasets:
- [FLEURS](https://huggingface.co/datasets/google/fleurs#dataset-structure)
- [Multilingual Librispeech (MLS)](https://huggingface.co/datasets/facebook/multilingual_librispeech#dataset-structure)
Note that for MLS, XTREME-S uses `path` instead of `file` and `transcription` instead of `text`.
- [Voxpopuli](https://huggingface.co/datasets/facebook/voxpopuli#dataset-structure)
- [Minds14](https://huggingface.co/datasets/polyai/minds14#dataset-structure)
- [Covost2](https://huggingface.co/datasets/covost2#dataset-structure)
Note that for Covost2, XTREME-S uses `path` instead of `file` and `transcription` instead of `sentence`.
- [BABEL](https://huggingface.co/datasets/ldc/iarpa_babel#dataset-structure)
Please click on the link of the dataset cards to get more information about its dataset structure.
## Dataset Creation
The XTREME-S benchmark is composed of the following datasets:
- [FLEURS](https://huggingface.co/datasets/google/fleurs#dataset-creation)
- [Multilingual Librispeech (MLS)](https://huggingface.co/datasets/facebook/multilingual_librispeech#dataset-creation)
- [Voxpopuli](https://huggingface.co/datasets/facebook/voxpopuli#dataset-creation)
- [Minds14](https://huggingface.co/datasets/polyai/minds14#dataset-creation)
- [Covost2](https://huggingface.co/datasets/covost2#dataset-creation)
- [BABEL](https://huggingface.co/datasets/ldc/iarpa_babel#dataset-creation)
Please visit the corresponding dataset cards to get more information about the source data.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is meant to encourage the development of speech technology in many more of the world's languages. One of the goals is to give everyone equal access to technologies like speech recognition and speech translation, meaning better dubbing and better access to content from the internet (such as podcasts, streaming or videos).
### Discussion of Biases
Most datasets have a fair distribution of gender utterances (e.g. the newly introduced FLEURS dataset). While many languages are covered from various regions of the world, the benchmark misses many languages that are all equally important. We believe technology built through XTREME-S should generalize to all languages.
### Other Known Limitations
The benchmark has a particular focus on read-speech because common evaluation benchmarks like CoVoST-2 or LibriSpeech evaluate on this type of speech. There is sometimes a known mismatch between performance obtained in a read-speech setting and a more noisy setting (in production for instance). Given the big progress that remains to be made on many languages, we believe better performance on XTREME-S should still correlate well with actual progress made for speech understanding.
## Additional Information
All datasets are licensed under the [Creative Commons license (CC-BY)](https://creativecommons.org/licenses/).
### Citation Information
#### XTREME-S
```
@article{conneau2022xtreme,
title={XTREME-S: Evaluating Cross-lingual Speech Representations},
author={Conneau, Alexis and Bapna, Ankur and Zhang, Yu and Ma, Min and von Platen, Patrick and Lozhkov, Anton and Cherry, Colin and Jia, Ye and Rivera, Clara and Kale, Mihir and others},
journal={arXiv preprint arXiv:2203.10752},
year={2022}
}
```
#### MLS
```
@article{Pratap2020MLSAL,
title={MLS: A Large-Scale Multilingual Dataset for Speech Research},
author={Vineel Pratap and Qiantong Xu and Anuroop Sriram and Gabriel Synnaeve and Ronan Collobert},
journal={ArXiv},
year={2020},
volume={abs/2012.03411}
}
```
#### VoxPopuli
```
@article{wang2021voxpopuli,
title={Voxpopuli: A large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation},
author={Wang, Changhan and Riviere, Morgane and Lee, Ann and Wu, Anne and Talnikar, Chaitanya and Haziza, Daniel and Williamson, Mary and Pino, Juan and Dupoux, Emmanuel},
journal={arXiv preprint arXiv:2101.00390},
year={2021}
}
```
#### CoVoST 2
```
@article{DBLP:journals/corr/abs-2007-10310,
author = {Changhan Wang and
Anne Wu and
Juan Miguel Pino},
title = {CoVoST 2: {A} Massively Multilingual Speech-to-Text Translation Corpus},
journal = {CoRR},
volume = {abs/2007.10310},
year = {2020},
url = {https://arxiv.org/abs/2007.10310},
eprinttype = {arXiv},
eprint = {2007.10310},
timestamp = {Thu, 12 Aug 2021 15:37:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2007-10310.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
#### Minds14
```
@article{gerz2021multilingual,
title={Multilingual and cross-lingual intent detection from spoken data},
author={Gerz, Daniela and Su, Pei-Hao and Kusztos, Razvan and Mondal, Avishek and Lis, Micha{\l} and Singhal, Eshan and Mrk{\v{s}}i{\'c}, Nikola and Wen, Tsung-Hsien and Vuli{\'c}, Ivan},
journal={arXiv preprint arXiv:2104.08524},
year={2021}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@anton-l](https://github.com/anton-l), [@aconneau](https://github.com/aconneau) for adding this dataset
|
allenai/reward-bench | allenai | 2024-09-09T16:11:45Z | 7,408 | 94 | [
"task_categories:question-answering",
"language:en",
"license:odc-by",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2403.13787",
"arxiv:2310.07641",
"arxiv:2308.01263",
"arxiv:2308.07124",
"doi:10.57967/hf/2457",
"region:us"
] | [
"question-answering"
] | 2023-12-14T18:38:07Z | null | ---
language:
- en
license: odc-by
size_categories:
- 1K<n<10K
task_categories:
- question-answering
pretty_name: RM Bench
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: chosen_model
dtype: string
- name: rejected
dtype: string
- name: rejected_model
dtype: string
- name: subset
dtype: string
- name: id
dtype: int64
splits:
- name: raw
num_bytes: 10837043
num_examples: 5123
- name: filtered
num_bytes: 4849207
num_examples: 2985
download_size: 7943951
dataset_size: 15686250
configs:
- config_name: default
data_files:
- split: raw
path: data/raw-*
- split: filtered
path: data/filtered-*
---
<img src="https://huggingface.co/spaces/allenai/reward-bench/resolve/main/src/logo.png" alt="RewardBench Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
[Code](https://github.com/allenai/reward-bench) | [Leaderboard](https://huggingface.co/spaces/allenai/reward-bench) | [Prior Preference Sets](https://huggingface.co/datasets/allenai/pref-test-sets) | [Results](https://huggingface.co/datasets/allenai/reward-bench-results) | [Paper](https://arxiv.org/abs/2403.13787)
# Reward Bench Evaluation Dataset Card
The RewardBench evaluation dataset evaluates capabilities of reward models over the following categories:
1. **Chat**: Includes the easy chat subsets (alpacaeval-easy, alpacaeval-length, alpacaeval-hard, mt-bench-easy, mt-bench-medium)
2. **Chat Hard**: Includes the hard chat subsets (mt-bench-hard, llmbar-natural, llmbar-adver-neighbor, llmbar-adver-GPTInst, llmbar-adver-GPTOut, llmbar-adver-manual)
3. **Safety**: Includes the safety subsets (refusals-dangerous, refusals-offensive, xstest-should-refuse, xstest-should-respond, do not answer)
4. **Reasoning**: Includes the code and math subsets (math-prm, hep-cpp, hep-go, hep-java, hep-js, hep-python, hep-rust)
The RewardBench leaderboard averages over these subsets and a final category from [prior preference data test sets](https://huggingface.co/datasets/allenai/preference-test-sets) including Anthropic Helpful, Anthropic HHH in BIG-Bench, Stanford Human Preferences (SHP), and OpenAI's Learning to Summarize data.
The scoring for RewardBench compares the score of a prompt-chosen pair to a prompt-rejected pair.
Success is when the chosen score is higher than the rejected one.
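In code, this per-prompt comparison amounts to checking whether the chosen completion's reward exceeds the rejected one's (a minimal sketch with hypothetical scores):
```python
# Hypothetical reward-model scores for three prompts (chosen vs. rejected completion).
chosen_scores = [1.8, 0.4, 2.1]
rejected_scores = [0.9, 0.7, 1.5]

# A prompt counts as a success when the chosen score is strictly higher.
successes = [c > r for c, r in zip(chosen_scores, rejected_scores)]
accuracy = sum(successes) / len(successes)
print(f"Accuracy: {accuracy:.2f}")  # 0.67 for these scores
```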
<img src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/reward-bench/scoring.png" alt="RewardBench Scoring" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
In order to create a representative, single evaluation score, we perform a limited mixture of averaging across results.
For all the subsets detailed below except for Reasoning, we perform per-prompt weighted averaging across all the prompts in the subset to get the section score.
For example, in Chat we take a weighted average of the AlpacaEval and MT Bench sets based on the number of prompts.
For Reasoning, we increase the weight of the PRM-Math subset so code and math abilities are weighed equally in the final number, rather than increasing the relevance of code.
Once all of the subsets' weighted averages are computed, the final RewardBench score is the average across the resulting section scores (including Prior Sets).
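A minimal sketch of the per-prompt weighted averaging described above (the subset accuracies and prompt counts below are hypothetical; the official aggregation lives in the repository linked above):
```python
# Hypothetical per-subset results for the Chat section: (accuracy, number of prompts).
chat_results = {
    "alpacaeval-easy": (0.95, 100),
    "alpacaeval-length": (0.90, 95),
    "alpacaeval-hard": (0.88, 95),
    "mt-bench-easy": (0.96, 28),
    "mt-bench-medium": (0.85, 40),
}

# Per-prompt weighted average across the subsets in the section.
total_prompts = sum(n for _, n in chat_results.values())
chat_score = sum(acc * n for acc, n in chat_results.values()) / total_prompts
print(f"Chat section score: {chat_score:.3f}")
```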
## Dataset Details
In order to maintain all the relevant data, the samples in the dataset will have the following items.
Note, the dataset is single-turn:
* `prompt` (`str`): the instruction given in the various test sets.
* `chosen` (`str`): the response from the better model or the better rated prompt.
* `chosen_model` (`str`): where applicable
* `rejected` (`str`): the response with the lower score or from the worse model.
* `rejected_model` (`str`): where applicable
* `subset` (`str`): the subset (e.g. alpacaeval-easy) of the associated prompt as the dataset is all in one split.
* `id` (`int`): an incremented id for every prompt in the benchmark.
To select a specific subset use HuggingFace Datasets `.filter` functionality.
```
dataset = dataset.filter(lambda ex: ex["subset"] == "alpacaeval-easy")
```
This can easily be converted to the standard chosen/rejected list of messages format (see [UltraFeedback for an example](https://huggingface.co/datasets/allenai/ultrafeedback_binarized_cleaned)), for example with our data loading utilities on [GitHub](https://github.com/allenai/reward-bench/blob/8eadb09397d58f1930d4f77938e618b9f9b8aeb3/rewardbench/utils.py#L330).
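A minimal single-turn conversion sketch (the `to_messages` helper below is hypothetical and not part of the released utilities):
```python
def to_messages(example):
    # Build chosen/rejected conversations as lists of role/content messages.
    user_turn = {"role": "user", "content": example["prompt"]}
    return {
        "chosen": [user_turn, {"role": "assistant", "content": example["chosen"]}],
        "rejected": [user_turn, {"role": "assistant", "content": example["rejected"]}],
    }

dataset = dataset.map(to_messages)
```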
### Subset Summary
The total number of prompts is 2985.
| Subset | Num. Samples (Pre-filtering, post-filtering) | Description |
| :---------- | :-----: | :---------: |
| alpacaeval-easy | 805, 100 | Great model vs poor model; GPT4-Turbo 97.7% v. Alpaca 7b 26.46% (data [here](https://github.com/tatsu-lab/alpaca_eval/tree/main/results)) |
| alpacaeval-length | 805, 95 | Good model vs low model, similar length; Llama2chat 70B 92.66% vs Guanaco 13B 52.61% (data [here](https://github.com/tatsu-lab/alpaca_eval/tree/main/results)) |
| alpacaeval-hard | 805, 95 | Great model vs baseline model; Tulu 2 95.0% v. Davinici003 50.0% (data [here](https://github.com/tatsu-lab/alpaca_eval/tree/main/results))|
| mt-bench-easy | 28, 28 | MT Bench 10s vs 1s (source [data](https://huggingface.co/spaces/lmsys/mt-bench/tree/main/data/mt_bench)) |
| mt-bench-medium | 45, 40 | MT Bench 9s vs 2-5s (source [data](https://huggingface.co/spaces/lmsys/mt-bench/tree/main/data/mt_bench)) |
| mt-bench-hard | 45, 37 | MT Bench 7-8 vs 5-6 (source [data](https://huggingface.co/spaces/lmsys/mt-bench/tree/main/data/mt_bench)) |
| refusals-dangerous | 505, 100 | Dangerous rejected response vs polite chosen refusal |
| refusals-offensive | 704, 100 | Offensive rejected response vs polite chosen refusal |
| llmbar-natural | 100 | Manually curated instruction pairs (See [paper](https://arxiv.org/abs/2310.07641)) |
| llmbar-adver-neighbor | 134 | Adversarial instruction response vs. off-topic prompt response (See [paper](https://arxiv.org/abs/2310.07641))|
| llmbar-adver-GPTInst | 92 | Adversarial instruction response vs. GPT4 generated off-topic prompt response (See [paper](https://arxiv.org/abs/2310.07641))|
| llmbar-adver-GPTOut | 47 | Adversarial instruction response vs. unhelpful-prompted GPT4 responses (See [paper](https://arxiv.org/abs/2310.07641))|
| llmbar-adver-manual | 46 | Challenge set manually designed chosen vs. rejected |
| xstest-should-refuse | 450, 154 | False response dataset (see [paper](https://arxiv.org/abs/2308.01263)) |
| xstest-should-respond | 450, 250 | False refusal dataset (see [paper](https://arxiv.org/abs/2308.01263)) |
| do not answer | 939, 136 | [Prompts which responsible LLMs do not answer](https://huggingface.co/datasets/LibrAI/do-not-answer): Refusals are chosen and responses are rejected |
| hep-cpp | 164 | C++ working code vs. buggy code (See [dataset](https://huggingface.co/datasets/bigcode/humanevalpack) or [paper](https://arxiv.org/abs/2308.07124)) |
| hep-go | 164 | Go working code vs. buggy code |
| hep-java | 164 | Java working code vs. buggy code |
| hep-js | 164 | Javascript working code vs. buggy code |
| hep-python | 164 | Python working code vs. buggy code |
| hep-rust | 164 | Rust working code vs. buggy code |
| math-prm | 447 | Human references vs. model error (see [paper](https://github.com/openai/prm800k)) |
The length distribution of the subsets with a Llama tokenizer is shown below.
| subset | Chosen Mean Tokens | Rejected Mean Tokens | Chosen Max Tokens | Rejected Max Tokens | Chosen Min Tokens | Rejected Min Tokens | Chosen Mean Unique Tokens | Rejected Mean Unique Tokens | Chosen Max Unique Tokens | Rejected Max Unique Tokens | Chosen Min Unique Tokens | Rejected Min Unique Tokens |
|-----------------------|----------------------|------------------------|---------------------|-----------------------|---------------------|-----------------------|-----------------------------|-------------------------------|----------------------------|------------------------------|----------------------------|------------------------------|
| alpacaeval-easy | 591.26 | 167.33 | 1332 | 1043 | 40 | 15 | 252.91 | 83.44 | 630 | 290 | 33 | 12 |
| alpacaeval-hard | 411.684 | 136.926 | 1112 | 711 | 57 | 12 | 172.537 | 70.9684 | 359 | 297 | 45 | 8 |
| alpacaeval-length | 510.589 | 596.895 | 1604 | 2242 | 55 | 52 | 192.442 | 188.547 | 434 | 664 | 30 | 38 |
| donotanswer | 169.61 | 320.5 | 745 | 735 | 20 | 20 | 103.743 | 156.941 | 358 | 337 | 18 | 13 |
| hep-cpp | 261.262 | 259.488 | 833 | 835 | 53 | 57 | 99.8537 | 99.372 | 201 | 201 | 37 | 40 |
| hep-go | 266.22 | 264.598 | 732 | 720 | 55 | 57 | 99.622 | 99.189 | 201 | 201 | 36 | 37 |
| hep-java | 263.14 | 260.939 | 748 | 733 | 55 | 54 | 102.311 | 101.927 | 207 | 206 | 39 | 41 |
| hep-js | 251.165 | 249.695 | 771 | 774 | 53 | 52 | 93.2744 | 92.9268 | 192 | 192 | 37 | 40 |
| hep-python | 211.988 | 211.146 | 624 | 612 | 53 | 49 | 85.6463 | 85.3049 | 190 | 190 | 36 | 35 |
| hep-rust | 221.256 | 219.049 | 988 | 993 | 46 | 49 | 95.1402 | 94.8354 | 192 | 192 | 36 | 36 |
| llmbar-adver-GPTInst | 170.109 | 377.359 | 636 | 959 | 15 | 15 | 92.9457 | 179.37 | 287 | 471 | 12 | 13 |
| llmbar-adver-GPTOut | 96.4255 | 101 | 393 | 476 | 18 | 20 | 60.0426 | 55.0426 | 241 | 228 | 13 | 14 |
| llmbar-adver-manual | 159.804 | 264.37 | 607 | 737 | 23 | 33 | 91.9565 | 140.13 | 273 | 385 | 18 | 24 |
| llmbar-adver-neighbor | 70.2239 | 172.507 | 603 | 865 | 9 | 13 | 43.3134 | 90.9328 | 250 | 324 | 8 | 9 |
| llmbar-natural | 139.42 | 129.82 | 907 | 900 | 17 | 18 | 74.99 | 70.07 | 354 | 352 | 14 | 14 |
| math-prm | 279.313 | 488.841 | 1608 | 1165 | 35 | 77 | 83.6264 | 124.582 | 237 | 257 | 23 | 46 |
| mt-bench-easy | 391.821 | 481.929 | 778 | 1126 | 155 | 31 | 169.071 | 121.321 | 288 | 434 | 74 | 19 |
| mt-bench-hard | 287.784 | 301.649 | 573 | 1176 | 68 | 62 | 133.622 | 121.676 | 261 | 309 | 50 | 48 |
| mt-bench-med | 351.375 | 466.025 | 655 | 1297 | 145 | 52 | 159.9 | 140.325 | 285 | 495 | 82 | 41 |
| refusals-dangerous | 208.4 | 458.61 | 380 | 804 | 87 | 103 | 128.53 | 211 | 200 | 365 | 71 | 55 |
| refusals-offensive | 139.82 | 298.63 | 278 | 1117 | 75 | 26 | 95.98 | 134.02 | 170 | 477 | 60 | 19 |
| xstest-should-refuse | 129.227 | 217.019 | 402 | 549 | 18 | 15 | 80.5519 | 116.149 | 194 | 245 | 16 | 13 |
| xstest-should-respond | 188.708 | 107.356 | 515 | 465 | 20 | 16 | 103.788 | 67.328 | 231 | 202 | 15 | 16 |
### Filtering Summary
The RewardBench dataset is filtered down from 5123 source prompts so that the chosen-rejected ranking of each prompt is manually verified.
* The categories of AlpacaEval and MT Bench are manually filtered for every prompt.
* LLMBar, DoNotAnswer, HEP, and Math PRM all contained structured metadata for automatic filtering.
* XSTest is a hybrid of manual confirmation with metadata from the project.
* Refusals are automatically generated as a refusal or response (where the refusal is preferred) with manual confirmation.
Substantial filtering details are available in the appendix of the paper.
If there are any bugs in the data, please reach out!
### License information
Licensing an aggregated dataset is a complex task.
We release the RewardBench dataset under [ODC-BY](https://opendatacommons.org/licenses/by/) requiring the user to follow the licenses of the subsequent parts.
Licensing LLM datasets is an evolving topic. The licenses primarily apply to the prompts; the completions generated by models are often unlicensed.
The details for the datasets used in this work vary in the level of the detail on licenses and method of applying them.
| Dataset | Variants | Data License |
|---------------|----------------------------------------------------------|------------------------------------------------------|
| AlpacaEval | {Easy, Length, Hard} | [CC By NC 4.0](https://github.com/tatsu-lab/alpaca_farm/blob/main/DATA_LICENSE) |
| MT Bench | {Easy, Medium, Hard} | [Apache 2.0](https://github.com/lm-sys/FastChat/blob/main/LICENSE) |
| LLMBar | {Natural, Neighbor, GPTInst, GPTOut, Manual} | [MIT License](https://github.com/princeton-nlp/LLMBar?tab=MIT-1-ov-file) |
| Do Not Answer | | [CC BY NC SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) |
| XSTest | {Should Respond, Should Refuse} | [CC By 4.0](https://github.com/paul-rottger/exaggerated-safety?tab=CC-BY-4.0-1-ov-file) |
| HumanEvalPack | {HEP CPP, Go, Java, Javascript, Python, Rust} | [MIT License](https://github.com/bigcode-project/octopack?tab=MIT-1-ov-file) |
| PRM Math | | [MIT License](https://github.com/openai/prm800k?tab=MIT-1-ov-file) |
Within this dataset are prompts created by AI2 (the refusals data, released as MIT for now, see official release soon) with completions from API and open models.
More details will come on this soon.
## Development
### Requirements
Building the dataset requires `datasets`.
Maintaining the script and notebooks requires `notebook`.
```
pip install datasets notebook nbconvert
```
Convert with:
```
jupyter nbconvert --to script [YOUR_NOTEBOOK].ipynb
```
With no changes to the ipynb, the dataset can be re-built and pushed with the following (PLEASE BE CAREFUL):
```
python build_dataset.py
```
### Git LFS notes
If your uploads fail with:
```
Git LFS upload failed: 14% (1/7), 4.2 MB | 0 B/s
(missing) data/train-00000-of-00001.parquet (425c88744455a9b0e7248cdd81fe4716085aae22849798f653f59fc878117a4d)
hint: Your push was rejected due to missing or corrupt local objects.
hint: You can disable this check with: `git config lfs.allowincompletepush true`
```
First fetch all lfs objects:
```
git lfs fetch --all origin main
```
### Filtering script (basic)
To filter data, run the following script:
```
python scripts/filter.py subset-name 0
```
with a subset from the dataset and a start index.
---
## Citation
```
@misc{RewardBench,
title={RewardBench: Evaluating Reward Models for Language Modeling},
author={Lambert, Nathan and Pyatkin, Valentina and Morrison, Jacob and Miranda, LJ and Lin, Bill Yuchen and Chandu, Khyathi and Dziri, Nouha and Kumar, Sachin and Zick, Tom and Choi, Yejin and Smith, Noah A. and Hajishirzi, Hannaneh},
year={2024},
howpublished={\url{https://huggingface.co/spaces/allenai/reward-bench}}
}
``` |
nvidia/ChatQA2-Long-SFT-data | nvidia | 2024-09-09T05:39:09Z | 287 | 28 | [
"language:en",
"license:cc-by-nc-2.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2407.14482",
"region:us",
"RAG",
"conversational QA",
"multi-turn QA",
"QA with context",
"train"
] | [] | 2024-09-08T07:25:03Z | 2 | ---
license: cc-by-nc-2.0
size_categories:
- 10K<n<100K
language:
- en
tags:
- RAG
- conversational QA
- multi-turn QA
- QA with context
- train
configs:
- config_name: long_sft
data_files:
- split: train
path: long_sft/long_sft_QA_train.json
- split: test
path: long_sft/long_sft_QA_dev.json
- config_name: NarrativeQA_131072
data_files:
- split: train
path: NarrativeQA_131072/NarrativeQA_131072_QA_train.json
- split: test
path: NarrativeQA_131072/NarrativeQA_131072_QA_dev.json
---
## Data Description
Here, we release the full long SFT training dataset of [ChatQA2](https://arxiv.org/abs/2407.14482). It consists of two parts: **long_sft** and **NarrativeQA_131072**. The long_sft dataset is built from existing datasets: [LongAlpaca12k](https://github.com/dvlab-research/LongLoRA), GPT-4 samples from [Open Orca](https://huggingface.co/datasets/Open-Orca/OpenOrca), and [Long Data Collections](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections). The NarrativeQA_131072 dataset is synthetically generated from NarrativeQA by adding related paragraphs to the given ground truth summary. For the first two training stages of ChatQA-2, we follow [ChatQA1.5](https://huggingface.co/datasets/nvidia/ChatQA-Training-Data).
For the continued pretraining dataset, we simply follow [Long-Context Data Engineering](https://github.com/FranxYao/Long-Context-Data-Engineering) to generate 10B tokens. **For more information about ChatQA-2, check the [website](https://chatqa2-project.github.io/)!**
## Other Resources
[Llama3-ChatQA-2-8B](https://huggingface.co/nvidia/Llama3-ChatQA-2-8B)   [Llama3-ChatQA-1.5-70B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-70B)   [Evaluation Data](https://huggingface.co/nvidia/Llama3-ChatQA-2-70B/tree/main/data)   [Website](https://chatqa2-project.github.io/)   [Paper](https://arxiv.org/abs/2407.14482)
## Training Details
The training follows a three-stage instruction tuning process. For the first two stages, we follow ChatQA-1.5: stage-1 uses the SFT data, and stage-2 uses a blend of SFT data alongside other datasets. The dataset blending ratio for stage-2 is as follows:
- drop: 0.069
- narrativeqa: 0.095
- quoref: 0.026
- ropes: 0.026
- squad1.1: 0.095
- squad2.0: 0.095
- newsqa: 0.095
- tatqa-arithmetic: 0.15
- tatqa-others: 0.08
- synthetic_convqa: 0.3
- sft: 0.2
Stage-3 adds the full long SFT dataset to the blend. The new dataset blending ratio for stage-3 is as follows (a small weighted-sampling sketch follows the list):
- drop: 0.069
- narrativeqa: 0.095
- quoref: 0.026
- ropes: 0.026
- squad1.1: 0.095
- squad2.0: 0.095
- newsqa: 0.095
- tatqa-arithmetic: 0.15
- tatqa-others: 0.08
- synthetic_convqa: 0.3
- sft: 0.2
- long_sft: 2.5
- NarrativeQA_131072: 5.0
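As an illustration only (not the training code used for ChatQA-2), a blend like the above can be realised by sampling each example's source dataset with probability proportional to its ratio:
```
import random

# Stage-3 blending ratios from the list above, used as (unnormalised) sampling weights.
ratios = {
    "drop": 0.069, "narrativeqa": 0.095, "quoref": 0.026, "ropes": 0.026,
    "squad1.1": 0.095, "squad2.0": 0.095, "newsqa": 0.095,
    "tatqa-arithmetic": 0.15, "tatqa-others": 0.08, "synthetic_convqa": 0.3,
    "sft": 0.2, "long_sft": 2.5, "NarrativeQA_131072": 5.0,
}
names, weights = zip(*ratios.items())

def sample_source() -> str:
    # Pick which dataset the next training example is drawn from.
    return random.choices(names, weights=weights, k=1)[0]
```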
## License
The dataset is released for non-commercial use only, subject to [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI.
## Correspondence to
Peng Xu ([email protected]), Wei Ping ([email protected])
## Citation
<pre>
@article{xu2024chatqa,
title={ChatQA 2: Bridging the Gap to Proprietary LLMs in Long Context and RAG Capabilities},
author={Xu, Peng and Ping, Wei and Wu, Xianchao and Liu, Zihan and Shoeybi, Mohammad and Catanzaro, Bryan},
journal={arXiv preprint arXiv:2407.14482},
year={2024}
}
</pre>
|
joujiboi/japanese-anime-speech | joujiboi | 2024-09-06T18:19:39Z | 1,287 | 124 | [
"task_categories:automatic-speech-recognition",
"language:ja",
"license:cc0-1.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"anime",
"japanese",
"日本語",
"nihongo",
"speech",
"audio-text",
"asr",
"whisper",
"voice",
"large-v3",
"ja",
"jp"
] | [
"automatic-speech-recognition"
] | 2023-11-07T13:53:40Z | null | ---
language:
- ja
license: cc0-1.0
size_categories:
- 10K<n<100K
task_categories:
- automatic-speech-recognition
pretty_name: Japanese-Anime-Speech
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 10116168716.932
num_examples: 73004
download_size: 8832932312
dataset_size: 10116168716.932
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- anime
- japanese
- 日本語
- nihongo
- speech
- audio-text
- asr
- whisper
- voice
- large-v3
- ja
- jp
---
# Japanese Anime Speech Dataset
[**日本語はこちら**](https://huggingface.co/datasets/joujiboi/japanese-anime-speech/blob/main/README_JA.md)
**japanese-anime-speech** is an audio-text dataset designed for the training of automatic speech recognition models. The dataset is comprised of thousands of audio clips and their corresponding transcriptions from different visual novels.
The goal of this dataset is to improve the accuracy of automatic speech recognition models, such as OpenAI's [Whisper](https://huggingface.co/openai/whisper-large-v2), in transcribing dialogue from anime and other similar Japanese media. This genre is characterized by unique linguistic features and speech patterns that diverge from conventional Japanese speech.
A list of all audio files and transcriptions can be found [**here**](https://huggingface.co/datasets/joujiboi/japanese-anime-speech/raw/main/audio_transcription_list.txt).
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
<p><b>Content Warning:</b> Please be advised that the majority of the audio in this dataset is sourced from visual novels and may include content that is not suitable for all audiences, such as suggestive sounds or mature topics. Efforts have been undertaken to minimise this content as much as possible. </p>
</div>
# Dataset information
* **73,004** audio-text pairs
* **110 hours** of audio (OpenAI suggests a minimum of [5 hours](https://huggingface.co/blog/fine-tune-whisper) for productive [Whisper](https://huggingface.co/openai/whisper-large-v2) fine-tuning).
* **5.4s** average audio length
* Audio source: **visual novels**
* Latest version: **V5 - March 22nd 2024**
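A minimal loading sketch with the Hugging Face `datasets` library (column names follow the dataset info above; resampling to 16 kHz matches what Whisper expects as input):
```
from datasets import Audio, load_dataset

ds = load_dataset("joujiboi/japanese-anime-speech", split="train")
ds = ds.cast_column("audio", Audio(sampling_rate=16000))  # Whisper expects 16 kHz audio

sample = ds[0]
print(sample["transcription"])
print(sample["audio"]["array"].shape, sample["audio"]["sampling_rate"])
```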
# Changelog
* V1 - This version contains **16,143** audio-text pairs from the visual novel `IxSHE Tell`. Some cleaning of the transcriptions has been done to get rid of unwanted characters at the start and end of lines.
* V2 - This version contains **23,422** audio-text pairs from three different visual novels. Cleaning has been done to remove most nsfw lines, especially noises that aren't words. The audio is now in mp3 format, rather than wav. This version contains **32.6** hours of audio.
* V3 - This version contains **38,325** audio-text pairs from five different visual novels. Thorough cleaning has been done to remove most nsfw or low-quality audio files. Transcriptions have been formatted to contain far fewer dramatised duplicated characters (for example 「ああああーーー」), and transcriptions have been made much more consistent. This version contains **52.5 hours** of audio.
* V4 - The dataset contains **47,844** audio-text pairs from six different visual novels. Thorough cleaning has been done to remove most nsfw or low-quality audio files. This version contains **63.4 hours** of audio.
* **V5** - The dataset contains **73,004** audio-text pairs from eight different visual novels. Thorough cleaning has been done to remove most nsfw or low-quality audio files. This version contains **110 hours** of audio.
# Bias and Limitations
This dataset, while valuable for training anime-style Japanese speech recognition, has some inherent biases and limitations. The audio is primarily sourced from visual novels, leading to a gender bias towards female voices and a domain-specific vocabulary revolving around topics such as love, relationships, and fantasy. Additionally, the professionally produced nature of the audio results in clear and slow speech, which may not fully reflect real-world speaking patterns.
# Use & Credit
This dataset is openly available for commercial or non-commercial use. Anyone is welcome to use this dataset as they deem appropriate. However, the creator assumes no responsibility for the consequences of its use. While not mandatory, crediting this dataset with a hyperlink in any derivative work would be greatly appreciated.
I hope that by sharing this dataset, we (the open-source community) improve automatic speech recognition for anime content. |
MoreTorque/rlogs | MoreTorque | 2024-09-06T18:18:47Z | 10,867 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2024-06-22T21:22:44Z | null | ---
license: apache-2.0
---
## Dataset Overview
This dataset consists of bz2 compressed rlogs captured from OpenPilot. These logs do not contain any user-sensitive information.
The primary purpose of this dataset is to facilitate data-driven solutions for improving lateral and longitudinal control in community-supported vehicles.
# Contents
The rlogs are composed of various events as defined in the OpenPilot log schema.
To ensure user privacy, many event types have been excluded. The following is the list of events that remain in the log files:
```
* CarState
* LiveParameters
* CarControl
* LateralPlanDEPRECATED
* CarOutput
* CarParams
* ModelV2
* LiveTorqueParameters
* LiveCalibration
* Sendcan
* Can
* LongitudinalPlan
* LiveLocationKalman ( Only OrientationNED, CalibratedOrientationNED, and AngularVelocityCalibrated have been kept )
``` |
4DR1455/finance_questions | 4DR1455 | 2024-09-05T10:59:48Z | 193 | 11 | [
"language:en",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-06-20T11:17:35Z | 2 | ---
language:
- en
---
Financial-instruction-aq22 dataset from DeividasM, but with an "input" field added to make it easier to use with a fine-tuning template. Still working on it. |
InfiniFlow/medical_QA | InfiniFlow | 2024-09-05T03:23:52Z | 187 | 27 | [
"license:apache-2.0",
"region:us"
] | [] | 2024-09-05T02:20:22Z | 2 | ---
license: apache-2.0
---
|
alexandrainst/ftspeech | alexandrainst | 2024-09-04T15:01:21Z | 30,392 | 3 | [
"task_categories:automatic-speech-recognition",
"language:da",
"license:other",
"size_categories:1M<n<10M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2005.12368",
"region:us"
] | [
"automatic-speech-recognition"
] | 2023-05-11T13:08:57Z | null | ---
dataset_info:
features:
- name: utterance_id
dtype: string
- name: speaker_gender
dtype: string
- name: sentence
dtype: string
- name: speaker_id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 209434570129.268
num_examples: 995677
- name: dev_balanced
num_bytes: 579692770.829
num_examples: 2601
- name: dev_other
num_bytes: 1725502342.095
num_examples: 7595
- name: test_balanced
num_bytes: 1158740779.222
num_examples: 5534
- name: test_other
num_bytes: 1254987645.527
num_examples: 5837
download_size: 101776974871
dataset_size: 214153493666.941
task_categories:
- automatic-speech-recognition
language:
- da
pretty_name: FT Speech
size_categories:
- 100K<n<1M
license: other
---
# Dataset Card for FT Speech
## Dataset Description
- **Repository:** <https://ftspeech.github.io/>
- **Point of Contact:** [Dan Saattrup Nielsen](mailto:[email protected])
- **Size of downloaded dataset files:** 101.78 GB
- **Size of the generated dataset:** 214.15 GB
- **Total amount of disk used:** 315.93 GB
### Dataset Summary
This dataset is an upload of the [FT Speech dataset](https://ftspeech.github.io/).
The training, validation and test splits are the original ones.
### Supported Tasks and Leaderboards
Training automatic speech recognition models is the intended task for this dataset. No leaderboard is active at this point.
### Languages
The dataset is available in Danish (`da`).
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 101.78 GB
- **Size of the generated dataset:** 214.15 GB
- **Total amount of disk used:** 315.93 GB
An example from the dataset looks as follows.
```
{
'utterance_id': 'S001_20151_M012_P00034-2',
'speaker_gender': 'F',
'sentence': 'alle de fem tekniske justeringer der er en del af lovforslaget',
'speaker_id': 'S001',
'audio': {
'path': 'S001_20151_M012_P00034-2.wav',
'array': array([-3.75366211e-03, -5.27954102e-03, -3.87573242e-03, ...,
9.15527344e-05, -1.52587891e-04, 5.79833984e-04]),
'sampling_rate': 16000
}
}
```
### Data Fields
The data fields are the same among all splits.
- `utterance_id`: a `string` feature.
- `speaker_gender`: a `string` feature.
- `sentence`: a `string` feature.
- `speaker_id`: a `string` feature.
- `audio`: an `Audio` feature.
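For reference, a single split can be loaded directly with the Hugging Face `datasets` library (split names are listed under Dataset Statistics below); this is a usage sketch, not part of the original card:
```
from datasets import load_dataset

# Load only the balanced validation split; use "train", "dev_other",
# "test_balanced", or "test_other" for the remaining splits.
dev = load_dataset("alexandrainst/ftspeech", split="dev_balanced")
print(dev[0]["sentence"], dev[0]["speaker_id"])
```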
### Dataset Statistics
There are 995,677 samples in the training split, 2,601 in the dev_balanced split, 7,595 in the dev_other split, 5,534 in the test_balanced split, and 5,837 in the test_other split.
#### Speakers
There are 374 unique speakers in the training dataset, 20 unique speakers in the validation dataset and 40 unique speakers in the test dataset. None of the dataset splits share any speakers.
#### Gender Distribution

#### Transcription Length Distribution

## Dataset Creation
### Curation Rationale
There are not many large-scale ASR datasets in Danish.
### Source Data
The data constitutes public recordings of sessions from the Danish Parliament, along with manual transcriptions.
## Additional Information
### Dataset Curators
Andreas Kirkedal, Marija Stepanović and Barbara Plank curated the dataset as part of their FT Speech paper (see citation below).
[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from the [The Alexandra
Institute](https://alexandra.dk/) reorganised the dataset and uploaded it to the Hugging Face Hub.
### Licensing Information
The dataset is licensed under [this custom license](https://www.ft.dk/da/aktuelt/tv-fra-folketinget/deling-og-rettigheder).
### Citation
```
@inproceedings{ftspeech,
author = {Kirkedal, Andreas and Stepanović, Marija and Plank, Barbara},
title = {{FT Speech: Danish Parliament Speech Corpus}},
booktitle = {Proc. Interspeech 2020},
year = {2020},
url = {arxiv.org/abs/2005.12368}
}
``` |
BleachNick/UltraEdit | BleachNick | 2024-08-31T13:49:21Z | 52,807 | 8 | [
"task_categories:text-to-image",
"language:en",
"license:cc-by-4.0",
"arxiv:2407.05282",
"doi:10.57967/hf/2481",
"region:us",
"art"
] | [
"text-to-image"
] | 2024-06-09T11:02:13Z | null | ---
language:
- en
license: cc-by-4.0
task_categories:
- text-to-image
dataset_info:
features:
- name: clip_sim_source
dtype: float64
- name: clip_sim_target
dtype: float64
- name: clip_sim_dir
dtype: float64
- name: clip_sim_image
dtype: float64
- name: dinov2_sim
dtype: float64
- name: ssim
dtype: float64
- name: source_caption
dtype: string
- name: target_caption
dtype: string
- name: idx
dtype: int64
- name: edit_prompt
dtype: string
- name: edit_object
dtype: 'null'
- name: source_image
dtype: image
- name: edited_image
dtype: image
- name: mask_image
dtype: 'null'
splits:
- name: FreeForm_0
num_bytes: 759385792
num_examples: 2000
- name: FreeForm_1
num_bytes: 756874067
num_examples: 2000
- name: FreeForm_2
num_bytes: 759183069
num_examples: 2000
- name: FreeForm_3
num_bytes: 755508440
num_examples: 2000
- name: FreeForm_4
num_bytes: 756540442
num_examples: 2000
- name: FreeForm_5
num_bytes: 758622320
num_examples: 2000
- name: FreeForm_6
num_bytes: 761524774
num_examples: 2000
- name: FreeForm_7
num_bytes: 758775293
num_examples: 2000
- name: FreeForm_8
num_bytes: 760200313
num_examples: 2000
- name: FreeForm_9
num_bytes: 768448051
num_examples: 2000
- name: FreeForm_10
num_bytes: 773489315
num_examples: 2000
- name: FreeForm_11
num_bytes: 778109354
num_examples: 2000
- name: FreeForm_12
num_bytes: 778512114
num_examples: 2000
- name: FreeForm_13
num_bytes: 768485969
num_examples: 2000
- name: FreeForm_14
num_bytes: 779712509
num_examples: 2000
- name: FreeForm_15
num_bytes: 765837533
num_examples: 2000
- name: FreeForm_16
num_bytes: 769511714
num_examples: 2000
- name: FreeForm_17
num_bytes: 771149850
num_examples: 2000
- name: FreeForm_18
num_bytes: 771410726
num_examples: 2000
- name: FreeForm_19
num_bytes: 770722184
num_examples: 2000
- name: FreeForm_20
num_bytes: 783276398
num_examples: 2000
- name: FreeForm_21
num_bytes: 776884755
num_examples: 2000
- name: FreeForm_22
num_bytes: 783258028
num_examples: 2000
- name: FreeForm_23
num_bytes: 781541694
num_examples: 2000
- name: FreeForm_24
num_bytes: 781306379
num_examples: 2000
- name: FreeForm_25
num_bytes: 777818799
num_examples: 2000
- name: FreeForm_26
num_bytes: 778351829
num_examples: 2000
- name: FreeForm_27
num_bytes: 778407074
num_examples: 2000
- name: FreeForm_28
num_bytes: 776257503
num_examples: 2000
- name: FreeForm_29
num_bytes: 779274036
num_examples: 2000
- name: FreeForm_30
num_bytes: 779300944
num_examples: 2000
- name: FreeForm_31
num_bytes: 775309985
num_examples: 2000
- name: FreeForm_32
num_bytes: 779442636
num_examples: 2000
- name: FreeForm_33
num_bytes: 784142063
num_examples: 2000
- name: FreeForm_34
num_bytes: 781037956
num_examples: 2000
- name: FreeForm_35
num_bytes: 783237883
num_examples: 2000
- name: FreeForm_36
num_bytes: 782420508
num_examples: 2000
- name: FreeForm_37
num_bytes: 778974851
num_examples: 2000
- name: FreeForm_38
num_bytes: 781037000
num_examples: 2000
- name: FreeForm_39
num_bytes: 780728965
num_examples: 2000
- name: FreeForm_40
num_bytes: 781624433
num_examples: 2000
- name: FreeForm_41
num_bytes: 782390249
num_examples: 2000
- name: FreeForm_42
num_bytes: 780332512
num_examples: 2000
- name: FreeForm_43
num_bytes: 785691458
num_examples: 2000
- name: FreeForm_44
num_bytes: 774303123
num_examples: 2000
- name: FreeForm_45
num_bytes: 775698594
num_examples: 2000
- name: FreeForm_46
num_bytes: 792219548
num_examples: 2000
- name: FreeForm_47
num_bytes: 779527180
num_examples: 2000
- name: FreeForm_48
num_bytes: 768255127
num_examples: 2000
- name: FreeForm_49
num_bytes: 780377695
num_examples: 2000
- name: FreeForm_50
num_bytes: 780951915
num_examples: 2000
- name: FreeForm_51
num_bytes: 781476572
num_examples: 2000
- name: FreeForm_52
num_bytes: 778819875
num_examples: 2000
- name: FreeForm_53
num_bytes: 780021360
num_examples: 2000
- name: FreeForm_54
num_bytes: 780353501
num_examples: 2000
- name: FreeForm_55
num_bytes: 780989870
num_examples: 2000
- name: FreeForm_56
num_bytes: 790152972
num_examples: 2000
- name: FreeForm_57
num_bytes: 773017463
num_examples: 2000
- name: FreeForm_58
num_bytes: 785315245
num_examples: 2000
- name: FreeForm_59
num_bytes: 783225063
num_examples: 2000
- name: FreeForm_60
num_bytes: 779732938
num_examples: 2000
- name: FreeForm_61
num_bytes: 775300360
num_examples: 2000
- name: FreeForm_62
num_bytes: 787277550
num_examples: 2000
- name: FreeForm_63
num_bytes: 785273008
num_examples: 2000
- name: FreeForm_64
num_bytes: 781745081
num_examples: 2000
- name: FreeForm_65
num_bytes: 774655340
num_examples: 2000
- name: FreeForm_66
num_bytes: 786214063
num_examples: 2000
- name: FreeForm_67
num_bytes: 780515365
num_examples: 2000
- name: FreeForm_68
num_bytes: 781112419
num_examples: 2000
- name: FreeForm_69
num_bytes: 784807337
num_examples: 2000
- name: FreeForm_70
num_bytes: 792820805
num_examples: 2000
- name: FreeForm_71
num_bytes: 779452329
num_examples: 2000
- name: FreeForm_72
num_bytes: 782202231
num_examples: 2000
- name: FreeForm_73
num_bytes: 780102581
num_examples: 2000
- name: FreeForm_74
num_bytes: 778207590
num_examples: 2000
- name: FreeForm_75
num_bytes: 773440925
num_examples: 2000
- name: FreeForm_76
num_bytes: 776717338
num_examples: 2000
- name: FreeForm_77
num_bytes: 782872533
num_examples: 2000
- name: FreeForm_78
num_bytes: 781570187
num_examples: 2000
- name: FreeForm_79
num_bytes: 777108477
num_examples: 2000
- name: FreeForm_80
num_bytes: 782422774
num_examples: 2000
- name: FreeForm_81
num_bytes: 780493074
num_examples: 2000
- name: FreeForm_82
num_bytes: 784737791
num_examples: 2000
- name: FreeForm_83
num_bytes: 772319242
num_examples: 2000
- name: FreeForm_84
num_bytes: 783158436
num_examples: 2000
- name: FreeForm_85
num_bytes: 777733688
num_examples: 2000
- name: FreeForm_86
num_bytes: 788418673
num_examples: 2000
- name: FreeForm_87
num_bytes: 785653901
num_examples: 2000
- name: FreeForm_88
num_bytes: 779811756
num_examples: 2000
- name: FreeForm_89
num_bytes: 781032025
num_examples: 2000
- name: FreeForm_90
num_bytes: 782448048
num_examples: 2000
- name: FreeForm_91
num_bytes: 789579728
num_examples: 2000
- name: FreeForm_92
num_bytes: 785851472
num_examples: 2000
- name: FreeForm_93
num_bytes: 776616321
num_examples: 2000
- name: FreeForm_94
num_bytes: 772441019
num_examples: 2000
- name: FreeForm_95
num_bytes: 777885007
num_examples: 2000
- name: FreeForm_96
num_bytes: 779615563
num_examples: 2000
- name: FreeForm_97
num_bytes: 781932881
num_examples: 2000
- name: FreeForm_98
num_bytes: 778767405
num_examples: 2000
- name: FreeForm_99
num_bytes: 781249553
num_examples: 2000
- name: FreeForm_100
num_bytes: 777582777
num_examples: 2000
- name: FreeForm_101
num_bytes: 789079489
num_examples: 2000
- name: FreeForm_102
num_bytes: 773798368
num_examples: 2000
- name: FreeForm_103
num_bytes: 777652284
num_examples: 2000
- name: FreeForm_104
num_bytes: 782763557
num_examples: 2000
- name: FreeForm_105
num_bytes: 775572386
num_examples: 2000
- name: FreeForm_106
num_bytes: 782503475
num_examples: 2000
- name: FreeForm_107
num_bytes: 779729667
num_examples: 2000
- name: FreeForm_108
num_bytes: 785032491
num_examples: 2000
- name: FreeForm_109
num_bytes: 774752941
num_examples: 2000
- name: FreeForm_110
num_bytes: 776262712
num_examples: 2000
- name: FreeForm_111
num_bytes: 780328424
num_examples: 2000
- name: FreeForm_112
num_bytes: 782706800
num_examples: 2000
- name: FreeForm_113
num_bytes: 778603762
num_examples: 2000
- name: FreeForm_114
num_bytes: 781562793
num_examples: 2000
- name: FreeForm_115
num_bytes: 782963964
num_examples: 2000
- name: FreeForm_116
num_bytes: 771866357
num_examples: 2000
- name: FreeForm_117
num_bytes: 769456958
num_examples: 2000
- name: FreeForm_118
num_bytes: 778196876
num_examples: 2000
- name: FreeForm_119
num_bytes: 787450589
num_examples: 2000
- name: FreeForm_120
num_bytes: 788257623
num_examples: 2000
- name: FreeForm_121
num_bytes: 774218024
num_examples: 2000
- name: FreeForm_122
num_bytes: 777310894
num_examples: 2000
- name: FreeForm_123
num_bytes: 782304214
num_examples: 2000
- name: FreeForm_124
num_bytes: 787668207
num_examples: 2000
- name: FreeForm_125
num_bytes: 782149440
num_examples: 2000
- name: FreeForm_126
num_bytes: 772279923
num_examples: 2000
- name: FreeForm_127
num_bytes: 782051230
num_examples: 2000
- name: FreeForm_128
num_bytes: 779704525
num_examples: 2000
- name: FreeForm_129
num_bytes: 784954990
num_examples: 2000
- name: FreeForm_130
num_bytes: 783706718
num_examples: 2000
- name: FreeForm_131
num_bytes: 778920587
num_examples: 2000
- name: FreeForm_132
num_bytes: 777609528
num_examples: 2000
- name: FreeForm_133
num_bytes: 776108392
num_examples: 2000
- name: FreeForm_134
num_bytes: 773424215
num_examples: 2000
- name: FreeForm_135
num_bytes: 783577402
num_examples: 2000
- name: FreeForm_136
num_bytes: 781872028
num_examples: 2000
- name: FreeForm_137
num_bytes: 784396076
num_examples: 2000
- name: FreeForm_138
num_bytes: 782096650
num_examples: 2000
- name: FreeForm_139
num_bytes: 778830416
num_examples: 2000
- name: FreeForm_140
num_bytes: 786000079
num_examples: 2000
- name: FreeForm_141
num_bytes: 781664498
num_examples: 2000
- name: FreeForm_142
num_bytes: 791069332
num_examples: 2000
- name: FreeForm_143
num_bytes: 785025567
num_examples: 2000
- name: FreeForm_144
num_bytes: 777105450
num_examples: 2000
- name: FreeForm_145
num_bytes: 781311359
num_examples: 2000
- name: FreeForm_146
num_bytes: 779953680
num_examples: 2000
- name: FreeForm_147
num_bytes: 787964927
num_examples: 2000
- name: FreeForm_148
num_bytes: 781275038
num_examples: 2000
- name: FreeForm_149
num_bytes: 787792527
num_examples: 2000
- name: FreeForm_150
num_bytes: 775254416
num_examples: 2000
- name: FreeForm_151
num_bytes: 775985702
num_examples: 2000
- name: FreeForm_152
num_bytes: 774217627
num_examples: 2000
- name: FreeForm_153
num_bytes: 785218355
num_examples: 2000
- name: FreeForm_154
num_bytes: 778486283
num_examples: 2000
- name: FreeForm_155
num_bytes: 782013722
num_examples: 2000
- name: FreeForm_156
num_bytes: 781868361
num_examples: 2000
- name: FreeForm_157
num_bytes: 775308631
num_examples: 2000
- name: FreeForm_158
num_bytes: 774627734
num_examples: 2000
- name: FreeForm_159
num_bytes: 793847051
num_examples: 2000
- name: FreeForm_160
num_bytes: 778008360
num_examples: 2000
- name: FreeForm_161
num_bytes: 779105315
num_examples: 2000
- name: FreeForm_162
num_bytes: 774827779
num_examples: 2000
- name: FreeForm_163
num_bytes: 782014203
num_examples: 2000
- name: FreeForm_164
num_bytes: 777132570
num_examples: 2000
- name: FreeForm_165
num_bytes: 776191239
num_examples: 2000
- name: FreeForm_166
num_bytes: 783015253
num_examples: 2000
- name: FreeForm_167
num_bytes: 785442481
num_examples: 2000
- name: FreeForm_168
num_bytes: 776184901
num_examples: 2000
- name: FreeForm_169
num_bytes: 778378698
num_examples: 2000
- name: FreeForm_170
num_bytes: 779983316
num_examples: 2000
- name: FreeForm_171
num_bytes: 782247431
num_examples: 2000
- name: FreeForm_172
num_bytes: 778287241
num_examples: 2000
- name: FreeForm_173
num_bytes: 783732214
num_examples: 2000
- name: FreeForm_174
num_bytes: 784645727
num_examples: 2000
- name: FreeForm_175
num_bytes: 780535529
num_examples: 2000
- name: FreeForm_176
num_bytes: 775325249
num_examples: 2000
- name: FreeForm_177
num_bytes: 781466592
num_examples: 2000
- name: FreeForm_178
num_bytes: 787145952
num_examples: 2000
- name: FreeForm_179
num_bytes: 780889603
num_examples: 2000
- name: FreeForm_180
num_bytes: 773684169
num_examples: 2000
- name: FreeForm_181
num_bytes: 788912563
num_examples: 2000
- name: FreeForm_182
num_bytes: 785582121
num_examples: 2000
- name: FreeForm_183
num_bytes: 784626591
num_examples: 2000
- name: FreeForm_184
num_bytes: 790547359
num_examples: 2000
- name: FreeForm_185
num_bytes: 784622676
num_examples: 2000
- name: FreeForm_186
num_bytes: 769870952
num_examples: 2000
- name: FreeForm_187
num_bytes: 778273211
num_examples: 2000
- name: FreeForm_188
num_bytes: 773796454
num_examples: 2000
- name: FreeForm_189
num_bytes: 789263531
num_examples: 2000
- name: FreeForm_190
num_bytes: 775580113
num_examples: 2000
- name: FreeForm_191
num_bytes: 774644337
num_examples: 2000
- name: FreeForm_192
num_bytes: 779218306
num_examples: 2000
- name: FreeForm_193
num_bytes: 782789594
num_examples: 2000
- name: FreeForm_194
num_bytes: 778522221
num_examples: 2000
- name: FreeForm_195
num_bytes: 769927305
num_examples: 2000
- name: FreeForm_196
num_bytes: 787652053
num_examples: 2000
- name: FreeForm_197
num_bytes: 781281999
num_examples: 2000
- name: FreeForm_198
num_bytes: 784173619
num_examples: 2000
- name: FreeForm_199
num_bytes: 780085733
num_examples: 2000
- name: FreeForm_200
num_bytes: 784857406
num_examples: 2000
- name: FreeForm_201
num_bytes: 781521869
num_examples: 2000
- name: FreeForm_202
num_bytes: 779589554
num_examples: 2000
- name: FreeForm_203
num_bytes: 781196442
num_examples: 2000
- name: FreeForm_204
num_bytes: 772955630
num_examples: 2000
- name: FreeForm_205
num_bytes: 784267323
num_examples: 2000
- name: FreeForm_206
num_bytes: 775806104
num_examples: 2000
- name: FreeForm_207
num_bytes: 779673572
num_examples: 2000
- name: FreeForm_208
num_bytes: 782927457
num_examples: 2000
- name: FreeForm_209
num_bytes: 782826891
num_examples: 2000
- name: FreeForm_210
num_bytes: 784130072
num_examples: 2000
- name: FreeForm_211
num_bytes: 774395254
num_examples: 2000
- name: FreeForm_212
num_bytes: 780161197
num_examples: 2000
- name: FreeForm_213
num_bytes: 774990162
num_examples: 2000
- name: FreeForm_214
num_bytes: 780745487
num_examples: 2000
- name: FreeForm_215
num_bytes: 775570186
num_examples: 2000
- name: FreeForm_216
num_bytes: 780406810
num_examples: 2000
- name: FreeForm_217
num_bytes: 783843708
num_examples: 2000
- name: FreeForm_218
num_bytes: 774349485
num_examples: 2000
- name: FreeForm_219
num_bytes: 786409937
num_examples: 2000
- name: FreeForm_220
num_bytes: 780250550
num_examples: 2000
- name: FreeForm_221
num_bytes: 781397833
num_examples: 2000
- name: FreeForm_222
num_bytes: 787266266
num_examples: 2000
- name: FreeForm_223
num_bytes: 771635959
num_examples: 2000
- name: FreeForm_224
num_bytes: 788040561
num_examples: 2000
- name: FreeForm_225
num_bytes: 779481600
num_examples: 2000
- name: FreeForm_226
num_bytes: 778119416
num_examples: 2000
- name: FreeForm_227
num_bytes: 786426591
num_examples: 2000
- name: FreeForm_228
num_bytes: 775824969
num_examples: 2000
- name: FreeForm_229
num_bytes: 786598208
num_examples: 2000
- name: FreeForm_230
num_bytes: 783115035
num_examples: 2000
- name: FreeForm_231
num_bytes: 777076410
num_examples: 2000
- name: FreeForm_232
num_bytes: 785489709
num_examples: 2000
- name: FreeForm_233
num_bytes: 775771458
num_examples: 2000
- name: FreeForm_234
num_bytes: 778795846
num_examples: 2000
- name: FreeForm_235
num_bytes: 779495945
num_examples: 2000
- name: FreeForm_236
num_bytes: 781436749
num_examples: 2000
- name: FreeForm_237
num_bytes: 779702535
num_examples: 2000
- name: FreeForm_238
num_bytes: 773483348
num_examples: 2000
- name: FreeForm_239
num_bytes: 781337701
num_examples: 2000
- name: FreeForm_240
num_bytes: 777999808
num_examples: 2000
- name: FreeForm_241
num_bytes: 785732711
num_examples: 2000
- name: FreeForm_242
num_bytes: 777647724
num_examples: 2000
- name: FreeForm_243
num_bytes: 782510547
num_examples: 2000
- name: FreeForm_244
num_bytes: 773293727
num_examples: 2000
- name: FreeForm_245
num_bytes: 773450169
num_examples: 2000
- name: FreeForm_246
num_bytes: 782072573
num_examples: 2000
- name: FreeForm_247
num_bytes: 772425825
num_examples: 2000
- name: FreeForm_248
num_bytes: 770148042
num_examples: 2000
- name: FreeForm_249
num_bytes: 780730753
num_examples: 2000
- name: FreeForm_250
num_bytes: 782650664
num_examples: 2000
- name: FreeForm_251
num_bytes: 786425992
num_examples: 2000
- name: FreeForm_252
num_bytes: 787061462
num_examples: 2000
- name: FreeForm_253
num_bytes: 776669565
num_examples: 2000
- name: FreeForm_254
num_bytes: 781733768
num_examples: 2000
- name: FreeForm_255
num_bytes: 776445040
num_examples: 2000
- name: FreeForm_256
num_bytes: 788620171
num_examples: 2000
- name: FreeForm_257
num_bytes: 775265570
num_examples: 2000
- name: FreeForm_258
num_bytes: 772003631
num_examples: 2000
- name: FreeForm_259
num_bytes: 779408477
num_examples: 2000
- name: FreeForm_260
num_bytes: 779275862
num_examples: 2000
- name: FreeForm_261
num_bytes: 781520055
num_examples: 2000
- name: FreeForm_262
num_bytes: 776835207
num_examples: 2000
- name: FreeForm_263
num_bytes: 775937930
num_examples: 2000
- name: FreeForm_264
num_bytes: 779653131
num_examples: 2000
- name: FreeForm_265
num_bytes: 777888893
num_examples: 2000
- name: FreeForm_266
num_bytes: 781868504
num_examples: 2000
- name: FreeForm_267
num_bytes: 782852767
num_examples: 2000
- name: FreeForm_268
num_bytes: 775652379
num_examples: 2000
- name: FreeForm_269
num_bytes: 779021453
num_examples: 2000
- name: FreeForm_270
num_bytes: 775406430
num_examples: 2000
- name: FreeForm_271
num_bytes: 783074385
num_examples: 2000
- name: FreeForm_272
num_bytes: 789294928
num_examples: 2000
- name: FreeForm_273
num_bytes: 791956763
num_examples: 2000
- name: FreeForm_274
num_bytes: 781284476
num_examples: 2000
- name: FreeForm_275
num_bytes: 774852559
num_examples: 2000
- name: FreeForm_276
num_bytes: 780282411
num_examples: 2000
- name: FreeForm_277
num_bytes: 785429026
num_examples: 2000
- name: FreeForm_278
num_bytes: 788139052
num_examples: 2000
- name: FreeForm_279
num_bytes: 778927364
num_examples: 2000
- name: FreeForm_280
num_bytes: 786443524
num_examples: 2000
- name: FreeForm_281
num_bytes: 779796091
num_examples: 2000
- name: FreeForm_282
num_bytes: 771796749
num_examples: 2000
- name: FreeForm_283
num_bytes: 780077185
num_examples: 2000
- name: FreeForm_284
num_bytes: 782657092
num_examples: 2000
- name: FreeForm_285
num_bytes: 777876608
num_examples: 2000
- name: FreeForm_286
num_bytes: 784147879
num_examples: 2000
- name: FreeForm_287
num_bytes: 775759029
num_examples: 2000
- name: FreeForm_288
num_bytes: 779561520
num_examples: 2000
- name: FreeForm_289
num_bytes: 777921916
num_examples: 2000
- name: FreeForm_290
num_bytes: 783983438
num_examples: 2000
- name: FreeForm_291
num_bytes: 780372433
num_examples: 2000
- name: FreeForm_292
num_bytes: 777431434
num_examples: 2000
- name: FreeForm_293
num_bytes: 779945807
num_examples: 2000
- name: FreeForm_294
num_bytes: 777725518
num_examples: 2000
- name: FreeForm_295
num_bytes: 778340933
num_examples: 2000
- name: FreeForm_296
num_bytes: 781648759
num_examples: 2000
- name: FreeForm_297
num_bytes: 781175078
num_examples: 2000
- name: FreeForm_298
num_bytes: 780131274
num_examples: 2000
- name: FreeForm_299
num_bytes: 784700521
num_examples: 2000
- name: FreeForm_300
num_bytes: 778730053
num_examples: 2000
- name: FreeForm_301
num_bytes: 777866814
num_examples: 2000
- name: FreeForm_302
num_bytes: 790628419
num_examples: 2000
- name: FreeForm_303
num_bytes: 783583996
num_examples: 2000
- name: FreeForm_304
num_bytes: 776221743
num_examples: 2000
- name: FreeForm_305
num_bytes: 783094650
num_examples: 2000
- name: FreeForm_306
num_bytes: 773021721
num_examples: 2000
- name: FreeForm_307
num_bytes: 779988657
num_examples: 2000
- name: FreeForm_308
num_bytes: 776359081
num_examples: 2000
- name: FreeForm_309
num_bytes: 784100482
num_examples: 2000
- name: FreeForm_310
num_bytes: 785281984
num_examples: 2000
- name: FreeForm_311
num_bytes: 781660370
num_examples: 2000
- name: FreeForm_312
num_bytes: 778110445
num_examples: 2000
- name: FreeForm_313
num_bytes: 778756717
num_examples: 2000
- name: FreeForm_314
num_bytes: 774237002
num_examples: 2000
- name: FreeForm_315
num_bytes: 780659451
num_examples: 2000
- name: FreeForm_316
num_bytes: 774442869
num_examples: 2000
- name: FreeForm_317
num_bytes: 774284694
num_examples: 2000
- name: FreeForm_318
num_bytes: 784436923
num_examples: 2000
- name: FreeForm_319
num_bytes: 784750776
num_examples: 2000
- name: FreeForm_320
num_bytes: 787640447
num_examples: 2000
- name: FreeForm_321
num_bytes: 783188398
num_examples: 2000
- name: FreeForm_322
num_bytes: 791492001
num_examples: 2000
- name: FreeForm_323
num_bytes: 774960969
num_examples: 2000
- name: FreeForm_324
num_bytes: 775398547
num_examples: 2000
- name: FreeForm_325
num_bytes: 770380367
num_examples: 2000
- name: FreeForm_326
num_bytes: 773936182
num_examples: 2000
- name: FreeForm_327
num_bytes: 775264472
num_examples: 2000
- name: FreeForm_328
num_bytes: 780866391
num_examples: 2000
- name: FreeForm_329
num_bytes: 789020513
num_examples: 2000
- name: FreeForm_330
num_bytes: 773526935
num_examples: 2000
- name: FreeForm_331
num_bytes: 783571566
num_examples: 2000
- name: FreeForm_332
num_bytes: 778752371
num_examples: 2000
- name: FreeForm_333
num_bytes: 782824491
num_examples: 2000
- name: FreeForm_334
num_bytes: 782375700
num_examples: 2000
- name: FreeForm_335
num_bytes: 779975126
num_examples: 2000
- name: FreeForm_336
num_bytes: 785340907
num_examples: 2000
- name: FreeForm_337
num_bytes: 780481911
num_examples: 2000
- name: FreeForm_338
num_bytes: 783014758
num_examples: 2000
- name: FreeForm_339
num_bytes: 779971436
num_examples: 2000
- name: FreeForm_340
num_bytes: 788146419
num_examples: 2000
- name: FreeForm_341
num_bytes: 785031133
num_examples: 2000
- name: FreeForm_342
num_bytes: 786154283
num_examples: 2000
- name: FreeForm_343
num_bytes: 785252303
num_examples: 2000
- name: FreeForm_344
num_bytes: 776938406
num_examples: 2000
- name: FreeForm_345
num_bytes: 775022040
num_examples: 2000
- name: FreeForm_346
num_bytes: 781089177
num_examples: 2000
- name: FreeForm_347
num_bytes: 785469537
num_examples: 2000
- name: FreeForm_348
num_bytes: 780504204
num_examples: 2000
- name: FreeForm_349
num_bytes: 781497921
num_examples: 2000
- name: FreeForm_350
num_bytes: 786463404
num_examples: 2000
- name: FreeForm_351
num_bytes: 778226591
num_examples: 2000
- name: FreeForm_352
num_bytes: 780587554
num_examples: 2000
- name: FreeForm_353
num_bytes: 772724851
num_examples: 2000
- name: FreeForm_354
num_bytes: 784892618
num_examples: 2000
- name: FreeForm_355
num_bytes: 780154389
num_examples: 2000
- name: FreeForm_356
num_bytes: 780139782
num_examples: 2000
- name: FreeForm_357
num_bytes: 783152771
num_examples: 2000
- name: FreeForm_358
num_bytes: 770762762
num_examples: 2000
- name: FreeForm_359
num_bytes: 781486281
num_examples: 2000
- name: FreeForm_360
num_bytes: 784878072
num_examples: 2000
- name: FreeForm_361
num_bytes: 767497077
num_examples: 2000
- name: FreeForm_362
num_bytes: 774209420
num_examples: 2000
- name: FreeForm_363
num_bytes: 775852671
num_examples: 2000
- name: FreeForm_364
num_bytes: 779265355
num_examples: 2000
- name: FreeForm_365
num_bytes: 778746781
num_examples: 2000
- name: FreeForm_366
num_bytes: 780292561
num_examples: 2000
- name: FreeForm_367
num_bytes: 783437604
num_examples: 2000
- name: FreeForm_368
num_bytes: 780490744
num_examples: 2000
- name: FreeForm_369
num_bytes: 784701592
num_examples: 2000
- name: FreeForm_370
num_bytes: 782231635
num_examples: 2000
- name: FreeForm_371
num_bytes: 773713131
num_examples: 2000
- name: FreeForm_372
num_bytes: 780881398
num_examples: 2000
- name: FreeForm_373
num_bytes: 772866562
num_examples: 2000
- name: FreeForm_374
num_bytes: 784456218
num_examples: 2000
- name: FreeForm_375
num_bytes: 781234237
num_examples: 2000
- name: FreeForm_376
num_bytes: 774670015
num_examples: 2000
- name: FreeForm_377
num_bytes: 780022530
num_examples: 2000
- name: FreeForm_378
num_bytes: 786354737
num_examples: 2000
- name: FreeForm_379
num_bytes: 778620546
num_examples: 2000
- name: FreeForm_380
num_bytes: 786067236
num_examples: 2000
- name: FreeForm_381
num_bytes: 783392920
num_examples: 2000
- name: FreeForm_382
num_bytes: 777015603
num_examples: 2000
- name: FreeForm_383
num_bytes: 777137904
num_examples: 2000
- name: FreeForm_384
num_bytes: 775646114
num_examples: 2000
- name: FreeForm_385
num_bytes: 778114996
num_examples: 2000
- name: FreeForm_386
num_bytes: 783206115
num_examples: 2000
- name: FreeForm_387
num_bytes: 783861784
num_examples: 2000
- name: FreeForm_388
num_bytes: 780998933
num_examples: 2000
- name: FreeForm_389
num_bytes: 784625672
num_examples: 2000
- name: FreeForm_390
num_bytes: 772741099
num_examples: 2000
- name: FreeForm_391
num_bytes: 774029608
num_examples: 2000
- name: FreeForm_392
num_bytes: 785257091
num_examples: 2000
- name: FreeForm_393
num_bytes: 780062712
num_examples: 2000
- name: FreeForm_394
num_bytes: 773189878
num_examples: 2000
- name: FreeForm_395
num_bytes: 773945343
num_examples: 2000
- name: FreeForm_396
num_bytes: 786040164
num_examples: 2000
- name: FreeForm_397
num_bytes: 776739162
num_examples: 2000
- name: FreeForm_398
num_bytes: 780130285
num_examples: 2000
- name: FreeForm_399
num_bytes: 779288968
num_examples: 2000
- name: FreeForm_400
num_bytes: 780563799
num_examples: 2000
- name: FreeForm_401
num_bytes: 777749497
num_examples: 2000
- name: FreeForm_402
num_bytes: 787840546
num_examples: 2000
- name: FreeForm_403
num_bytes: 780239764
num_examples: 2000
- name: FreeForm_404
num_bytes: 782720911
num_examples: 2000
- name: FreeForm_405
num_bytes: 776535548
num_examples: 2000
- name: FreeForm_406
num_bytes: 787828032
num_examples: 2000
- name: FreeForm_407
num_bytes: 781632121
num_examples: 2000
- name: FreeForm_408
num_bytes: 779713575
num_examples: 2000
- name: FreeForm_409
num_bytes: 777632320
num_examples: 2000
- name: FreeForm_410
num_bytes: 784686001
num_examples: 2000
- name: FreeForm_411
num_bytes: 777486756
num_examples: 2000
- name: FreeForm_412
num_bytes: 772228765
num_examples: 2000
- name: FreeForm_413
num_bytes: 781168258
num_examples: 2000
- name: FreeForm_414
num_bytes: 783339876
num_examples: 2000
- name: FreeForm_415
num_bytes: 783962079
num_examples: 2000
- name: FreeForm_416
num_bytes: 775476703
num_examples: 2000
- name: FreeForm_417
num_bytes: 780115603
num_examples: 2000
- name: FreeForm_418
num_bytes: 774555481
num_examples: 2000
- name: FreeForm_419
num_bytes: 771392249
num_examples: 2000
- name: FreeForm_420
num_bytes: 781647966
num_examples: 2000
- name: FreeForm_421
num_bytes: 778569366
num_examples: 2000
- name: FreeForm_422
num_bytes: 777075807
num_examples: 2000
- name: FreeForm_423
num_bytes: 781344221
num_examples: 2000
- name: FreeForm_424
num_bytes: 778153065
num_examples: 2000
- name: FreeForm_425
num_bytes: 787571467
num_examples: 2000
- name: FreeForm_426
num_bytes: 777826298
num_examples: 2000
- name: FreeForm_427
num_bytes: 782019034
num_examples: 2000
- name: FreeForm_428
num_bytes: 784610271
num_examples: 2000
- name: FreeForm_429
num_bytes: 777021882
num_examples: 2000
- name: FreeForm_430
num_bytes: 786138346
num_examples: 2000
- name: FreeForm_431
num_bytes: 785894029
num_examples: 2000
- name: FreeForm_432
num_bytes: 779304938
num_examples: 2000
- name: FreeForm_433
num_bytes: 777969203
num_examples: 2000
- name: FreeForm_434
num_bytes: 773402571
num_examples: 2000
- name: FreeForm_435
num_bytes: 780152853
num_examples: 2000
- name: FreeForm_436
num_bytes: 771653351
num_examples: 2000
- name: FreeForm_437
num_bytes: 782926012
num_examples: 2000
- name: FreeForm_438
num_bytes: 777969831
num_examples: 2000
- name: FreeForm_439
num_bytes: 777857001
num_examples: 2000
- name: FreeForm_440
num_bytes: 779516719
num_examples: 2000
- name: FreeForm_441
num_bytes: 770860698
num_examples: 2000
- name: FreeForm_442
num_bytes: 778712706
num_examples: 2000
- name: FreeForm_443
num_bytes: 780437949
num_examples: 2000
- name: FreeForm_444
num_bytes: 778493719
num_examples: 2000
- name: FreeForm_445
num_bytes: 776648110
num_examples: 2000
- name: FreeForm_446
num_bytes: 769735495
num_examples: 2000
- name: FreeForm_447
num_bytes: 784614251
num_examples: 2000
- name: FreeForm_448
num_bytes: 771427209
num_examples: 2000
- name: FreeForm_449
num_bytes: 776166819
num_examples: 2000
- name: FreeForm_450
num_bytes: 779663498
num_examples: 2000
- name: FreeForm_451
num_bytes: 785115162
num_examples: 2000
- name: FreeForm_452
num_bytes: 777569106
num_examples: 2000
- name: FreeForm_453
num_bytes: 773227129
num_examples: 2000
- name: FreeForm_454
num_bytes: 784237299
num_examples: 2000
- name: FreeForm_455
num_bytes: 790367726
num_examples: 2000
- name: FreeForm_456
num_bytes: 776917540
num_examples: 2000
- name: FreeForm_457
num_bytes: 768702375
num_examples: 2000
- name: FreeForm_458
num_bytes: 770524982
num_examples: 2000
- name: FreeForm_459
num_bytes: 776194088
num_examples: 2000
- name: FreeForm_460
num_bytes: 775613539
num_examples: 2000
- name: FreeForm_461
num_bytes: 769735178
num_examples: 2000
- name: FreeForm_462
num_bytes: 777259156
num_examples: 2000
- name: FreeForm_463
num_bytes: 780338974
num_examples: 2000
- name: FreeForm_464
num_bytes: 774765369
num_examples: 2000
- name: FreeForm_465
num_bytes: 769747692
num_examples: 2000
- name: FreeForm_466
num_bytes: 778452223
num_examples: 2000
- name: FreeForm_467
num_bytes: 774984225
num_examples: 2000
- name: FreeForm_468
num_bytes: 785453416
num_examples: 2000
- name: FreeForm_469
num_bytes: 779253577
num_examples: 2000
- name: FreeForm_470
num_bytes: 780377502
num_examples: 2000
- name: FreeForm_471
num_bytes: 783077732
num_examples: 2000
- name: FreeForm_472
num_bytes: 785213723
num_examples: 2000
- name: FreeForm_473
num_bytes: 789489498
num_examples: 2000
- name: FreeForm_474
num_bytes: 779887855
num_examples: 2000
- name: FreeForm_475
num_bytes: 779109501
num_examples: 2000
- name: FreeForm_476
num_bytes: 777161502
num_examples: 2000
- name: FreeForm_477
num_bytes: 786138446
num_examples: 2000
- name: FreeForm_478
num_bytes: 780123030
num_examples: 2000
- name: FreeForm_479
num_bytes: 778752736
num_examples: 2000
- name: FreeForm_480
num_bytes: 781791235
num_examples: 2000
- name: FreeForm_481
num_bytes: 773626176
num_examples: 2000
- name: FreeForm_482
num_bytes: 777106374
num_examples: 2000
- name: FreeForm_483
num_bytes: 778648646
num_examples: 2000
- name: FreeForm_484
num_bytes: 773997685
num_examples: 2000
- name: FreeForm_485
num_bytes: 779349068
num_examples: 2000
- name: FreeForm_486
num_bytes: 777967164
num_examples: 2000
- name: FreeForm_487
num_bytes: 778535239
num_examples: 2000
- name: FreeForm_488
num_bytes: 773178194
num_examples: 2000
- name: FreeForm_489
num_bytes: 774663901
num_examples: 2000
- name: FreeForm_490
num_bytes: 769685602
num_examples: 2000
- name: FreeForm_491
num_bytes: 767328694
num_examples: 2000
- name: FreeForm_492
num_bytes: 782095429
num_examples: 2000
- name: FreeForm_493
num_bytes: 777160434
num_examples: 2000
- name: FreeForm_494
num_bytes: 772991887
num_examples: 2000
- name: FreeForm_495
num_bytes: 787353950
num_examples: 2000
- name: FreeForm_496
num_bytes: 781350713
num_examples: 2000
- name: FreeForm_497
num_bytes: 768853828
num_examples: 2000
- name: FreeForm_498
num_bytes: 784087657
num_examples: 2000
- name: FreeForm_499
num_bytes: 782456509
num_examples: 2000
- name: FreeForm_500
num_bytes: 777017570
num_examples: 2000
- name: FreeForm_501
num_bytes: 781913684
num_examples: 2000
- name: FreeForm_502
num_bytes: 773513583
num_examples: 2000
- name: FreeForm_503
num_bytes: 775880907
num_examples: 2000
- name: FreeForm_504
num_bytes: 776608994
num_examples: 2000
- name: FreeForm_505
num_bytes: 778612716
num_examples: 2000
- name: FreeForm_506
num_bytes: 782017623
num_examples: 2000
- name: FreeForm_507
num_bytes: 778617412
num_examples: 2000
- name: FreeForm_508
num_bytes: 775370779
num_examples: 2000
- name: FreeForm_509
num_bytes: 783112835
num_examples: 2000
- name: FreeForm_510
num_bytes: 789052066
num_examples: 2000
- name: FreeForm_511
num_bytes: 785606342
num_examples: 2000
- name: FreeForm_512
num_bytes: 774571155
num_examples: 2000
- name: FreeForm_513
num_bytes: 780106960
num_examples: 2000
- name: FreeForm_514
num_bytes: 785882120
num_examples: 2000
- name: FreeForm_515
num_bytes: 780484543
num_examples: 2000
- name: FreeForm_945
num_bytes: 774260507
num_examples: 2000
- name: FreeForm_819
num_bytes: 779239265
num_examples: 2000
- name: FreeForm_756
num_bytes: 780489081
num_examples: 2000
- name: FreeForm_693
num_bytes: 776579782
num_examples: 2000
- name: FreeForm_567
num_bytes: 776096080
num_examples: 2000
- name: FreeForm_516
num_bytes: 773344680
num_examples: 2000
- name: FreeForm_630
num_bytes: 783509886
num_examples: 2000
- name: FreeForm_694
num_bytes: 779623249
num_examples: 2000
- name: FreeForm_757
num_bytes: 767338389
num_examples: 2000
- name: FreeForm_882
num_bytes: 782415551
num_examples: 2000
- name: FreeForm_517
num_bytes: 783601914
num_examples: 2000
- name: FreeForm_568
num_bytes: 775282456
num_examples: 2000
- name: FreeForm_695
num_bytes: 783766613
num_examples: 2000
- name: FreeForm_883
num_bytes: 781822183
num_examples: 2000
- name: FreeForm_946
num_bytes: 780880266
num_examples: 2000
- name: FreeForm_758
num_bytes: 776398014
num_examples: 2000
- name: FreeForm_820
num_bytes: 778350650
num_examples: 2000
- name: FreeForm_518
num_bytes: 796168139
num_examples: 2000
- name: FreeForm_696
num_bytes: 776163508
num_examples: 2000
- name: FreeForm_631
num_bytes: 782324850
num_examples: 2000
- name: FreeForm_884
num_bytes: 778744072
num_examples: 2000
- name: FreeForm_947
num_bytes: 778033288
num_examples: 2000
- name: FreeForm_570
num_bytes: 787492732
num_examples: 2000
- name: FreeForm_759
num_bytes: 783435623
num_examples: 2000
- name: FreeForm_519
num_bytes: 775988743
num_examples: 2000
- name: FreeForm_821
num_bytes: 780246826
num_examples: 2000
- name: FreeForm_697
num_bytes: 780912390
num_examples: 2000
- name: FreeForm_885
num_bytes: 776117068
num_examples: 2000
- name: FreeForm_520
num_bytes: 771684897
num_examples: 2000
- name: FreeForm_632
num_bytes: 786944594
num_examples: 2000
- name: FreeForm_760
num_bytes: 776225469
num_examples: 2000
- name: FreeForm_571
num_bytes: 769574296
num_examples: 2000
- name: FreeForm_948
num_bytes: 770722985
num_examples: 2000
- name: FreeForm_886
num_bytes: 787147597
num_examples: 2000
- name: FreeForm_822
num_bytes: 775358530
num_examples: 2000
- name: FreeForm_698
num_bytes: 779112403
num_examples: 2000
- name: FreeForm_521
num_bytes: 781760945
num_examples: 2000
- name: FreeForm_761
num_bytes: 770056124
num_examples: 2000
- name: FreeForm_633
num_bytes: 781835260
num_examples: 2000
- name: FreeForm_949
num_bytes: 776230854
num_examples: 2000
- name: FreeForm_823
num_bytes: 781883671
num_examples: 2000
- name: FreeForm_572
num_bytes: 768804901
num_examples: 2000
- name: FreeForm_699
num_bytes: 779957156
num_examples: 2000
- name: FreeForm_522
num_bytes: 775135129
num_examples: 2000
- name: FreeForm_762
num_bytes: 776447051
num_examples: 2000
- name: FreeForm_950
num_bytes: 781469625
num_examples: 2000
- name: FreeForm_824
num_bytes: 780508400
num_examples: 2000
- name: FreeForm_700
num_bytes: 777369380
num_examples: 2000
- name: FreeForm_523
num_bytes: 785017217
num_examples: 2000
- name: FreeForm_634
num_bytes: 782217304
num_examples: 2000
- name: FreeForm_763
num_bytes: 785472053
num_examples: 2000
- name: FreeForm_951
num_bytes: 771779911
num_examples: 2000
- name: FreeForm_889
num_bytes: 775639275
num_examples: 2000
- name: FreeForm_701
num_bytes: 783031149
num_examples: 2000
- name: FreeForm_635
num_bytes: 779398869
num_examples: 2000
- name: FreeForm_764
num_bytes: 770298257
num_examples: 2000
- name: FreeForm_952
num_bytes: 778449275
num_examples: 2000
- name: FreeForm_525
num_bytes: 773918245
num_examples: 2000
- name: FreeForm_890
num_bytes: 775934365
num_examples: 2000
- name: FreeForm_636
num_bytes: 779227692
num_examples: 2000
- name: FreeForm_826
num_bytes: 769907967
num_examples: 2000
- name: FreeForm_765
num_bytes: 784297610
num_examples: 2000
- name: FreeForm_953
num_bytes: 774721939
num_examples: 2000
- name: FreeForm_526
num_bytes: 779985761
num_examples: 2000
- name: FreeForm_576
num_bytes: 770608243
num_examples: 2000
- name: FreeForm_637
num_bytes: 785632025
num_examples: 2000
- name: FreeForm_891
num_bytes: 777053254
num_examples: 2000
- name: FreeForm_703
num_bytes: 788237995
num_examples: 2000
- name: FreeForm_527
num_bytes: 776190530
num_examples: 2000
- name: FreeForm_704
num_bytes: 789219802
num_examples: 2000
- name: FreeForm_577
num_bytes: 772767960
num_examples: 2000
- name: FreeForm_828
num_bytes: 775337334
num_examples: 2000
- name: FreeForm_767
num_bytes: 776371370
num_examples: 2000
- name: FreeForm_892
num_bytes: 784395260
num_examples: 2000
- name: FreeForm_955
num_bytes: 780198276
num_examples: 2000
- name: FreeForm_528
num_bytes: 786475368
num_examples: 2000
- name: FreeForm_705
num_bytes: 779637110
num_examples: 2000
- name: FreeForm_768
num_bytes: 778165939
num_examples: 2000
- name: FreeForm_829
num_bytes: 775226242
num_examples: 2000
- name: FreeForm_639
num_bytes: 776620565
num_examples: 2000
- name: FreeForm_893
num_bytes: 776777875
num_examples: 2000
- name: FreeForm_706
num_bytes: 776888369
num_examples: 2000
- name: FreeForm_769
num_bytes: 773177470
num_examples: 2000
- name: FreeForm_640
num_bytes: 775416285
num_examples: 2000
- name: FreeForm_830
num_bytes: 773121368
num_examples: 2000
- name: FreeForm_894
num_bytes: 771005496
num_examples: 2000
- name: FreeForm_957
num_bytes: 779298875
num_examples: 2000
- name: FreeForm_707
num_bytes: 786290237
num_examples: 2000
- name: FreeForm_530
num_bytes: 775067308
num_examples: 2000
- name: FreeForm_770
num_bytes: 781455541
num_examples: 2000
- name: FreeForm_641
num_bytes: 788867090
num_examples: 2000
- name: FreeForm_831
num_bytes: 777292141
num_examples: 2000
- name: FreeForm_958
num_bytes: 781154507
num_examples: 2000
- name: FreeForm_895
num_bytes: 781470066
num_examples: 2000
- name: FreeForm_578
num_bytes: 774956592
num_examples: 2000
- name: FreeForm_642
num_bytes: 782036346
num_examples: 2000
- name: FreeForm_832
num_bytes: 778161296
num_examples: 2000
- name: FreeForm_959
num_bytes: 785312871
num_examples: 2000
- name: FreeForm_896
num_bytes: 782183638
num_examples: 2000
- name: FreeForm_532
num_bytes: 782334295
num_examples: 2000
- name: FreeForm_579
num_bytes: 782162008
num_examples: 2000
- name: FreeForm_772
num_bytes: 783149924
num_examples: 2000
- name: FreeForm_897
num_bytes: 782736534
num_examples: 2000
- name: FreeForm_833
num_bytes: 781833165
num_examples: 2000
- name: FreeForm_533
num_bytes: 780836381
num_examples: 2000
- name: FreeForm_580
num_bytes: 779785922
num_examples: 2000
- name: FreeForm_644
num_bytes: 780852601
num_examples: 2000
- name: FreeForm_898
num_bytes: 782375626
num_examples: 2000
- name: FreeForm_834
num_bytes: 780238790
num_examples: 2000
- name: FreeForm_534
num_bytes: 787102239
num_examples: 2000
- name: FreeForm_774
num_bytes: 783405628
num_examples: 2000
- name: FreeForm_962
num_bytes: 783536879
num_examples: 2000
- name: FreeForm_835
num_bytes: 782146637
num_examples: 2000
- name: FreeForm_899
num_bytes: 777879403
num_examples: 2000
- name: FreeForm_581
num_bytes: 776043510
num_examples: 2000
- name: FreeForm_645
num_bytes: 777671003
num_examples: 2000
- name: FreeForm_535
num_bytes: 783503960
num_examples: 2000
- name: FreeForm_711
num_bytes: 786589601
num_examples: 2000
- name: FreeForm_775
num_bytes: 789032807
num_examples: 2000
- name: FreeForm_536
num_bytes: 780048605
num_examples: 2000
- name: FreeForm_836
num_bytes: 785559140
num_examples: 2000
- name: FreeForm_963
num_bytes: 768897706
num_examples: 2000
- name: FreeForm_900
num_bytes: 775545516
num_examples: 2000
- name: FreeForm_582
num_bytes: 776768083
num_examples: 2000
- name: FreeForm_537
num_bytes: 778920774
num_examples: 2000
- name: FreeForm_647
num_bytes: 789247154
num_examples: 2000
- name: FreeForm_837
num_bytes: 770927735
num_examples: 2000
- name: FreeForm_964
num_bytes: 777374122
num_examples: 2000
- name: FreeForm_583
num_bytes: 771971182
num_examples: 2000
- name: FreeForm_648
num_bytes: 790481101
num_examples: 2000
- name: FreeForm_714
num_bytes: 782357883
num_examples: 2000
- name: FreeForm_902
num_bytes: 790009775
num_examples: 2000
- name: FreeForm_966
num_bytes: 772852829
num_examples: 2000
- name: FreeForm_839
num_bytes: 774956755
num_examples: 2000
- name: FreeForm_840
num_bytes: 779381412
num_examples: 2000
- name: FreeForm_780
num_bytes: 782526085
num_examples: 2000
- name: FreeForm_905
num_bytes: 782008696
num_examples: 2000
- name: FreeForm_781
num_bytes: 777036517
num_examples: 2000
- name: FreeForm_542
num_bytes: 773384990
num_examples: 2000
- name: FreeForm_717
num_bytes: 787188315
num_examples: 2000
- name: FreeForm_587
num_bytes: 778047238
num_examples: 2000
- name: FreeForm_906
num_bytes: 782238585
num_examples: 2000
- name: FreeForm_782
num_bytes: 773185949
num_examples: 2000
- name: FreeForm_543
num_bytes: 780021022
num_examples: 2000
- name: FreeForm_970
num_bytes: 770399749
num_examples: 2000
- name: FreeForm_653
num_bytes: 779105454
num_examples: 2000
- name: FreeForm_907
num_bytes: 786301923
num_examples: 2000
- name: FreeForm_843
num_bytes: 771553141
num_examples: 2000
- name: FreeForm_588
num_bytes: 772966947
num_examples: 2000
- name: FreeForm_718
num_bytes: 781844273
num_examples: 2000
- name: FreeForm_783
num_bytes: 773562940
num_examples: 2000
- name: FreeForm_544
num_bytes: 786251287
num_examples: 2000
- name: FreeForm_971
num_bytes: 786415868
num_examples: 2000
- name: FreeForm_908
num_bytes: 775910532
num_examples: 2000
- name: FreeForm_654
num_bytes: 783017867
num_examples: 2000
- name: FreeForm_844
num_bytes: 775618340
num_examples: 2000
- name: FreeForm_719
num_bytes: 790544891
num_examples: 2000
- name: FreeForm_784
num_bytes: 780210834
num_examples: 2000
- name: FreeForm_545
num_bytes: 785852168
num_examples: 2000
- name: FreeForm_972
num_bytes: 780954023
num_examples: 2000
- name: FreeForm_909
num_bytes: 776653719
num_examples: 2000
- name: FreeForm_845
num_bytes: 781950032
num_examples: 2000
- name: FreeForm_785
num_bytes: 785226734
num_examples: 2000
- name: FreeForm_546
num_bytes: 777542887
num_examples: 2000
- name: FreeForm_656
num_bytes: 783321325
num_examples: 2000
- name: FreeForm_973
num_bytes: 777455767
num_examples: 2000
- name: FreeForm_547
num_bytes: 783780578
num_examples: 2000
- name: FreeForm_592
num_bytes: 787979205
num_examples: 2000
- name: FreeForm_657
num_bytes: 779575634
num_examples: 2000
- name: FreeForm_787
num_bytes: 775081104
num_examples: 2000
- name: FreeForm_847
num_bytes: 772847884
num_examples: 2000
- name: FreeForm_593
num_bytes: 786234512
num_examples: 2000
- name: FreeForm_848
num_bytes: 780944350
num_examples: 2000
- name: FreeForm_788
num_bytes: 778812403
num_examples: 2000
- name: FreeForm_723
num_bytes: 774864464
num_examples: 2000
- name: FreeForm_659
num_bytes: 777846993
num_examples: 2000
- name: FreeForm_849
num_bytes: 786936392
num_examples: 2000
- name: FreeForm_594
num_bytes: 778549444
num_examples: 2000
- name: FreeForm_789
num_bytes: 768423047
num_examples: 2000
- name: FreeForm_913
num_bytes: 779432172
num_examples: 2000
- name: FreeForm_660
num_bytes: 778422276
num_examples: 2000
- name: FreeForm_595
num_bytes: 782427799
num_examples: 2000
- name: FreeForm_790
num_bytes: 780306946
num_examples: 2000
- name: FreeForm_977
num_bytes: 783548441
num_examples: 2000
- name: FreeForm_914
num_bytes: 785748185
num_examples: 2000
- name: FreeForm_851
num_bytes: 773099412
num_examples: 2000
- name: FreeForm_552
num_bytes: 775631428
num_examples: 2000
- name: FreeForm_597
num_bytes: 781461768
num_examples: 2000
- name: FreeForm_852
num_bytes: 786171837
num_examples: 2000
- name: FreeForm_662
num_bytes: 776535039
num_examples: 2000
- name: FreeForm_726
num_bytes: 780258276
num_examples: 2000
- name: FreeForm_553
num_bytes: 774446361
num_examples: 2000
- name: FreeForm_598
num_bytes: 776165992
num_examples: 2000
- name: FreeForm_853
num_bytes: 775913169
num_examples: 2000
- name: FreeForm_916
num_bytes: 770512905
num_examples: 2000
- name: FreeForm_663
num_bytes: 779178273
num_examples: 2000
- name: FreeForm_979
num_bytes: 785316308
num_examples: 2000
- name: FreeForm_554
num_bytes: 779043744
num_examples: 2000
- name: FreeForm_555
num_bytes: 774698579
num_examples: 2000
- name: FreeForm_600
num_bytes: 779573136
num_examples: 2000
- name: FreeForm_556
num_bytes: 769993384
num_examples: 2000
- name: FreeForm_981
num_bytes: 775981807
num_examples: 2000
- name: FreeForm_918
num_bytes: 770640072
num_examples: 2000
- name: FreeForm_855
num_bytes: 770971099
num_examples: 2000
- name: FreeForm_601
num_bytes: 783485267
num_examples: 2000
- name: FreeForm_557
num_bytes: 781316695
num_examples: 2000
- name: FreeForm_982
num_bytes: 784171648
num_examples: 2000
- name: FreeForm_919
num_bytes: 781033588
num_examples: 2000
- name: FreeForm_666
num_bytes: 780033756
num_examples: 2000
- name: FreeForm_730
num_bytes: 780928758
num_examples: 2000
- name: FreeForm_558
num_bytes: 773762359
num_examples: 2000
- name: FreeForm_796
num_bytes: 775857969
num_examples: 2000
- name: FreeForm_920
num_bytes: 779264778
num_examples: 2000
- name: FreeForm_603
num_bytes: 779490679
num_examples: 2000
- name: FreeForm_797
num_bytes: 789388543
num_examples: 2000
- name: FreeForm_560
num_bytes: 782833902
num_examples: 2000
- name: FreeForm_798
num_bytes: 782076880
num_examples: 2000
- name: FreeForm_799
num_bytes: 785498285
num_examples: 2000
- name: FreeForm_605
num_bytes: 781535181
num_examples: 2000
- name: FreeForm_986
num_bytes: 784572282
num_examples: 2000
- name: FreeForm_987
num_bytes: 777514807
num_examples: 2000
- name: FreeForm_735
num_bytes: 776604012
num_examples: 2000
- name: FreeForm_924
num_bytes: 781738136
num_examples: 2000
- name: FreeForm_801
num_bytes: 775343161
num_examples: 2000
- name: FreeForm_988
num_bytes: 771394272
num_examples: 2000
- name: FreeForm_607
num_bytes: 784801310
num_examples: 2000
- name: FreeForm_736
num_bytes: 783919547
num_examples: 2000
- name: FreeForm_672
num_bytes: 781282095
num_examples: 2000
- name: FreeForm_925
num_bytes: 779652256
num_examples: 2000
- name: FreeForm_564
num_bytes: 773410204
num_examples: 2000
- name: FreeForm_608
num_bytes: 781207172
num_examples: 2000
- name: FreeForm_737
num_bytes: 780040754
num_examples: 2000
- name: FreeForm_673
num_bytes: 777972399
num_examples: 2000
- name: FreeForm_803
num_bytes: 779807395
num_examples: 2000
- name: FreeForm_926
num_bytes: 783442993
num_examples: 2000
- name: FreeForm_863
num_bytes: 774852302
num_examples: 2000
- name: FreeForm_738
num_bytes: 776190253
num_examples: 2000
- name: FreeForm_674
num_bytes: 781090727
num_examples: 2000
- name: FreeForm_804
num_bytes: 772326881
num_examples: 2000
- name: FreeForm_927
num_bytes: 775964176
num_examples: 2000
- name: FreeForm_864
num_bytes: 781520806
num_examples: 2000
- name: FreeForm_675
num_bytes: 770042796
num_examples: 2000
- name: FreeForm_805
num_bytes: 784368593
num_examples: 2000
- name: FreeForm_611
num_bytes: 782309242
num_examples: 2000
- name: FreeForm_928
num_bytes: 780370958
num_examples: 2000
- name: FreeForm_676
num_bytes: 777603931
num_examples: 2000
- name: FreeForm_865
num_bytes: 783734528
num_examples: 2000
- name: FreeForm_806
num_bytes: 779643778
num_examples: 2000
- name: FreeForm_929
num_bytes: 783765505
num_examples: 2000
- name: FreeForm_993
num_bytes: 774611125
num_examples: 2000
- name: FreeForm_866
num_bytes: 783029894
num_examples: 2000
- name: FreeForm_678
num_bytes: 770092785
num_examples: 2000
- name: FreeForm_930
num_bytes: 780511663
num_examples: 2000
- name: FreeForm_994
num_bytes: 780210180
num_examples: 2000
- name: FreeForm_867
num_bytes: 774361780
num_examples: 2000
- name: FreeForm_807
num_bytes: 778849248
num_examples: 2000
- name: FreeForm_1011
num_bytes: 781122711
num_examples: 2000
- name: FreeForm_931
num_bytes: 778070968
num_examples: 2000
- name: FreeForm_808
num_bytes: 782039889
num_examples: 2000
- name: FreeForm_743
num_bytes: 782929244
num_examples: 2000
- name: FreeForm_995
num_bytes: 781491448
num_examples: 2000
- name: FreeForm_809
num_bytes: 779201674
num_examples: 2000
- name: FreeForm_1012
num_bytes: 784947632
num_examples: 2000
- name: FreeForm_869
num_bytes: 777625531
num_examples: 2000
- name: FreeForm_810
num_bytes: 772386029
num_examples: 2000
- name: FreeForm_616
num_bytes: 782099041
num_examples: 2000
- name: FreeForm_870
num_bytes: 771586766
num_examples: 2000
- name: FreeForm_933
num_bytes: 777819645
num_examples: 2000
- name: FreeForm_811
num_bytes: 773709965
num_examples: 2000
- name: FreeForm_617
num_bytes: 777775291
num_examples: 2000
- name: FreeForm_1014
num_bytes: 776626214
num_examples: 2000
- name: FreeForm_934
num_bytes: 780076532
num_examples: 2000
- name: FreeForm_871
num_bytes: 772742042
num_examples: 2000
- name: FreeForm_682
num_bytes: 772864370
num_examples: 2000
- name: FreeForm_812
num_bytes: 779728479
num_examples: 2000
- name: FreeForm_1015
num_bytes: 776188407
num_examples: 2000
- name: FreeForm_747
num_bytes: 776912983
num_examples: 2000
- name: FreeForm_683
num_bytes: 773662766
num_examples: 2000
- name: FreeForm_872
num_bytes: 781095791
num_examples: 2000
- name: FreeForm_1016
num_bytes: 773422235
num_examples: 2000
- name: FreeForm_619
num_bytes: 781384539
num_examples: 2000
- name: FreeForm_748
num_bytes: 794178596
num_examples: 2000
- name: FreeForm_996
num_bytes: 776159757
num_examples: 2000
- name: FreeForm_936
num_bytes: 783195036
num_examples: 2000
- name: FreeForm_873
num_bytes: 783526678
num_examples: 2000
- name: FreeForm_814
num_bytes: 784020960
num_examples: 2000
- name: FreeForm_620
num_bytes: 777669159
num_examples: 2000
- name: FreeForm_937
num_bytes: 784288911
num_examples: 2000
- name: FreeForm_874
num_bytes: 779265520
num_examples: 2000
- name: FreeForm_815
num_bytes: 772783609
num_examples: 2000
- name: FreeForm_685
num_bytes: 776856277
num_examples: 2000
- name: FreeForm_750
num_bytes: 787248405
num_examples: 2000
- name: FreeForm_998
num_bytes: 780476434
num_examples: 2000
- name: FreeForm_938
num_bytes: 773418408
num_examples: 2000
- name: FreeForm_816
num_bytes: 781409447
num_examples: 2000
- name: FreeForm_622
num_bytes: 784580108
num_examples: 2000
- name: FreeForm_751
num_bytes: 777930957
num_examples: 2000
- name: FreeForm_876
num_bytes: 776360852
num_examples: 2000
- name: FreeForm_939
num_bytes: 777865106
num_examples: 2000
- name: FreeForm_817
num_bytes: 780160515
num_examples: 2000
- name: FreeForm_752
num_bytes: 777670340
num_examples: 2000
- name: FreeForm_1020
num_bytes: 775927785
num_examples: 2000
- name: FreeForm_624
num_bytes: 784691651
num_examples: 2000
- name: FreeForm_1001
num_bytes: 784203264
num_examples: 2000
- name: FreeForm_1071
num_bytes: 785925715
num_examples: 2000
- name: FreeForm_1072
num_bytes: 774079517
num_examples: 2000
- name: FreeForm_1022
num_bytes: 784309204
num_examples: 2000
- name: FreeForm_755
num_bytes: 779965249
num_examples: 2000
- name: FreeForm_626
num_bytes: 778811345
num_examples: 2000
- name: FreeForm_690
num_bytes: 781765116
num_examples: 2000
- name: FreeForm_1003
num_bytes: 780150305
num_examples: 2000
- name: FreeForm_1023
num_bytes: 771413314
num_examples: 2000
- name: FreeForm_880
num_bytes: 785551287
num_examples: 2000
- name: FreeForm_627
num_bytes: 790354930
num_examples: 2000
- name: FreeForm_1004
num_bytes: 782295953
num_examples: 2000
- name: FreeForm_1074
num_bytes: 769854196
num_examples: 2000
- name: FreeForm_1024
num_bytes: 775492572
num_examples: 2000
- name: FreeForm_944
num_bytes: 785364115
num_examples: 2000
- name: FreeForm_881
num_bytes: 782271712
num_examples: 2000
- name: FreeForm_1135
num_bytes: 769193624
num_examples: 2000
- name: FreeForm_692
num_bytes: 783918813
num_examples: 2000
- name: FreeForm_1075
num_bytes: 776652655
num_examples: 2000
- name: FreeForm_1025
num_bytes: 780779154
num_examples: 2000
- name: FreeForm_1197
num_bytes: 779317101
num_examples: 2000
- name: FreeForm_1260
num_bytes: 762208379
num_examples: 2000
- name: FreeForm_629
num_bytes: 777468540
num_examples: 2000
- name: FreeForm_1136
num_bytes: 775585163
num_examples: 2000
- name: FreeForm_1006
num_bytes: 779937630
num_examples: 2000
- name: FreeForm_1261
num_bytes: 783256566
num_examples: 2000
- name: FreeForm_1198
num_bytes: 771359382
num_examples: 2000
- name: FreeForm_1386
num_bytes: 772649046
num_examples: 2000
- name: FreeForm_1137
num_bytes: 780582530
num_examples: 2000
- name: FreeForm_1007
num_bytes: 774445784
num_examples: 2000
- name: FreeForm_1077
num_bytes: 775372762
num_examples: 2000
- name: FreeForm_1262
num_bytes: 778299396
num_examples: 2000
- name: FreeForm_1324
num_bytes: 775911927
num_examples: 2000
- name: FreeForm_1387
num_bytes: 773904836
num_examples: 2000
- name: FreeForm_1138
num_bytes: 773720801
num_examples: 2000
- name: FreeForm_1449
num_bytes: 775798702
num_examples: 2000
- name: FreeForm_1200
num_bytes: 774570757
num_examples: 2000
- name: FreeForm_1388
num_bytes: 772318981
num_examples: 2000
- name: FreeForm_1078
num_bytes: 772713822
num_examples: 2000
- name: FreeForm_1139
num_bytes: 775735549
num_examples: 2000
- name: FreeForm_1450
num_bytes: 769208143
num_examples: 2000
- name: FreeForm_1326
num_bytes: 777633838
num_examples: 2000
- name: FreeForm_1201
num_bytes: 774915951
num_examples: 2000
- name: FreeForm_1389
num_bytes: 770498447
num_examples: 2000
- name: FreeForm_1264
num_bytes: 776260201
num_examples: 2000
- name: FreeForm_1140
num_bytes: 786338430
num_examples: 2000
- name: FreeForm_1451
num_bytes: 775905007
num_examples: 2000
- name: FreeForm_1327
num_bytes: 767215517
num_examples: 2000
- name: FreeForm_1202
num_bytes: 776907746
num_examples: 2000
- name: FreeForm_1030
num_bytes: 770330894
num_examples: 2000
- name: FreeForm_1390
num_bytes: 773078672
num_examples: 2000
- name: FreeForm_1080
num_bytes: 776994960
num_examples: 2000
- name: FreeForm_1141
num_bytes: 783741241
num_examples: 2000
- name: FreeForm_1452
num_bytes: 775233498
num_examples: 2000
- name: FreeForm_1328
num_bytes: 779688855
num_examples: 2000
- name: FreeForm_1203
num_bytes: 778731467
num_examples: 2000
- name: FreeForm_1391
num_bytes: 778148236
num_examples: 2000
- name: FreeForm_1142
num_bytes: 778592252
num_examples: 2000
- name: FreeForm_1329
num_bytes: 780980202
num_examples: 2000
- name: FreeForm_1032
num_bytes: 765832292
num_examples: 2000
- name: FreeForm_1392
num_bytes: 778228973
num_examples: 2000
- name: FreeForm_1143
num_bytes: 779686958
num_examples: 2000
- name: FreeForm_1266
num_bytes: 780267266
num_examples: 2000
- name: FreeForm_1454
num_bytes: 771388767
num_examples: 2000
- name: FreeForm_1033
num_bytes: 785405397
num_examples: 2000
- name: FreeForm_1331
num_bytes: 773303535
num_examples: 2000
- name: FreeForm_1455
num_bytes: 772270994
num_examples: 2000
- name: FreeForm_1084
num_bytes: 780937120
num_examples: 2000
- name: FreeForm_1394
num_bytes: 779912517
num_examples: 2000
- name: FreeForm_1034
num_bytes: 785037979
num_examples: 2000
- name: FreeForm_1332
num_bytes: 775214220
num_examples: 2000
- name: FreeForm_1456
num_bytes: 773902347
num_examples: 2000
- name: FreeForm_1268
num_bytes: 776083060
num_examples: 2000
- name: FreeForm_1207
num_bytes: 775083925
num_examples: 2000
- name: FreeForm_1395
num_bytes: 778627455
num_examples: 2000
- name: FreeForm_1035
num_bytes: 780850165
num_examples: 2000
- name: FreeForm_1333
num_bytes: 776771157
num_examples: 2000
- name: FreeForm_1457
num_bytes: 771241476
num_examples: 2000
- name: FreeForm_1086
num_bytes: 769890365
num_examples: 2000
- name: FreeForm_1147
num_bytes: 776637729
num_examples: 2000
- name: FreeForm_1396
num_bytes: 777785894
num_examples: 2000
- name: FreeForm_1334
num_bytes: 784289993
num_examples: 2000
- name: FreeForm_1458
num_bytes: 776626943
num_examples: 2000
- name: FreeForm_1087
num_bytes: 781254663
num_examples: 2000
- name: FreeForm_1148
num_bytes: 773662440
num_examples: 2000
- name: FreeForm_1397
num_bytes: 780426125
num_examples: 2000
- name: FreeForm_1335
num_bytes: 770894343
num_examples: 2000
- name: FreeForm_1459
num_bytes: 770376933
num_examples: 2000
- name: FreeForm_1271
num_bytes: 781843080
num_examples: 2000
- name: FreeForm_1149
num_bytes: 776995200
num_examples: 2000
- name: FreeForm_1210
num_bytes: 772949457
num_examples: 2000
- name: FreeForm_1150
num_bytes: 778048049
num_examples: 2000
- name: FreeForm_1272
num_bytes: 770433073
num_examples: 2000
- name: FreeForm_1461
num_bytes: 772615250
num_examples: 2000
- name: FreeForm_1151
num_bytes: 776289624
num_examples: 2000
- name: FreeForm_1273
num_bytes: 770953464
num_examples: 2000
- name: FreeForm_1212
num_bytes: 780575601
num_examples: 2000
- name: FreeForm_1090
num_bytes: 770057581
num_examples: 2000
- name: FreeForm_1400
num_bytes: 775894925
num_examples: 2000
- name: FreeForm_1152
num_bytes: 774100579
num_examples: 2000
- name: FreeForm_1274
num_bytes: 773088951
num_examples: 2000
- name: FreeForm_1091
num_bytes: 778261716
num_examples: 2000
- name: FreeForm_1401
num_bytes: 769327493
num_examples: 2000
- name: FreeForm_1153
num_bytes: 769264686
num_examples: 2000
- name: FreeForm_1275
num_bytes: 773463433
num_examples: 2000
- name: FreeForm_1214
num_bytes: 773727975
num_examples: 2000
- name: FreeForm_1464
num_bytes: 770724265
num_examples: 2000
- name: FreeForm_1340
num_bytes: 770246906
num_examples: 2000
- name: FreeForm_1043
num_bytes: 775871564
num_examples: 2000
- name: FreeForm_1276
num_bytes: 779678508
num_examples: 2000
- name: FreeForm_1403
num_bytes: 785594363
num_examples: 2000
- name: FreeForm_1215
num_bytes: 773708158
num_examples: 2000
- name: FreeForm_1093
num_bytes: 781403783
num_examples: 2000
- name: FreeForm_1044
num_bytes: 782580437
num_examples: 2000
- name: FreeForm_1277
num_bytes: 768784213
num_examples: 2000
- name: FreeForm_1216
num_bytes: 776703123
num_examples: 2000
- name: FreeForm_1094
num_bytes: 782325753
num_examples: 2000
- name: FreeForm_1278
num_bytes: 778353689
num_examples: 2000
- name: FreeForm_1217
num_bytes: 777963465
num_examples: 2000
- name: FreeForm_1405
num_bytes: 775831012
num_examples: 2000
- name: FreeForm_1467
num_bytes: 773903809
num_examples: 2000
- name: FreeForm_1157
num_bytes: 780808451
num_examples: 2000
- name: FreeForm_1406
num_bytes: 770037870
num_examples: 2000
- name: FreeForm_1343
num_bytes: 779944703
num_examples: 2000
- name: FreeForm_1218
num_bytes: 775185803
num_examples: 2000
- name: FreeForm_1468
num_bytes: 774969577
num_examples: 2000
- name: FreeForm_1158
num_bytes: 771236817
num_examples: 2000
- name: FreeForm_1407
num_bytes: 777805253
num_examples: 2000
- name: FreeForm_1344
num_bytes: 772506110
num_examples: 2000
- name: FreeForm_1047
num_bytes: 771668726
num_examples: 2000
- name: FreeForm_1219
num_bytes: 774695485
num_examples: 2000
- name: FreeForm_1469
num_bytes: 773152862
num_examples: 2000
- name: FreeForm_1345
num_bytes: 774356861
num_examples: 2000
- name: FreeForm_1281
num_bytes: 769397422
num_examples: 2000
- name: FreeForm_1220
num_bytes: 777646479
num_examples: 2000
- name: FreeForm_1048
num_bytes: 774170661
num_examples: 2000
- name: FreeForm_1098
num_bytes: 782169769
num_examples: 2000
- name: FreeForm_1160
num_bytes: 780634390
num_examples: 2000
- name: FreeForm_1346
num_bytes: 774179081
num_examples: 2000
- name: FreeForm_1282
num_bytes: 772417081
num_examples: 2000
- name: FreeForm_1471
num_bytes: 772795661
num_examples: 2000
- name: FreeForm_1410
num_bytes: 774443858
num_examples: 2000
- name: FreeForm_1472
num_bytes: 779405331
num_examples: 2000
- name: FreeForm_1284
num_bytes: 782471252
num_examples: 2000
- name: FreeForm_1348
num_bytes: 778145756
num_examples: 2000
- name: FreeForm_1223
num_bytes: 783097614
num_examples: 2000
- name: FreeForm_1163
num_bytes: 786078851
num_examples: 2000
- name: FreeForm_1473
num_bytes: 779354512
num_examples: 2000
- name: FreeForm_1285
num_bytes: 782743833
num_examples: 2000
- name: FreeForm_1349
num_bytes: 773866766
num_examples: 2000
- name: FreeForm_1101
num_bytes: 780064474
num_examples: 2000
- name: FreeForm_1224
num_bytes: 779713701
num_examples: 2000
- name: FreeForm_1164
num_bytes: 785826513
num_examples: 2000
- name: FreeForm_1413
num_bytes: 771270626
num_examples: 2000
- name: FreeForm_1225
num_bytes: 789341153
num_examples: 2000
- name: FreeForm_1286
num_bytes: 783040862
num_examples: 2000
- name: FreeForm_1165
num_bytes: 782794133
num_examples: 2000
- name: FreeForm_1414
num_bytes: 776277188
num_examples: 2000
- name: FreeForm_1053
num_bytes: 775020295
num_examples: 2000
- name: FreeForm_1287
num_bytes: 774282496
num_examples: 2000
- name: FreeForm_1351
num_bytes: 777217979
num_examples: 2000
- name: FreeForm_1166
num_bytes: 782196546
num_examples: 2000
- name: FreeForm_1415
num_bytes: 773801330
num_examples: 2000
- name: FreeForm_1227
num_bytes: 781777755
num_examples: 2000
- name: FreeForm_1054
num_bytes: 770350768
num_examples: 2000
- name: FreeForm_1167
num_bytes: 772643185
num_examples: 2000
- name: FreeForm_1288
num_bytes: 786282948
num_examples: 2000
- name: FreeForm_1476
num_bytes: 781887411
num_examples: 2000
- name: FreeForm_1416
num_bytes: 785772864
num_examples: 2000
- name: FreeForm_1228
num_bytes: 782310719
num_examples: 2000
- name: FreeForm_1168
num_bytes: 778463665
num_examples: 2000
- name: FreeForm_1353
num_bytes: 774098738
num_examples: 2000
- name: FreeForm_1477
num_bytes: 770072431
num_examples: 2000
- name: FreeForm_1105
num_bytes: 780584723
num_examples: 2000
- name: FreeForm_1417
num_bytes: 770555258
num_examples: 2000
- name: FreeForm_1229
num_bytes: 766386559
num_examples: 2000
- name: FreeForm_1056
num_bytes: 777845089
num_examples: 2000
- name: FreeForm_1354
num_bytes: 776296757
num_examples: 2000
- name: FreeForm_1230
num_bytes: 768761136
num_examples: 2000
- name: FreeForm_1057
num_bytes: 770679050
num_examples: 2000
- name: FreeForm_1170
num_bytes: 784981283
num_examples: 2000
- name: FreeForm_1291
num_bytes: 775560769
num_examples: 2000
- name: FreeForm_1107
num_bytes: 774133706
num_examples: 2000
- name: FreeForm_1419
num_bytes: 772063671
num_examples: 2000
- name: FreeForm_1479
num_bytes: 768129541
num_examples: 2000
- name: FreeForm_1231
num_bytes: 777992198
num_examples: 2000
- name: FreeForm_1058
num_bytes: 778022181
num_examples: 2000
- name: FreeForm_1171
num_bytes: 774484635
num_examples: 2000
- name: FreeForm_1420
num_bytes: 784674844
num_examples: 2000
- name: FreeForm_1232
num_bytes: 774283767
num_examples: 2000
- name: FreeForm_1059
num_bytes: 770082646
num_examples: 2000
- name: FreeForm_1293
num_bytes: 777774009
num_examples: 2000
- name: FreeForm_1357
num_bytes: 782812482
num_examples: 2000
- name: FreeForm_1481
num_bytes: 772278059
num_examples: 2000
- name: FreeForm_1060
num_bytes: 780207820
num_examples: 2000
- name: FreeForm_1294
num_bytes: 772434873
num_examples: 2000
- name: FreeForm_1173
num_bytes: 772136852
num_examples: 2000
- name: FreeForm_1358
num_bytes: 779244683
num_examples: 2000
- name: FreeForm_1061
num_bytes: 783705532
num_examples: 2000
- name: FreeForm_1234
num_bytes: 769879163
num_examples: 2000
- name: FreeForm_1295
num_bytes: 778394871
num_examples: 2000
- name: FreeForm_1359
num_bytes: 776358524
num_examples: 2000
- name: FreeForm_1062
num_bytes: 772853747
num_examples: 2000
- name: FreeForm_1296
num_bytes: 772331030
num_examples: 2000
- name: FreeForm_1297
num_bytes: 772141225
num_examples: 2000
- name: FreeForm_1112
num_bytes: 771006309
num_examples: 2000
- name: FreeForm_1484
num_bytes: 775157027
num_examples: 2000
- name: FreeForm_1064
num_bytes: 777683941
num_examples: 2000
- name: FreeForm_1298
num_bytes: 777662981
num_examples: 2000
- name: FreeForm_1113
num_bytes: 773454098
num_examples: 2000
- name: FreeForm_1177
num_bytes: 773276736
num_examples: 2000
- name: FreeForm_1362
num_bytes: 776932286
num_examples: 2000
- name: FreeForm_1485
num_bytes: 782890005
num_examples: 2000
- name: FreeForm_1363
num_bytes: 768839554
num_examples: 2000
- name: FreeForm_1238
num_bytes: 775834402
num_examples: 2000
- name: FreeForm_1066
num_bytes: 773638453
num_examples: 2000
- name: FreeForm_1364
num_bytes: 773891208
num_examples: 2000
- name: FreeForm_1300
num_bytes: 777522788
num_examples: 2000
- name: FreeForm_1179
num_bytes: 779669212
num_examples: 2000
- name: FreeForm_1365
num_bytes: 776530326
num_examples: 2000
- name: FreeForm_1301
num_bytes: 779676562
num_examples: 2000
- name: FreeForm_1180
num_bytes: 775842626
num_examples: 2000
- name: FreeForm_1068
num_bytes: 778768145
num_examples: 2000
- name: FreeForm_1116
num_bytes: 781241772
num_examples: 2000
- name: FreeForm_1423
num_bytes: 781624549
num_examples: 2000
- name: FreeForm_1366
num_bytes: 774954357
num_examples: 2000
- name: FreeForm_1118
num_bytes: 773858637
num_examples: 2000
- name: FreeForm_1242
num_bytes: 769621466
num_examples: 2000
- name: FreeForm_1368
num_bytes: 780913717
num_examples: 2000
- name: FreeForm_1183
num_bytes: 767486681
num_examples: 2000
- name: FreeForm_1304
num_bytes: 780834799
num_examples: 2000
- name: FreeForm_1490
num_bytes: 780387151
num_examples: 2000
- name: FreeForm_1512
num_bytes: 778197016
num_examples: 2000
- name: FreeForm_1244
num_bytes: 772995330
num_examples: 2000
- name: FreeForm_1120
num_bytes: 779301535
num_examples: 2000
- name: FreeForm_1370
num_bytes: 776231720
num_examples: 2000
- name: FreeForm_1492
num_bytes: 773885264
num_examples: 2000
- name: FreeForm_1245
num_bytes: 779206640
num_examples: 2000
- name: FreeForm_1493
num_bytes: 773502241
num_examples: 2000
- name: FreeForm_1307
num_bytes: 771031781
num_examples: 2000
- name: FreeForm_1515
num_bytes: 778669871
num_examples: 2000
- name: FreeForm_1246
num_bytes: 780880343
num_examples: 2000
- name: FreeForm_1372
num_bytes: 770981961
num_examples: 2000
- name: FreeForm_1122
num_bytes: 778079182
num_examples: 2000
- name: FreeForm_1494
num_bytes: 776772801
num_examples: 2000
- name: FreeForm_1516
num_bytes: 773843230
num_examples: 2000
- name: FreeForm_1247
num_bytes: 770214115
num_examples: 2000
- name: FreeForm_1373
num_bytes: 787407590
num_examples: 2000
- name: FreeForm_1123
num_bytes: 779586645
num_examples: 2000
- name: FreeForm_1424
num_bytes: 781336954
num_examples: 2000
- name: FreeForm_1495
num_bytes: 777255582
num_examples: 2000
- name: FreeForm_1188
num_bytes: 786940051
num_examples: 2000
- name: FreeForm_1517
num_bytes: 774620951
num_examples: 2000
- name: FreeForm_1124
num_bytes: 776836685
num_examples: 2000
- name: FreeForm_1496
num_bytes: 781872763
num_examples: 2000
- name: FreeForm_1189
num_bytes: 771657509
num_examples: 2000
- name: FreeForm_1518
num_bytes: 773601547
num_examples: 2000
- name: FreeForm_1375
num_bytes: 779587165
num_examples: 2000
- name: FreeForm_1249
num_bytes: 773157176
num_examples: 2000
- name: FreeForm_1125
num_bytes: 775791033
num_examples: 2000
- name: FreeForm_1190
num_bytes: 777443084
num_examples: 2000
- name: FreeForm_1519
num_bytes: 780951682
num_examples: 2000
- name: FreeForm_1376
num_bytes: 777216870
num_examples: 2000
- name: FreeForm_1250
num_bytes: 775914126
num_examples: 2000
- name: FreeForm_1126
num_bytes: 781352076
num_examples: 2000
- name: FreeForm_1520
num_bytes: 775083183
num_examples: 2000
- name: FreeForm_1312
num_bytes: 778292149
num_examples: 2000
- name: FreeForm_1498
num_bytes: 774890612
num_examples: 2000
- name: FreeForm_1377
num_bytes: 785004845
num_examples: 2000
- name: FreeForm_1251
num_bytes: 789816754
num_examples: 2000
- name: FreeForm_1127
num_bytes: 770241132
num_examples: 2000
- name: FreeForm_1521
num_bytes: 776731607
num_examples: 2000
- name: FreeForm_1313
num_bytes: 778278211
num_examples: 2000
- name: FreeForm_1378
num_bytes: 771032430
num_examples: 2000
- name: FreeForm_1128
num_bytes: 777986250
num_examples: 2000
- name: FreeForm_1522
num_bytes: 771913901
num_examples: 2000
- name: FreeForm_1314
num_bytes: 785118185
num_examples: 2000
- name: FreeForm_1523
num_bytes: 771339035
num_examples: 2000
- name: FreeForm_1315
num_bytes: 781667460
num_examples: 2000
- name: FreeForm_1380
num_bytes: 773398852
num_examples: 2000
- name: FreeForm_1427
num_bytes: 772298723
num_examples: 2000
- name: FreeForm_1524
num_bytes: 768520469
num_examples: 2000
- name: FreeForm_1194
num_bytes: 782161236
num_examples: 2000
- name: FreeForm_1381
num_bytes: 773830458
num_examples: 2000
- name: FreeForm_1428
num_bytes: 771662432
num_examples: 2000
- name: FreeForm_1255
num_bytes: 768537036
num_examples: 2000
- name: FreeForm_1525
num_bytes: 778009921
num_examples: 2000
- name: FreeForm_1195
num_bytes: 777335139
num_examples: 2000
- name: FreeForm_1429
num_bytes: 764834149
num_examples: 2000
- name: FreeForm_1382
num_bytes: 775094191
num_examples: 2000
- name: FreeForm_1256
num_bytes: 773398652
num_examples: 2000
- name: FreeForm_1526
num_bytes: 770376404
num_examples: 2000
- name: FreeForm_1196
num_bytes: 778901116
num_examples: 2000
- name: FreeForm_1430
num_bytes: 771870799
num_examples: 2000
- name: FreeForm_1383
num_bytes: 775693605
num_examples: 2000
- name: FreeForm_1257
num_bytes: 767589408
num_examples: 2000
- name: FreeForm_1318
num_bytes: 780715386
num_examples: 2000
- name: FreeForm_1504
num_bytes: 779906843
num_examples: 2000
- name: FreeForm_1431
num_bytes: 776734403
num_examples: 2000
- name: FreeForm_1384
num_bytes: 774244033
num_examples: 2000
- name: FreeForm_1258
num_bytes: 776236989
num_examples: 2000
- name: FreeForm_1528
num_bytes: 778645804
num_examples: 2000
- name: FreeForm_1319
num_bytes: 774145055
num_examples: 2000
- name: FreeForm_1505
num_bytes: 775022647
num_examples: 2000
- name: FreeForm_1576
num_bytes: 777459214
num_examples: 2000
- name: FreeForm_1432
num_bytes: 773078854
num_examples: 2000
- name: FreeForm_1385
num_bytes: 770012790
num_examples: 2000
- name: FreeForm_1701
num_bytes: 771338275
num_examples: 2000
- name: FreeForm_1639
num_bytes: 776242518
num_examples: 2000
- name: FreeForm_1530
num_bytes: 774636910
num_examples: 2000
- name: FreeForm_1321
num_bytes: 772639127
num_examples: 2000
- name: FreeForm_1507
num_bytes: 774145767
num_examples: 2000
- name: FreeForm_1702
num_bytes: 769111676
num_examples: 2000
- name: FreeForm_1434
num_bytes: 776396590
num_examples: 2000
- name: FreeForm_1640
num_bytes: 774255527
num_examples: 2000
- name: FreeForm_1531
num_bytes: 769083709
num_examples: 2000
- name: FreeForm_1508
num_bytes: 775690083
num_examples: 2000
- name: FreeForm_1435
num_bytes: 768501130
num_examples: 2000
- name: FreeForm_1766
num_bytes: 772371623
num_examples: 2000
- name: FreeForm_1579
num_bytes: 771025814
num_examples: 2000
- name: FreeForm_1641
num_bytes: 779599332
num_examples: 2000
- name: FreeForm_1827
num_bytes: 775437486
num_examples: 2000
- name: FreeForm_1436
num_bytes: 770276884
num_examples: 2000
- name: FreeForm_1704
num_bytes: 775091117
num_examples: 2000
- name: FreeForm_1642
num_bytes: 776944029
num_examples: 2000
- name: FreeForm_1828
num_bytes: 778105987
num_examples: 2000
- name: FreeForm_1437
num_bytes: 778463269
num_examples: 2000
- name: FreeForm_1581
num_bytes: 781065185
num_examples: 2000
- name: FreeForm_1643
num_bytes: 776678831
num_examples: 2000
- name: FreeForm_1534
num_bytes: 776481583
num_examples: 2000
- name: FreeForm_1511
num_bytes: 774971010
num_examples: 2000
- name: FreeForm_1707
num_bytes: 763593691
num_examples: 2000
- name: FreeForm_1583
num_bytes: 770777355
num_examples: 2000
- name: FreeForm_1770
num_bytes: 777379608
num_examples: 2000
- name: FreeForm_1536
num_bytes: 781906336
num_examples: 2000
- name: FreeForm_1891
num_bytes: 783154996
num_examples: 2000
- name: FreeForm_1645
num_bytes: 779043465
num_examples: 2000
- name: FreeForm_1831
num_bytes: 779558675
num_examples: 2000
- name: FreeForm_1585
num_bytes: 774986574
num_examples: 2000
- name: FreeForm_1538
num_bytes: 771463098
num_examples: 2000
- name: FreeForm_1893
num_bytes: 775479546
num_examples: 2000
- name: FreeForm_1442
num_bytes: 772404804
num_examples: 2000
- name: FreeForm_1586
num_bytes: 781702151
num_examples: 2000
- name: FreeForm_1648
num_bytes: 773660147
num_examples: 2000
- name: FreeForm_1711
num_bytes: 780109753
num_examples: 2000
- name: FreeForm_1443
num_bytes: 766747197
num_examples: 2000
- name: FreeForm_1773
num_bytes: 774325226
num_examples: 2000
- name: FreeForm_1540
num_bytes: 770666305
num_examples: 2000
- name: FreeForm_1649
num_bytes: 776319711
num_examples: 2000
- name: FreeForm_1712
num_bytes: 770957101
num_examples: 2000
- name: FreeForm_1895
num_bytes: 770548607
num_examples: 2000
- name: FreeForm_1444
num_bytes: 784803015
num_examples: 2000
- name: FreeForm_1774
num_bytes: 773435164
num_examples: 2000
- name: FreeForm_1541
num_bytes: 773616113
num_examples: 2000
- name: FreeForm_1835
num_bytes: 780606549
num_examples: 2000
- name: FreeForm_1588
num_bytes: 775578246
num_examples: 2000
- name: FreeForm_1445
num_bytes: 778076077
num_examples: 2000
- name: FreeForm_1896
num_bytes: 771418372
num_examples: 2000
- name: FreeForm_1542
num_bytes: 780867652
num_examples: 2000
- name: FreeForm_1775
num_bytes: 770595969
num_examples: 2000
- name: FreeForm_1589
num_bytes: 770576399
num_examples: 2000
- name: FreeForm_1714
num_bytes: 772460649
num_examples: 2000
- name: FreeForm_1897
num_bytes: 774325510
num_examples: 2000
- name: FreeForm_1543
num_bytes: 777027575
num_examples: 2000
- name: FreeForm_1590
num_bytes: 779089115
num_examples: 2000
- name: FreeForm_1715
num_bytes: 783861822
num_examples: 2000
- name: FreeForm_1447
num_bytes: 775405219
num_examples: 2000
- name: FreeForm_1591
num_bytes: 769975593
num_examples: 2000
- name: FreeForm_1544
num_bytes: 778777533
num_examples: 2000
- name: FreeForm_1838
num_bytes: 775828792
num_examples: 2000
- name: FreeForm_1716
num_bytes: 774101550
num_examples: 2000
- name: FreeForm_1448
num_bytes: 772238327
num_examples: 2000
- name: FreeForm_1545
num_bytes: 770967701
num_examples: 2000
- name: FreeForm_1592
num_bytes: 777424108
num_examples: 2000
- name: FreeForm_1717
num_bytes: 774522898
num_examples: 2000
- name: FreeForm_1953
num_bytes: 771799236
num_examples: 2000
- name: FreeForm_1900
num_bytes: 780148702
num_examples: 2000
- name: FreeForm_1779
num_bytes: 776738221
num_examples: 2000
- name: FreeForm_1954
num_bytes: 774180999
num_examples: 2000
- name: FreeForm_1901
num_bytes: 780619673
num_examples: 2000
- name: FreeForm_1594
num_bytes: 777472801
num_examples: 2000
- name: FreeForm_1719
num_bytes: 777326991
num_examples: 2000
- name: FreeForm_1841
num_bytes: 771308279
num_examples: 2000
- name: FreeForm_1548
num_bytes: 770163212
num_examples: 2000
- name: FreeForm_1595
num_bytes: 772170521
num_examples: 2000
- name: FreeForm_1720
num_bytes: 772493860
num_examples: 2000
- name: FreeForm_1842
num_bytes: 771592650
num_examples: 2000
- name: FreeForm_1656
num_bytes: 771999855
num_examples: 2000
- name: FreeForm_1781
num_bytes: 777125987
num_examples: 2000
- name: FreeForm_1721
num_bytes: 776375890
num_examples: 2000
- name: FreeForm_1657
num_bytes: 778104922
num_examples: 2000
- name: FreeForm_1782
num_bytes: 779534066
num_examples: 2000
- name: FreeForm_1904
num_bytes: 765267839
num_examples: 2000
- name: FreeForm_1597
num_bytes: 769496067
num_examples: 2000
- name: FreeForm_1844
num_bytes: 767079297
num_examples: 2000
- name: FreeForm_1957
num_bytes: 775659155
num_examples: 2000
- name: FreeForm_1551
num_bytes: 782053459
num_examples: 2000
- name: FreeForm_1905
num_bytes: 770097688
num_examples: 2000
- name: FreeForm_1598
num_bytes: 773060032
num_examples: 2000
- name: FreeForm_1723
num_bytes: 776571367
num_examples: 2000
- name: FreeForm_1659
num_bytes: 767291404
num_examples: 2000
- name: FreeForm_1552
num_bytes: 774111834
num_examples: 2000
- name: FreeForm_1784
num_bytes: 767427750
num_examples: 2000
- name: FreeForm_1599
num_bytes: 777344888
num_examples: 2000
- name: FreeForm_1724
num_bytes: 777742400
num_examples: 2000
- name: FreeForm_1660
num_bytes: 774378651
num_examples: 2000
- name: FreeForm_1725
num_bytes: 787134242
num_examples: 2000
- name: FreeForm_1960
num_bytes: 771486600
num_examples: 2000
- name: FreeForm_1661
num_bytes: 783677147
num_examples: 2000
- name: FreeForm_1554
num_bytes: 780725222
num_examples: 2000
- name: FreeForm_1847
num_bytes: 778510803
num_examples: 2000
- name: FreeForm_1726
num_bytes: 776823901
num_examples: 2000
- name: FreeForm_1601
num_bytes: 775123180
num_examples: 2000
- name: FreeForm_1908
num_bytes: 776216634
num_examples: 2000
- name: FreeForm_1662
num_bytes: 775888677
num_examples: 2000
- name: FreeForm_1848
num_bytes: 784339905
num_examples: 2000
- name: FreeForm_1602
num_bytes: 772905006
num_examples: 2000
- name: FreeForm_1909
num_bytes: 771662853
num_examples: 2000
- name: FreeForm_1603
num_bytes: 772030313
num_examples: 2000
- name: FreeForm_1910
num_bytes: 769654437
num_examples: 2000
- name: FreeForm_1557
num_bytes: 776514469
num_examples: 2000
- name: FreeForm_1604
num_bytes: 779429331
num_examples: 2000
- name: FreeForm_1789
num_bytes: 773726710
num_examples: 2000
- name: FreeForm_1558
num_bytes: 776427709
num_examples: 2000
- name: FreeForm_1665
num_bytes: 767990537
num_examples: 2000
- name: FreeForm_1605
num_bytes: 774426474
num_examples: 2000
- name: FreeForm_1852
num_bytes: 769143639
num_examples: 2000
- name: FreeForm_1791
num_bytes: 767586822
num_examples: 2000
- name: FreeForm_1667
num_bytes: 772290052
num_examples: 2000
- name: FreeForm_1607
num_bytes: 768456885
num_examples: 2000
- name: FreeForm_1913
num_bytes: 779963651
num_examples: 2000
- name: FreeForm_1732
num_bytes: 772897019
num_examples: 2000
- name: FreeForm_1669
num_bytes: 776027758
num_examples: 2000
- name: FreeForm_1609
num_bytes: 768567004
num_examples: 2000
- name: FreeForm_1562
num_bytes: 769935418
num_examples: 2000
- name: FreeForm_1915
num_bytes: 782856606
num_examples: 2000
- name: FreeForm_1968
num_bytes: 767376995
num_examples: 2000
- name: FreeForm_1734
num_bytes: 769087259
num_examples: 2000
- name: FreeForm_1855
num_bytes: 779535816
num_examples: 2000
- name: FreeForm_1670
num_bytes: 781332277
num_examples: 2000
- name: FreeForm_1610
num_bytes: 781231841
num_examples: 2000
- name: FreeForm_1969
num_bytes: 777875017
num_examples: 2000
- name: FreeForm_1795
num_bytes: 775452519
num_examples: 2000
- name: FreeForm_1671
num_bytes: 777366861
num_examples: 2000
- name: FreeForm_1611
num_bytes: 784641102
num_examples: 2000
- name: FreeForm_1917
num_bytes: 777599611
num_examples: 2000
- name: FreeForm_1564
num_bytes: 780590282
num_examples: 2000
- name: FreeForm_1970
num_bytes: 773274829
num_examples: 2000
- name: FreeForm_1796
num_bytes: 782533872
num_examples: 2000
- name: FreeForm_1857
num_bytes: 780690564
num_examples: 2000
- name: FreeForm_1672
num_bytes: 768657526
num_examples: 2000
- name: FreeForm_1565
num_bytes: 768593353
num_examples: 2000
- name: FreeForm_1971
num_bytes: 770849547
num_examples: 2000
- name: FreeForm_1673
num_bytes: 773737499
num_examples: 2000
- name: FreeForm_1797
num_bytes: 783757126
num_examples: 2000
- name: FreeForm_1972
num_bytes: 772193432
num_examples: 2000
- name: FreeForm_1566
num_bytes: 782382857
num_examples: 2000
- name: FreeForm_1674
num_bytes: 776755282
num_examples: 2000
- name: FreeForm_1859
num_bytes: 775406752
num_examples: 2000
- name: FreeForm_1738
num_bytes: 768406452
num_examples: 2000
- name: FreeForm_1567
num_bytes: 776284767
num_examples: 2000
- name: FreeForm_1799
num_bytes: 779221193
num_examples: 2000
- name: FreeForm_1614
num_bytes: 774084638
num_examples: 2000
- name: FreeForm_1860
num_bytes: 779270331
num_examples: 2000
- name: FreeForm_1568
num_bytes: 778648659
num_examples: 2000
- name: FreeForm_1740
num_bytes: 773598842
num_examples: 2000
- name: FreeForm_1676
num_bytes: 779241237
num_examples: 2000
- name: FreeForm_1974
num_bytes: 777030113
num_examples: 2000
- name: FreeForm_1741
num_bytes: 778885616
num_examples: 2000
- name: FreeForm_1923
num_bytes: 769765231
num_examples: 2000
- name: FreeForm_1742
num_bytes: 778556450
num_examples: 2000
- name: FreeForm_1617
num_bytes: 775776789
num_examples: 2000
- name: FreeForm_1924
num_bytes: 774657873
num_examples: 2000
- name: FreeForm_1743
num_bytes: 769957345
num_examples: 2000
- name: FreeForm_1803
num_bytes: 779399830
num_examples: 2000
- name: FreeForm_1679
num_bytes: 770562122
num_examples: 2000
- name: FreeForm_1864
num_bytes: 775414698
num_examples: 2000
- name: FreeForm_1744
num_bytes: 772432481
num_examples: 2000
- name: FreeForm_1804
num_bytes: 769489846
num_examples: 2000
- name: FreeForm_1865
num_bytes: 772874771
num_examples: 2000
- name: FreeForm_1978
num_bytes: 770923318
num_examples: 2000
- name: FreeForm_1745
num_bytes: 775570130
num_examples: 2000
- name: FreeForm_1573
num_bytes: 778101981
num_examples: 2000
- name: FreeForm_1805
num_bytes: 773192041
num_examples: 2000
- name: FreeForm_1620
num_bytes: 770438186
num_examples: 2000
- name: FreeForm_1681
num_bytes: 773269627
num_examples: 2000
- name: FreeForm_1927
num_bytes: 777793544
num_examples: 2000
- name: FreeForm_1979
num_bytes: 772277123
num_examples: 2000
- name: FreeForm_1746
num_bytes: 768024663
num_examples: 2000
- name: FreeForm_1574
num_bytes: 775182043
num_examples: 2000
- name: FreeForm_1867
num_bytes: 772336683
num_examples: 2000
- name: FreeForm_1621
num_bytes: 779643601
num_examples: 2000
- name: FreeForm_1806
num_bytes: 772147940
num_examples: 2000
- name: FreeForm_1747
num_bytes: 782069613
num_examples: 2000
- name: FreeForm_1868
num_bytes: 766212112
num_examples: 2000
- name: FreeForm_1807
num_bytes: 776026001
num_examples: 2000
- name: FreeForm_1683
num_bytes: 772923845
num_examples: 2000
- name: FreeForm_1748
num_bytes: 770643722
num_examples: 2000
- name: FreeForm_1623
num_bytes: 781995507
num_examples: 2000
- name: FreeForm_1749
num_bytes: 773868228
num_examples: 2000
- name: FreeForm_1870
num_bytes: 779144486
num_examples: 2000
- name: FreeForm_1624
num_bytes: 772465705
num_examples: 2000
- name: FreeForm_1809
num_bytes: 770882826
num_examples: 2000
- name: FreeForm_1750
num_bytes: 768457543
num_examples: 2000
- name: FreeForm_1931
num_bytes: 772448872
num_examples: 2000
- name: FreeForm_1983
num_bytes: 767368466
num_examples: 2000
- name: FreeForm_1625
num_bytes: 779336106
num_examples: 2000
- name: FreeForm_1871
num_bytes: 773989099
num_examples: 2000
- name: FreeForm_1810
num_bytes: 781846996
num_examples: 2000
- name: FreeForm_1751
num_bytes: 770607707
num_examples: 2000
- name: FreeForm_1932
num_bytes: 775846499
num_examples: 2000
- name: FreeForm_1686
num_bytes: 775900812
num_examples: 2000
- name: FreeForm_1811
num_bytes: 774726677
num_examples: 2000
- name: FreeForm_1872
num_bytes: 776443102
num_examples: 2000
- name: FreeForm_1687
num_bytes: 773365850
num_examples: 2000
- name: FreeForm_1627
num_bytes: 775013436
num_examples: 2000
- name: FreeForm_1812
num_bytes: 774970479
num_examples: 2000
- name: FreeForm_1688
num_bytes: 777417292
num_examples: 2000
- name: FreeForm_1628
num_bytes: 771889019
num_examples: 2000
- name: FreeForm_1986
num_bytes: 777492292
num_examples: 2000
- name: FreeForm_1813
num_bytes: 775689254
num_examples: 2000
- name: FreeForm_1630
num_bytes: 763103601
num_examples: 2000
- name: FreeForm_1690
num_bytes: 771372106
num_examples: 2000
- name: FreeForm_1988
num_bytes: 772915325
num_examples: 2000
- name: FreeForm_1876
num_bytes: 771998762
num_examples: 2000
- name: FreeForm_1756
num_bytes: 777770864
num_examples: 2000
- name: FreeForm_1691
num_bytes: 774314799
num_examples: 2000
- name: FreeForm_1937
num_bytes: 777366277
num_examples: 2000
- name: FreeForm_1631
num_bytes: 771345279
num_examples: 2000
- name: FreeForm_1878
num_bytes: 767875789
num_examples: 2000
- name: FreeForm_1817
num_bytes: 768709391
num_examples: 2000
- name: FreeForm_1633
num_bytes: 771233969
num_examples: 2000
- name: FreeForm_1991
num_bytes: 769596136
num_examples: 2000
- name: FreeForm_1694
num_bytes: 772171191
num_examples: 2000
- name: FreeForm_1634
num_bytes: 769627140
num_examples: 2000
- name: FreeForm_1940
num_bytes: 776593617
num_examples: 2000
- name: FreeForm_1992
num_bytes: 777116071
num_examples: 2000
- name: FreeForm_1695
num_bytes: 775752244
num_examples: 2000
- name: FreeForm_1635
num_bytes: 775899627
num_examples: 2000
- name: FreeForm_1880
num_bytes: 776396050
num_examples: 2000
- name: FreeForm_1760
num_bytes: 768289077
num_examples: 2000
- name: FreeForm_1696
num_bytes: 784599423
num_examples: 2000
- name: FreeForm_1820
num_bytes: 775526982
num_examples: 2000
- name: FreeForm_1636
num_bytes: 779188921
num_examples: 2000
- name: FreeForm_1881
num_bytes: 768184329
num_examples: 2000
- name: FreeForm_1761
num_bytes: 771237846
num_examples: 2000
- name: FreeForm_1942
num_bytes: 774592400
num_examples: 2000
- name: FreeForm_1697
num_bytes: 777361676
num_examples: 2000
- name: FreeForm_1637
num_bytes: 775511943
num_examples: 2000
- name: FreeForm_1882
num_bytes: 773007481
num_examples: 2000
- name: FreeForm_1943
num_bytes: 776785506
num_examples: 2000
- name: FreeForm_1762
num_bytes: 770796170
num_examples: 2000
- name: FreeForm_1995
num_bytes: 774343622
num_examples: 2000
- name: FreeForm_1883
num_bytes: 773607987
num_examples: 2000
- name: FreeForm_1698
num_bytes: 778047450
num_examples: 2000
- name: FreeForm_1822
num_bytes: 778444354
num_examples: 2000
- name: FreeForm_1944
num_bytes: 769459278
num_examples: 2000
- name: FreeForm_1884
num_bytes: 772799351
num_examples: 2000
- name: FreeForm_1823
num_bytes: 776495132
num_examples: 2000
- name: FreeForm_1945
num_bytes: 775081306
num_examples: 2000
- name: FreeForm_1885
num_bytes: 771521453
num_examples: 2000
- name: FreeForm_1700
num_bytes: 765143515
num_examples: 2000
- name: FreeForm_1946
num_bytes: 776201196
num_examples: 2000
- name: FreeForm_1886
num_bytes: 772053340
num_examples: 2000
- name: FreeForm_1825
num_bytes: 773203747
num_examples: 2000
- name: FreeForm_1947
num_bytes: 771770136
num_examples: 2000
- name: FreeForm_1887
num_bytes: 779615516
num_examples: 2000
- name: FreeForm_1826
num_bytes: 773148215
num_examples: 2000
- name: FreeForm_1948
num_bytes: 772645007
num_examples: 2000
- name: FreeForm_1888
num_bytes: 772856693
num_examples: 2000
- name: FreeForm_1999
num_bytes: 769374754
num_examples: 2000
- name: FreeForm_1949
num_bytes: 773280379
num_examples: 2000
- name: FreeForm_1889
num_bytes: 774735177
num_examples: 2000
- name: FreeForm_1950
num_bytes: 774599150
num_examples: 2000
- name: FreeForm_1951
num_bytes: 767662993
num_examples: 2000
- name: FreeForm_1952
num_bytes: 764039694
num_examples: 2000
- name: FreeForm_538
num_bytes: 789922342
num_examples: 2000
- name: FreeForm_965
num_bytes: 782703569
num_examples: 2000
- name: FreeForm_539
num_bytes: 781175362
num_examples: 2000
- name: FreeForm_903
num_bytes: 777441158
num_examples: 2000
- name: FreeForm_540
num_bytes: 782021717
num_examples: 2000
- name: FreeForm_917
num_bytes: 781067199
num_examples: 2000
- name: FreeForm_541
num_bytes: 775971262
num_examples: 2000
- name: FreeForm_604
num_bytes: 785217033
num_examples: 2000
- name: FreeForm_818
num_bytes: 779756338
num_examples: 2000
- name: FreeForm_728
num_bytes: 776195434
num_examples: 2000
- name: FreeForm_606
num_bytes: 778882561
num_examples: 2000
- name: FreeForm_997
num_bytes: 784575711
num_examples: 2000
- name: FreeForm_562
num_bytes: 776825755
num_examples: 2000
- name: FreeForm_623
num_bytes: 783935630
num_examples: 2000
- name: FreeForm_1021
num_bytes: 774340124
num_examples: 2000
- name: FreeForm_731
num_bytes: 781291514
num_examples: 2000
- name: FreeForm_940
num_bytes: 785912855
num_examples: 2000
- name: FreeForm_732
num_bytes: 779065415
num_examples: 2000
- name: FreeForm_878
num_bytes: 775573675
num_examples: 2000
- name: FreeForm_1067
num_bytes: 779476433
num_examples: 2000
- name: FreeForm_669
num_bytes: 783825944
num_examples: 2000
- name: FreeForm_879
num_bytes: 781175453
num_examples: 2000
- name: FreeForm_1162
num_bytes: 775534366
num_examples: 2000
- name: FreeForm_1099
num_bytes: 776744419
num_examples: 2000
- name: FreeForm_670
num_bytes: 782818795
num_examples: 2000
- name: FreeForm_1172
num_bytes: 772800488
num_examples: 2000
- name: FreeForm_1222
num_bytes: 768753542
num_examples: 2000
- name: FreeForm_686
num_bytes: 779647058
num_examples: 2000
- name: FreeForm_1337
num_bytes: 777645742
num_examples: 2000
- name: FreeForm_688
num_bytes: 783226366
num_examples: 2000
- name: FreeForm_1115
num_bytes: 777750807
num_examples: 2000
- name: FreeForm_1265
num_bytes: 782280644
num_examples: 2000
- name: FreeForm_1117
num_bytes: 771938043
num_examples: 2000
- name: FreeForm_1418
num_bytes: 773562141
num_examples: 2000
- name: FreeForm_1513
num_bytes: 772269953
num_examples: 2000
- name: FreeForm_1360
num_bytes: 770456201
num_examples: 2000
- name: FreeForm_1422
num_bytes: 766260039
num_examples: 2000
- name: FreeForm_1514
num_bytes: 778588888
num_examples: 2000
- name: FreeForm_1290
num_bytes: 776704724
num_examples: 2000
- name: FreeForm_1487
num_bytes: 771203540
num_examples: 2000
- name: FreeForm_1527
num_bytes: 776428854
num_examples: 2000
- name: FreeForm_1299
num_bytes: 774592302
num_examples: 2000
- name: FreeForm_1488
num_bytes: 772030662
num_examples: 2000
- name: FreeForm_1529
num_bytes: 769107675
num_examples: 2000
- name: FreeForm_1302
num_bytes: 783287330
num_examples: 2000
- name: FreeForm_1371
num_bytes: 778291875
num_examples: 2000
- name: FreeForm_1439
num_bytes: 775125426
num_examples: 2000
- name: FreeForm_1638
num_bytes: 770945774
num_examples: 2000
- name: FreeForm_1305
num_bytes: 774733211
num_examples: 2000
- name: FreeForm_1644
num_bytes: 763865811
num_examples: 2000
- name: FreeForm_1308
num_bytes: 770073632
num_examples: 2000
- name: FreeForm_1497
num_bytes: 774371998
num_examples: 2000
- name: FreeForm_1706
num_bytes: 767965922
num_examples: 2000
- name: FreeForm_1830
num_bytes: 777364204
num_examples: 2000
- name: FreeForm_1650
num_bytes: 774946127
num_examples: 2000
- name: FreeForm_1537
num_bytes: 770611835
num_examples: 2000
- name: FreeForm_1832
num_bytes: 769485028
num_examples: 2000
- name: FreeForm_1776
num_bytes: 779900472
num_examples: 2000
- name: FreeForm_1322
num_bytes: 778172819
num_examples: 2000
- name: FreeForm_1833
num_bytes: 768188642
num_examples: 2000
- name: FreeForm_1713
num_bytes: 772172320
num_examples: 2000
- name: FreeForm_1553
num_bytes: 774246555
num_examples: 2000
- name: FreeForm_1596
num_bytes: 775757405
num_examples: 2000
- name: FreeForm_1663
num_bytes: 777946907
num_examples: 2000
- name: FreeForm_1556
num_bytes: 770487590
num_examples: 2000
- name: FreeForm_1783
num_bytes: 774307481
num_examples: 2000
- name: FreeForm_1912
num_bytes: 774185583
num_examples: 2000
- name: FreeForm_1559
num_bytes: 774629139
num_examples: 2000
- name: FreeForm_1785
num_bytes: 776955190
num_examples: 2000
- name: FreeForm_1666
num_bytes: 767827026
num_examples: 2000
- name: FreeForm_1729
num_bytes: 780695121
num_examples: 2000
- name: FreeForm_1788
num_bytes: 766180430
num_examples: 2000
- name: FreeForm_1668
num_bytes: 769715133
num_examples: 2000
- name: FreeForm_1918
num_bytes: 774617311
num_examples: 2000
- name: FreeForm_1563
num_bytes: 774817952
num_examples: 2000
- name: FreeForm_1675
num_bytes: 773030944
num_examples: 2000
- name: FreeForm_1962
num_bytes: 786053209
num_examples: 2000
- name: FreeForm_1792
num_bytes: 774700008
num_examples: 2000
- name: FreeForm_1615
num_bytes: 774380131
num_examples: 2000
- name: FreeForm_1846
num_bytes: 774658032
num_examples: 2000
- name: FreeForm_1616
num_bytes: 782429195
num_examples: 2000
- name: FreeForm_1850
num_bytes: 775140091
num_examples: 2000
- name: FreeForm_1964
num_bytes: 780393901
num_examples: 2000
- name: FreeForm_1801
num_bytes: 768773753
num_examples: 2000
- name: FreeForm_1851
num_bytes: 775091817
num_examples: 2000
- name: FreeForm_1965
num_bytes: 774710107
num_examples: 2000
- name: FreeForm_1626
num_bytes: 776500055
num_examples: 2000
- name: FreeForm_1853
num_bytes: 774376334
num_examples: 2000
- name: FreeForm_1967
num_bytes: 767462102
num_examples: 2000
- name: FreeForm_1692
num_bytes: 766343506
num_examples: 2000
- name: FreeForm_1854
num_bytes: 768674186
num_examples: 2000
- name: FreeForm_1975
num_bytes: 765777279
num_examples: 2000
- name: FreeForm_1699
num_bytes: 778883501
num_examples: 2000
- name: FreeForm_1755
num_bytes: 783000185
num_examples: 2000
- name: FreeForm_1757
num_bytes: 769193034
num_examples: 2000
- name: FreeForm_1763
num_bytes: 772044823
num_examples: 2000
- name: FreeForm_1814
num_bytes: 777568635
num_examples: 2000
- name: FreeForm_1816
num_bytes: 776191715
num_examples: 2000
- name: FreeForm_1821
num_bytes: 777857890
num_examples: 2000
- name: FreeForm_1856
num_bytes: 769967566
num_examples: 2000
- name: FreeForm_1862
num_bytes: 767341817
num_examples: 2000
- name: FreeForm_1873
num_bytes: 772574070
num_examples: 2000
- name: FreeForm_1875
num_bytes: 770945433
num_examples: 2000
- name: FreeForm_1877
num_bytes: 772618224
num_examples: 2000
- name: FreeForm_1935
num_bytes: 780171644
num_examples: 2000
- name: FreeForm_1936
num_bytes: 780368989
num_examples: 2000
- name: FreeForm_1938
num_bytes: 775192638
num_examples: 2000
- name: FreeForm_1939
num_bytes: 768517191
num_examples: 2000
- name: FreeForm_1941
num_bytes: 767928606
num_examples: 2000
- name: FreeForm_1977
num_bytes: 780736929
num_examples: 2000
- name: FreeForm_1981
num_bytes: 775615890
num_examples: 2000
- name: FreeForm_1984
num_bytes: 769609649
num_examples: 2000
- name: FreeForm_1985
num_bytes: 770730441
num_examples: 2000
- name: FreeForm_1987
num_bytes: 768263066
num_examples: 2000
- name: FreeForm_1989
num_bytes: 780388977
num_examples: 2000
- name: FreeForm_1990
num_bytes: 772863509
num_examples: 2000
- name: FreeForm_1993
num_bytes: 773757340
num_examples: 2000
- name: FreeForm_1996
num_bytes: 770872885
num_examples: 2000
- name: FreeForm_2000
num_bytes: 32585530
num_examples: 83
- name: FreeForm_1205
    num_bytes: 776134960
num_examples: 2000
download_size: 1182151585538
  dataset_size: 1177371972678
configs:
- config_name: default
data_files:
- split: FreeForm_0
path: data/FreeForm_0-*
- split: FreeForm_1
path: data/FreeForm_1-*
- split: FreeForm_2
path: data/FreeForm_2-*
- split: FreeForm_3
path: data/FreeForm_3-*
- split: FreeForm_4
path: data/FreeForm_4-*
- split: FreeForm_5
path: data/FreeForm_5-*
- split: FreeForm_6
path: data/FreeForm_6-*
- split: FreeForm_7
path: data/FreeForm_7-*
- split: FreeForm_8
path: data/FreeForm_8-*
- split: FreeForm_9
path: data/FreeForm_9-*
- split: FreeForm_10
path: data/FreeForm_10-*
- split: FreeForm_11
path: data/FreeForm_11-*
- split: FreeForm_12
path: data/FreeForm_12-*
- split: FreeForm_13
path: data/FreeForm_13-*
- split: FreeForm_14
path: data/FreeForm_14-*
- split: FreeForm_15
path: data/FreeForm_15-*
- split: FreeForm_16
path: data/FreeForm_16-*
- split: FreeForm_17
path: data/FreeForm_17-*
- split: FreeForm_18
path: data/FreeForm_18-*
- split: FreeForm_19
path: data/FreeForm_19-*
- split: FreeForm_20
path: data/FreeForm_20-*
- split: FreeForm_21
path: data/FreeForm_21-*
- split: FreeForm_22
path: data/FreeForm_22-*
- split: FreeForm_23
path: data/FreeForm_23-*
- split: FreeForm_24
path: data/FreeForm_24-*
- split: FreeForm_25
path: data/FreeForm_25-*
- split: FreeForm_26
path: data/FreeForm_26-*
- split: FreeForm_27
path: data/FreeForm_27-*
- split: FreeForm_28
path: data/FreeForm_28-*
- split: FreeForm_29
path: data/FreeForm_29-*
- split: FreeForm_30
path: data/FreeForm_30-*
- split: FreeForm_31
path: data/FreeForm_31-*
- split: FreeForm_32
path: data/FreeForm_32-*
- split: FreeForm_33
path: data/FreeForm_33-*
- split: FreeForm_34
path: data/FreeForm_34-*
- split: FreeForm_35
path: data/FreeForm_35-*
- split: FreeForm_36
path: data/FreeForm_36-*
- split: FreeForm_37
path: data/FreeForm_37-*
- split: FreeForm_38
path: data/FreeForm_38-*
- split: FreeForm_39
path: data/FreeForm_39-*
- split: FreeForm_40
path: data/FreeForm_40-*
- split: FreeForm_41
path: data/FreeForm_41-*
- split: FreeForm_42
path: data/FreeForm_42-*
- split: FreeForm_43
path: data/FreeForm_43-*
- split: FreeForm_44
path: data/FreeForm_44-*
- split: FreeForm_45
path: data/FreeForm_45-*
- split: FreeForm_46
path: data/FreeForm_46-*
- split: FreeForm_47
path: data/FreeForm_47-*
- split: FreeForm_48
path: data/FreeForm_48-*
- split: FreeForm_49
path: data/FreeForm_49-*
- split: FreeForm_50
path: data/FreeForm_50-*
- split: FreeForm_51
path: data/FreeForm_51-*
- split: FreeForm_52
path: data/FreeForm_52-*
- split: FreeForm_53
path: data/FreeForm_53-*
- split: FreeForm_54
path: data/FreeForm_54-*
- split: FreeForm_55
path: data/FreeForm_55-*
- split: FreeForm_56
path: data/FreeForm_56-*
- split: FreeForm_57
path: data/FreeForm_57-*
- split: FreeForm_58
path: data/FreeForm_58-*
- split: FreeForm_59
path: data/FreeForm_59-*
- split: FreeForm_60
path: data/FreeForm_60-*
- split: FreeForm_61
path: data/FreeForm_61-*
- split: FreeForm_62
path: data/FreeForm_62-*
- split: FreeForm_63
path: data/FreeForm_63-*
- split: FreeForm_64
path: data/FreeForm_64-*
- split: FreeForm_65
path: data/FreeForm_65-*
- split: FreeForm_66
path: data/FreeForm_66-*
- split: FreeForm_67
path: data/FreeForm_67-*
- split: FreeForm_68
path: data/FreeForm_68-*
- split: FreeForm_69
path: data/FreeForm_69-*
- split: FreeForm_70
path: data/FreeForm_70-*
- split: FreeForm_71
path: data/FreeForm_71-*
- split: FreeForm_72
path: data/FreeForm_72-*
- split: FreeForm_73
path: data/FreeForm_73-*
- split: FreeForm_74
path: data/FreeForm_74-*
- split: FreeForm_75
path: data/FreeForm_75-*
- split: FreeForm_76
path: data/FreeForm_76-*
- split: FreeForm_77
path: data/FreeForm_77-*
- split: FreeForm_78
path: data/FreeForm_78-*
- split: FreeForm_79
path: data/FreeForm_79-*
- split: FreeForm_80
path: data/FreeForm_80-*
- split: FreeForm_81
path: data/FreeForm_81-*
- split: FreeForm_82
path: data/FreeForm_82-*
- split: FreeForm_83
path: data/FreeForm_83-*
- split: FreeForm_84
path: data/FreeForm_84-*
- split: FreeForm_85
path: data/FreeForm_85-*
- split: FreeForm_86
path: data/FreeForm_86-*
- split: FreeForm_87
path: data/FreeForm_87-*
- split: FreeForm_88
path: data/FreeForm_88-*
- split: FreeForm_89
path: data/FreeForm_89-*
- split: FreeForm_90
path: data/FreeForm_90-*
- split: FreeForm_91
path: data/FreeForm_91-*
- split: FreeForm_92
path: data/FreeForm_92-*
- split: FreeForm_93
path: data/FreeForm_93-*
- split: FreeForm_94
path: data/FreeForm_94-*
- split: FreeForm_95
path: data/FreeForm_95-*
- split: FreeForm_96
path: data/FreeForm_96-*
- split: FreeForm_97
path: data/FreeForm_97-*
- split: FreeForm_98
path: data/FreeForm_98-*
- split: FreeForm_99
path: data/FreeForm_99-*
- split: FreeForm_100
path: data/FreeForm_100-*
- split: FreeForm_101
path: data/FreeForm_101-*
- split: FreeForm_102
path: data/FreeForm_102-*
- split: FreeForm_103
path: data/FreeForm_103-*
- split: FreeForm_104
path: data/FreeForm_104-*
- split: FreeForm_105
path: data/FreeForm_105-*
- split: FreeForm_106
path: data/FreeForm_106-*
- split: FreeForm_107
path: data/FreeForm_107-*
- split: FreeForm_108
path: data/FreeForm_108-*
- split: FreeForm_109
path: data/FreeForm_109-*
- split: FreeForm_110
path: data/FreeForm_110-*
- split: FreeForm_111
path: data/FreeForm_111-*
- split: FreeForm_112
path: data/FreeForm_112-*
- split: FreeForm_113
path: data/FreeForm_113-*
- split: FreeForm_114
path: data/FreeForm_114-*
- split: FreeForm_115
path: data/FreeForm_115-*
- split: FreeForm_116
path: data/FreeForm_116-*
- split: FreeForm_117
path: data/FreeForm_117-*
- split: FreeForm_118
path: data/FreeForm_118-*
- split: FreeForm_119
path: data/FreeForm_119-*
- split: FreeForm_120
path: data/FreeForm_120-*
- split: FreeForm_121
path: data/FreeForm_121-*
- split: FreeForm_122
path: data/FreeForm_122-*
- split: FreeForm_123
path: data/FreeForm_123-*
- split: FreeForm_124
path: data/FreeForm_124-*
- split: FreeForm_125
path: data/FreeForm_125-*
- split: FreeForm_126
path: data/FreeForm_126-*
- split: FreeForm_127
path: data/FreeForm_127-*
- split: FreeForm_128
path: data/FreeForm_128-*
- split: FreeForm_129
path: data/FreeForm_129-*
- split: FreeForm_130
path: data/FreeForm_130-*
- split: FreeForm_131
path: data/FreeForm_131-*
- split: FreeForm_132
path: data/FreeForm_132-*
- split: FreeForm_133
path: data/FreeForm_133-*
- split: FreeForm_134
path: data/FreeForm_134-*
- split: FreeForm_135
path: data/FreeForm_135-*
- split: FreeForm_136
path: data/FreeForm_136-*
- split: FreeForm_137
path: data/FreeForm_137-*
- split: FreeForm_138
path: data/FreeForm_138-*
- split: FreeForm_139
path: data/FreeForm_139-*
- split: FreeForm_140
path: data/FreeForm_140-*
- split: FreeForm_141
path: data/FreeForm_141-*
- split: FreeForm_142
path: data/FreeForm_142-*
- split: FreeForm_143
path: data/FreeForm_143-*
- split: FreeForm_144
path: data/FreeForm_144-*
- split: FreeForm_145
path: data/FreeForm_145-*
- split: FreeForm_146
path: data/FreeForm_146-*
- split: FreeForm_147
path: data/FreeForm_147-*
- split: FreeForm_148
path: data/FreeForm_148-*
- split: FreeForm_149
path: data/FreeForm_149-*
- split: FreeForm_150
path: data/FreeForm_150-*
- split: FreeForm_151
path: data/FreeForm_151-*
- split: FreeForm_152
path: data/FreeForm_152-*
- split: FreeForm_153
path: data/FreeForm_153-*
- split: FreeForm_154
path: data/FreeForm_154-*
- split: FreeForm_155
path: data/FreeForm_155-*
- split: FreeForm_156
path: data/FreeForm_156-*
- split: FreeForm_157
path: data/FreeForm_157-*
- split: FreeForm_158
path: data/FreeForm_158-*
- split: FreeForm_159
path: data/FreeForm_159-*
- split: FreeForm_160
path: data/FreeForm_160-*
- split: FreeForm_161
path: data/FreeForm_161-*
- split: FreeForm_162
path: data/FreeForm_162-*
- split: FreeForm_163
path: data/FreeForm_163-*
- split: FreeForm_164
path: data/FreeForm_164-*
- split: FreeForm_165
path: data/FreeForm_165-*
- split: FreeForm_166
path: data/FreeForm_166-*
- split: FreeForm_167
path: data/FreeForm_167-*
- split: FreeForm_168
path: data/FreeForm_168-*
- split: FreeForm_169
path: data/FreeForm_169-*
- split: FreeForm_170
path: data/FreeForm_170-*
- split: FreeForm_171
path: data/FreeForm_171-*
- split: FreeForm_172
path: data/FreeForm_172-*
- split: FreeForm_173
path: data/FreeForm_173-*
- split: FreeForm_174
path: data/FreeForm_174-*
- split: FreeForm_175
path: data/FreeForm_175-*
- split: FreeForm_176
path: data/FreeForm_176-*
- split: FreeForm_177
path: data/FreeForm_177-*
- split: FreeForm_178
path: data/FreeForm_178-*
- split: FreeForm_179
path: data/FreeForm_179-*
- split: FreeForm_180
path: data/FreeForm_180-*
- split: FreeForm_181
path: data/FreeForm_181-*
- split: FreeForm_182
path: data/FreeForm_182-*
- split: FreeForm_183
path: data/FreeForm_183-*
- split: FreeForm_184
path: data/FreeForm_184-*
- split: FreeForm_185
path: data/FreeForm_185-*
- split: FreeForm_186
path: data/FreeForm_186-*
- split: FreeForm_187
path: data/FreeForm_187-*
- split: FreeForm_188
path: data/FreeForm_188-*
- split: FreeForm_189
path: data/FreeForm_189-*
- split: FreeForm_190
path: data/FreeForm_190-*
- split: FreeForm_191
path: data/FreeForm_191-*
- split: FreeForm_192
path: data/FreeForm_192-*
- split: FreeForm_193
path: data/FreeForm_193-*
- split: FreeForm_194
path: data/FreeForm_194-*
- split: FreeForm_195
path: data/FreeForm_195-*
- split: FreeForm_196
path: data/FreeForm_196-*
- split: FreeForm_197
path: data/FreeForm_197-*
- split: FreeForm_198
path: data/FreeForm_198-*
- split: FreeForm_199
path: data/FreeForm_199-*
- split: FreeForm_200
path: data/FreeForm_200-*
- split: FreeForm_201
path: data/FreeForm_201-*
- split: FreeForm_202
path: data/FreeForm_202-*
- split: FreeForm_203
path: data/FreeForm_203-*
- split: FreeForm_204
path: data/FreeForm_204-*
- split: FreeForm_205
path: data/FreeForm_205-*
- split: FreeForm_206
path: data/FreeForm_206-*
- split: FreeForm_207
path: data/FreeForm_207-*
- split: FreeForm_208
path: data/FreeForm_208-*
- split: FreeForm_209
path: data/FreeForm_209-*
- split: FreeForm_210
path: data/FreeForm_210-*
- split: FreeForm_211
path: data/FreeForm_211-*
- split: FreeForm_212
path: data/FreeForm_212-*
- split: FreeForm_213
path: data/FreeForm_213-*
- split: FreeForm_214
path: data/FreeForm_214-*
- split: FreeForm_215
path: data/FreeForm_215-*
- split: FreeForm_216
path: data/FreeForm_216-*
- split: FreeForm_217
path: data/FreeForm_217-*
- split: FreeForm_218
path: data/FreeForm_218-*
- split: FreeForm_219
path: data/FreeForm_219-*
- split: FreeForm_220
path: data/FreeForm_220-*
- split: FreeForm_221
path: data/FreeForm_221-*
- split: FreeForm_222
path: data/FreeForm_222-*
- split: FreeForm_223
path: data/FreeForm_223-*
- split: FreeForm_224
path: data/FreeForm_224-*
- split: FreeForm_225
path: data/FreeForm_225-*
- split: FreeForm_226
path: data/FreeForm_226-*
- split: FreeForm_227
path: data/FreeForm_227-*
- split: FreeForm_228
path: data/FreeForm_228-*
- split: FreeForm_229
path: data/FreeForm_229-*
- split: FreeForm_230
path: data/FreeForm_230-*
- split: FreeForm_231
path: data/FreeForm_231-*
- split: FreeForm_232
path: data/FreeForm_232-*
- split: FreeForm_233
path: data/FreeForm_233-*
- split: FreeForm_234
path: data/FreeForm_234-*
- split: FreeForm_235
path: data/FreeForm_235-*
- split: FreeForm_236
path: data/FreeForm_236-*
- split: FreeForm_237
path: data/FreeForm_237-*
- split: FreeForm_238
path: data/FreeForm_238-*
- split: FreeForm_239
path: data/FreeForm_239-*
- split: FreeForm_240
path: data/FreeForm_240-*
- split: FreeForm_241
path: data/FreeForm_241-*
- split: FreeForm_242
path: data/FreeForm_242-*
- split: FreeForm_243
path: data/FreeForm_243-*
- split: FreeForm_244
path: data/FreeForm_244-*
- split: FreeForm_245
path: data/FreeForm_245-*
- split: FreeForm_246
path: data/FreeForm_246-*
- split: FreeForm_247
path: data/FreeForm_247-*
- split: FreeForm_248
path: data/FreeForm_248-*
- split: FreeForm_249
path: data/FreeForm_249-*
- split: FreeForm_250
path: data/FreeForm_250-*
- split: FreeForm_251
path: data/FreeForm_251-*
- split: FreeForm_252
path: data/FreeForm_252-*
- split: FreeForm_253
path: data/FreeForm_253-*
- split: FreeForm_254
path: data/FreeForm_254-*
- split: FreeForm_255
path: data/FreeForm_255-*
- split: FreeForm_256
path: data/FreeForm_256-*
- split: FreeForm_257
path: data/FreeForm_257-*
- split: FreeForm_258
path: data/FreeForm_258-*
- split: FreeForm_259
path: data/FreeForm_259-*
- split: FreeForm_260
path: data/FreeForm_260-*
- split: FreeForm_261
path: data/FreeForm_261-*
- split: FreeForm_262
path: data/FreeForm_262-*
- split: FreeForm_263
path: data/FreeForm_263-*
- split: FreeForm_264
path: data/FreeForm_264-*
- split: FreeForm_265
path: data/FreeForm_265-*
- split: FreeForm_266
path: data/FreeForm_266-*
- split: FreeForm_267
path: data/FreeForm_267-*
- split: FreeForm_268
path: data/FreeForm_268-*
- split: FreeForm_269
path: data/FreeForm_269-*
- split: FreeForm_270
path: data/FreeForm_270-*
- split: FreeForm_271
path: data/FreeForm_271-*
- split: FreeForm_272
path: data/FreeForm_272-*
- split: FreeForm_273
path: data/FreeForm_273-*
- split: FreeForm_274
path: data/FreeForm_274-*
- split: FreeForm_275
path: data/FreeForm_275-*
- split: FreeForm_276
path: data/FreeForm_276-*
- split: FreeForm_277
path: data/FreeForm_277-*
- split: FreeForm_278
path: data/FreeForm_278-*
- split: FreeForm_279
path: data/FreeForm_279-*
- split: FreeForm_280
path: data/FreeForm_280-*
- split: FreeForm_281
path: data/FreeForm_281-*
- split: FreeForm_282
path: data/FreeForm_282-*
- split: FreeForm_283
path: data/FreeForm_283-*
- split: FreeForm_284
path: data/FreeForm_284-*
- split: FreeForm_285
path: data/FreeForm_285-*
- split: FreeForm_286
path: data/FreeForm_286-*
- split: FreeForm_287
path: data/FreeForm_287-*
- split: FreeForm_288
path: data/FreeForm_288-*
- split: FreeForm_289
path: data/FreeForm_289-*
- split: FreeForm_290
path: data/FreeForm_290-*
- split: FreeForm_291
path: data/FreeForm_291-*
- split: FreeForm_292
path: data/FreeForm_292-*
- split: FreeForm_293
path: data/FreeForm_293-*
- split: FreeForm_294
path: data/FreeForm_294-*
- split: FreeForm_295
path: data/FreeForm_295-*
- split: FreeForm_296
path: data/FreeForm_296-*
- split: FreeForm_297
path: data/FreeForm_297-*
- split: FreeForm_298
path: data/FreeForm_298-*
- split: FreeForm_299
path: data/FreeForm_299-*
- split: FreeForm_300
path: data/FreeForm_300-*
- split: FreeForm_301
path: data/FreeForm_301-*
- split: FreeForm_302
path: data/FreeForm_302-*
- split: FreeForm_303
path: data/FreeForm_303-*
- split: FreeForm_304
path: data/FreeForm_304-*
- split: FreeForm_305
path: data/FreeForm_305-*
- split: FreeForm_306
path: data/FreeForm_306-*
- split: FreeForm_307
path: data/FreeForm_307-*
- split: FreeForm_308
path: data/FreeForm_308-*
- split: FreeForm_309
path: data/FreeForm_309-*
- split: FreeForm_310
path: data/FreeForm_310-*
- split: FreeForm_311
path: data/FreeForm_311-*
- split: FreeForm_312
path: data/FreeForm_312-*
- split: FreeForm_313
path: data/FreeForm_313-*
- split: FreeForm_314
path: data/FreeForm_314-*
- split: FreeForm_315
path: data/FreeForm_315-*
- split: FreeForm_316
path: data/FreeForm_316-*
- split: FreeForm_317
path: data/FreeForm_317-*
- split: FreeForm_318
path: data/FreeForm_318-*
- split: FreeForm_319
path: data/FreeForm_319-*
- split: FreeForm_320
path: data/FreeForm_320-*
- split: FreeForm_321
path: data/FreeForm_321-*
- split: FreeForm_322
path: data/FreeForm_322-*
- split: FreeForm_323
path: data/FreeForm_323-*
- split: FreeForm_324
path: data/FreeForm_324-*
- split: FreeForm_325
path: data/FreeForm_325-*
- split: FreeForm_326
path: data/FreeForm_326-*
- split: FreeForm_327
path: data/FreeForm_327-*
- split: FreeForm_328
path: data/FreeForm_328-*
- split: FreeForm_329
path: data/FreeForm_329-*
- split: FreeForm_330
path: data/FreeForm_330-*
- split: FreeForm_331
path: data/FreeForm_331-*
- split: FreeForm_332
path: data/FreeForm_332-*
- split: FreeForm_333
path: data/FreeForm_333-*
- split: FreeForm_334
path: data/FreeForm_334-*
- split: FreeForm_335
path: data/FreeForm_335-*
- split: FreeForm_336
path: data/FreeForm_336-*
- split: FreeForm_337
path: data/FreeForm_337-*
- split: FreeForm_338
path: data/FreeForm_338-*
- split: FreeForm_339
path: data/FreeForm_339-*
- split: FreeForm_340
path: data/FreeForm_340-*
- split: FreeForm_341
path: data/FreeForm_341-*
- split: FreeForm_342
path: data/FreeForm_342-*
- split: FreeForm_343
path: data/FreeForm_343-*
- split: FreeForm_344
path: data/FreeForm_344-*
- split: FreeForm_345
path: data/FreeForm_345-*
- split: FreeForm_346
path: data/FreeForm_346-*
- split: FreeForm_347
path: data/FreeForm_347-*
- split: FreeForm_348
path: data/FreeForm_348-*
- split: FreeForm_349
path: data/FreeForm_349-*
- split: FreeForm_350
path: data/FreeForm_350-*
- split: FreeForm_351
path: data/FreeForm_351-*
- split: FreeForm_352
path: data/FreeForm_352-*
- split: FreeForm_353
path: data/FreeForm_353-*
- split: FreeForm_354
path: data/FreeForm_354-*
- split: FreeForm_355
path: data/FreeForm_355-*
- split: FreeForm_356
path: data/FreeForm_356-*
- split: FreeForm_357
path: data/FreeForm_357-*
- split: FreeForm_358
path: data/FreeForm_358-*
- split: FreeForm_359
path: data/FreeForm_359-*
- split: FreeForm_360
path: data/FreeForm_360-*
- split: FreeForm_361
path: data/FreeForm_361-*
- split: FreeForm_362
path: data/FreeForm_362-*
- split: FreeForm_363
path: data/FreeForm_363-*
- split: FreeForm_364
path: data/FreeForm_364-*
- split: FreeForm_365
path: data/FreeForm_365-*
- split: FreeForm_366
path: data/FreeForm_366-*
- split: FreeForm_367
path: data/FreeForm_367-*
- split: FreeForm_368
path: data/FreeForm_368-*
- split: FreeForm_369
path: data/FreeForm_369-*
- split: FreeForm_370
path: data/FreeForm_370-*
- split: FreeForm_371
path: data/FreeForm_371-*
- split: FreeForm_372
path: data/FreeForm_372-*
- split: FreeForm_373
path: data/FreeForm_373-*
- split: FreeForm_374
path: data/FreeForm_374-*
- split: FreeForm_375
path: data/FreeForm_375-*
- split: FreeForm_376
path: data/FreeForm_376-*
- split: FreeForm_377
path: data/FreeForm_377-*
- split: FreeForm_378
path: data/FreeForm_378-*
- split: FreeForm_379
path: data/FreeForm_379-*
- split: FreeForm_380
path: data/FreeForm_380-*
- split: FreeForm_381
path: data/FreeForm_381-*
- split: FreeForm_382
path: data/FreeForm_382-*
- split: FreeForm_383
path: data/FreeForm_383-*
- split: FreeForm_384
path: data/FreeForm_384-*
- split: FreeForm_385
path: data/FreeForm_385-*
- split: FreeForm_386
path: data/FreeForm_386-*
- split: FreeForm_387
path: data/FreeForm_387-*
- split: FreeForm_388
path: data/FreeForm_388-*
- split: FreeForm_389
path: data/FreeForm_389-*
- split: FreeForm_390
path: data/FreeForm_390-*
- split: FreeForm_391
path: data/FreeForm_391-*
- split: FreeForm_392
path: data/FreeForm_392-*
- split: FreeForm_393
path: data/FreeForm_393-*
- split: FreeForm_394
path: data/FreeForm_394-*
- split: FreeForm_395
path: data/FreeForm_395-*
- split: FreeForm_396
path: data/FreeForm_396-*
- split: FreeForm_397
path: data/FreeForm_397-*
- split: FreeForm_398
path: data/FreeForm_398-*
- split: FreeForm_399
path: data/FreeForm_399-*
- split: FreeForm_400
path: data/FreeForm_400-*
- split: FreeForm_401
path: data/FreeForm_401-*
- split: FreeForm_402
path: data/FreeForm_402-*
- split: FreeForm_403
path: data/FreeForm_403-*
- split: FreeForm_404
path: data/FreeForm_404-*
- split: FreeForm_405
path: data/FreeForm_405-*
- split: FreeForm_406
path: data/FreeForm_406-*
- split: FreeForm_407
path: data/FreeForm_407-*
- split: FreeForm_408
path: data/FreeForm_408-*
- split: FreeForm_409
path: data/FreeForm_409-*
- split: FreeForm_410
path: data/FreeForm_410-*
- split: FreeForm_411
path: data/FreeForm_411-*
- split: FreeForm_412
path: data/FreeForm_412-*
- split: FreeForm_413
path: data/FreeForm_413-*
- split: FreeForm_414
path: data/FreeForm_414-*
- split: FreeForm_415
path: data/FreeForm_415-*
- split: FreeForm_416
path: data/FreeForm_416-*
- split: FreeForm_417
path: data/FreeForm_417-*
- split: FreeForm_418
path: data/FreeForm_418-*
- split: FreeForm_419
path: data/FreeForm_419-*
- split: FreeForm_420
path: data/FreeForm_420-*
- split: FreeForm_421
path: data/FreeForm_421-*
- split: FreeForm_422
path: data/FreeForm_422-*
- split: FreeForm_423
path: data/FreeForm_423-*
- split: FreeForm_424
path: data/FreeForm_424-*
- split: FreeForm_425
path: data/FreeForm_425-*
- split: FreeForm_426
path: data/FreeForm_426-*
- split: FreeForm_427
path: data/FreeForm_427-*
- split: FreeForm_428
path: data/FreeForm_428-*
- split: FreeForm_429
path: data/FreeForm_429-*
- split: FreeForm_430
path: data/FreeForm_430-*
- split: FreeForm_431
path: data/FreeForm_431-*
- split: FreeForm_432
path: data/FreeForm_432-*
- split: FreeForm_433
path: data/FreeForm_433-*
- split: FreeForm_434
path: data/FreeForm_434-*
- split: FreeForm_435
path: data/FreeForm_435-*
- split: FreeForm_436
path: data/FreeForm_436-*
- split: FreeForm_437
path: data/FreeForm_437-*
- split: FreeForm_438
path: data/FreeForm_438-*
- split: FreeForm_439
path: data/FreeForm_439-*
- split: FreeForm_440
path: data/FreeForm_440-*
- split: FreeForm_441
path: data/FreeForm_441-*
- split: FreeForm_442
path: data/FreeForm_442-*
- split: FreeForm_443
path: data/FreeForm_443-*
- split: FreeForm_444
path: data/FreeForm_444-*
- split: FreeForm_445
path: data/FreeForm_445-*
- split: FreeForm_446
path: data/FreeForm_446-*
- split: FreeForm_447
path: data/FreeForm_447-*
- split: FreeForm_448
path: data/FreeForm_448-*
- split: FreeForm_449
path: data/FreeForm_449-*
- split: FreeForm_450
path: data/FreeForm_450-*
- split: FreeForm_451
path: data/FreeForm_451-*
- split: FreeForm_452
path: data/FreeForm_452-*
- split: FreeForm_453
path: data/FreeForm_453-*
- split: FreeForm_454
path: data/FreeForm_454-*
- split: FreeForm_455
path: data/FreeForm_455-*
- split: FreeForm_456
path: data/FreeForm_456-*
- split: FreeForm_457
path: data/FreeForm_457-*
- split: FreeForm_458
path: data/FreeForm_458-*
- split: FreeForm_459
path: data/FreeForm_459-*
- split: FreeForm_460
path: data/FreeForm_460-*
- split: FreeForm_461
path: data/FreeForm_461-*
- split: FreeForm_462
path: data/FreeForm_462-*
- split: FreeForm_463
path: data/FreeForm_463-*
- split: FreeForm_464
path: data/FreeForm_464-*
- split: FreeForm_465
path: data/FreeForm_465-*
- split: FreeForm_466
path: data/FreeForm_466-*
- split: FreeForm_467
path: data/FreeForm_467-*
- split: FreeForm_468
path: data/FreeForm_468-*
- split: FreeForm_469
path: data/FreeForm_469-*
- split: FreeForm_470
path: data/FreeForm_470-*
- split: FreeForm_471
path: data/FreeForm_471-*
- split: FreeForm_472
path: data/FreeForm_472-*
- split: FreeForm_473
path: data/FreeForm_473-*
- split: FreeForm_474
path: data/FreeForm_474-*
- split: FreeForm_475
path: data/FreeForm_475-*
- split: FreeForm_476
path: data/FreeForm_476-*
- split: FreeForm_477
path: data/FreeForm_477-*
- split: FreeForm_478
path: data/FreeForm_478-*
- split: FreeForm_479
path: data/FreeForm_479-*
- split: FreeForm_480
path: data/FreeForm_480-*
- split: FreeForm_481
path: data/FreeForm_481-*
- split: FreeForm_482
path: data/FreeForm_482-*
- split: FreeForm_483
path: data/FreeForm_483-*
- split: FreeForm_484
path: data/FreeForm_484-*
- split: FreeForm_485
path: data/FreeForm_485-*
- split: FreeForm_486
path: data/FreeForm_486-*
- split: FreeForm_487
path: data/FreeForm_487-*
- split: FreeForm_488
path: data/FreeForm_488-*
- split: FreeForm_489
path: data/FreeForm_489-*
- split: FreeForm_490
path: data/FreeForm_490-*
- split: FreeForm_491
path: data/FreeForm_491-*
- split: FreeForm_492
path: data/FreeForm_492-*
- split: FreeForm_493
path: data/FreeForm_493-*
- split: FreeForm_494
path: data/FreeForm_494-*
- split: FreeForm_495
path: data/FreeForm_495-*
- split: FreeForm_496
path: data/FreeForm_496-*
- split: FreeForm_497
path: data/FreeForm_497-*
- split: FreeForm_498
path: data/FreeForm_498-*
- split: FreeForm_499
path: data/FreeForm_499-*
- split: FreeForm_500
path: data/FreeForm_500-*
- split: FreeForm_501
path: data/FreeForm_501-*
- split: FreeForm_502
path: data/FreeForm_502-*
- split: FreeForm_503
path: data/FreeForm_503-*
- split: FreeForm_504
path: data/FreeForm_504-*
- split: FreeForm_505
path: data/FreeForm_505-*
- split: FreeForm_506
path: data/FreeForm_506-*
- split: FreeForm_507
path: data/FreeForm_507-*
- split: FreeForm_508
path: data/FreeForm_508-*
- split: FreeForm_509
path: data/FreeForm_509-*
- split: FreeForm_510
path: data/FreeForm_510-*
- split: FreeForm_511
path: data/FreeForm_511-*
- split: FreeForm_512
path: data/FreeForm_512-*
- split: FreeForm_513
path: data/FreeForm_513-*
- split: FreeForm_514
path: data/FreeForm_514-*
- split: FreeForm_515
path: data/FreeForm_515-*
- split: FreeForm_945
path: data/FreeForm_945-*
- split: FreeForm_819
path: data/FreeForm_819-*
- split: FreeForm_756
path: data/FreeForm_756-*
- split: FreeForm_693
path: data/FreeForm_693-*
- split: FreeForm_567
path: data/FreeForm_567-*
- split: FreeForm_516
path: data/FreeForm_516-*
- split: FreeForm_630
path: data/FreeForm_630-*
- split: FreeForm_694
path: data/FreeForm_694-*
- split: FreeForm_757
path: data/FreeForm_757-*
- split: FreeForm_882
path: data/FreeForm_882-*
- split: FreeForm_517
path: data/FreeForm_517-*
- split: FreeForm_568
path: data/FreeForm_568-*
- split: FreeForm_695
path: data/FreeForm_695-*
- split: FreeForm_883
path: data/FreeForm_883-*
- split: FreeForm_946
path: data/FreeForm_946-*
- split: FreeForm_758
path: data/FreeForm_758-*
- split: FreeForm_820
path: data/FreeForm_820-*
- split: FreeForm_518
path: data/FreeForm_518-*
- split: FreeForm_696
path: data/FreeForm_696-*
- split: FreeForm_631
path: data/FreeForm_631-*
- split: FreeForm_884
path: data/FreeForm_884-*
- split: FreeForm_947
path: data/FreeForm_947-*
- split: FreeForm_570
path: data/FreeForm_570-*
- split: FreeForm_759
path: data/FreeForm_759-*
- split: FreeForm_519
path: data/FreeForm_519-*
- split: FreeForm_821
path: data/FreeForm_821-*
- split: FreeForm_697
path: data/FreeForm_697-*
- split: FreeForm_885
path: data/FreeForm_885-*
- split: FreeForm_520
path: data/FreeForm_520-*
- split: FreeForm_632
path: data/FreeForm_632-*
- split: FreeForm_760
path: data/FreeForm_760-*
- split: FreeForm_571
path: data/FreeForm_571-*
- split: FreeForm_948
path: data/FreeForm_948-*
- split: FreeForm_886
path: data/FreeForm_886-*
- split: FreeForm_822
path: data/FreeForm_822-*
- split: FreeForm_698
path: data/FreeForm_698-*
- split: FreeForm_521
path: data/FreeForm_521-*
- split: FreeForm_761
path: data/FreeForm_761-*
- split: FreeForm_633
path: data/FreeForm_633-*
- split: FreeForm_949
path: data/FreeForm_949-*
- split: FreeForm_823
path: data/FreeForm_823-*
- split: FreeForm_572
path: data/FreeForm_572-*
- split: FreeForm_699
path: data/FreeForm_699-*
- split: FreeForm_522
path: data/FreeForm_522-*
- split: FreeForm_762
path: data/FreeForm_762-*
- split: FreeForm_950
path: data/FreeForm_950-*
- split: FreeForm_824
path: data/FreeForm_824-*
- split: FreeForm_700
path: data/FreeForm_700-*
- split: FreeForm_523
path: data/FreeForm_523-*
- split: FreeForm_634
path: data/FreeForm_634-*
- split: FreeForm_763
path: data/FreeForm_763-*
- split: FreeForm_951
path: data/FreeForm_951-*
- split: FreeForm_889
path: data/FreeForm_889-*
- split: FreeForm_701
path: data/FreeForm_701-*
- split: FreeForm_635
path: data/FreeForm_635-*
- split: FreeForm_764
path: data/FreeForm_764-*
- split: FreeForm_952
path: data/FreeForm_952-*
- split: FreeForm_525
path: data/FreeForm_525-*
- split: FreeForm_890
path: data/FreeForm_890-*
- split: FreeForm_636
path: data/FreeForm_636-*
- split: FreeForm_826
path: data/FreeForm_826-*
- split: FreeForm_765
path: data/FreeForm_765-*
- split: FreeForm_953
path: data/FreeForm_953-*
- split: FreeForm_526
path: data/FreeForm_526-*
- split: FreeForm_576
path: data/FreeForm_576-*
- split: FreeForm_637
path: data/FreeForm_637-*
- split: FreeForm_891
path: data/FreeForm_891-*
- split: FreeForm_703
path: data/FreeForm_703-*
- split: FreeForm_527
path: data/FreeForm_527-*
- split: FreeForm_704
path: data/FreeForm_704-*
- split: FreeForm_577
path: data/FreeForm_577-*
- split: FreeForm_828
path: data/FreeForm_828-*
- split: FreeForm_767
path: data/FreeForm_767-*
- split: FreeForm_892
path: data/FreeForm_892-*
- split: FreeForm_955
path: data/FreeForm_955-*
- split: FreeForm_528
path: data/FreeForm_528-*
- split: FreeForm_705
path: data/FreeForm_705-*
- split: FreeForm_768
path: data/FreeForm_768-*
- split: FreeForm_829
path: data/FreeForm_829-*
- split: FreeForm_639
path: data/FreeForm_639-*
- split: FreeForm_893
path: data/FreeForm_893-*
- split: FreeForm_706
path: data/FreeForm_706-*
- split: FreeForm_769
path: data/FreeForm_769-*
- split: FreeForm_640
path: data/FreeForm_640-*
- split: FreeForm_830
path: data/FreeForm_830-*
- split: FreeForm_894
path: data/FreeForm_894-*
- split: FreeForm_957
path: data/FreeForm_957-*
- split: FreeForm_707
path: data/FreeForm_707-*
- split: FreeForm_530
path: data/FreeForm_530-*
- split: FreeForm_770
path: data/FreeForm_770-*
- split: FreeForm_641
path: data/FreeForm_641-*
- split: FreeForm_831
path: data/FreeForm_831-*
- split: FreeForm_958
path: data/FreeForm_958-*
- split: FreeForm_895
path: data/FreeForm_895-*
- split: FreeForm_578
path: data/FreeForm_578-*
- split: FreeForm_642
path: data/FreeForm_642-*
- split: FreeForm_832
path: data/FreeForm_832-*
- split: FreeForm_959
path: data/FreeForm_959-*
- split: FreeForm_896
path: data/FreeForm_896-*
- split: FreeForm_532
path: data/FreeForm_532-*
- split: FreeForm_579
path: data/FreeForm_579-*
- split: FreeForm_772
path: data/FreeForm_772-*
- split: FreeForm_897
path: data/FreeForm_897-*
- split: FreeForm_833
path: data/FreeForm_833-*
- split: FreeForm_533
path: data/FreeForm_533-*
- split: FreeForm_580
path: data/FreeForm_580-*
- split: FreeForm_644
path: data/FreeForm_644-*
- split: FreeForm_898
path: data/FreeForm_898-*
- split: FreeForm_834
path: data/FreeForm_834-*
- split: FreeForm_534
path: data/FreeForm_534-*
- split: FreeForm_774
path: data/FreeForm_774-*
- split: FreeForm_962
path: data/FreeForm_962-*
- split: FreeForm_835
path: data/FreeForm_835-*
- split: FreeForm_899
path: data/FreeForm_899-*
- split: FreeForm_581
path: data/FreeForm_581-*
- split: FreeForm_645
path: data/FreeForm_645-*
- split: FreeForm_535
path: data/FreeForm_535-*
- split: FreeForm_711
path: data/FreeForm_711-*
- split: FreeForm_775
path: data/FreeForm_775-*
- split: FreeForm_536
path: data/FreeForm_536-*
- split: FreeForm_836
path: data/FreeForm_836-*
- split: FreeForm_963
path: data/FreeForm_963-*
- split: FreeForm_900
path: data/FreeForm_900-*
- split: FreeForm_582
path: data/FreeForm_582-*
- split: FreeForm_537
path: data/FreeForm_537-*
- split: FreeForm_647
path: data/FreeForm_647-*
- split: FreeForm_837
path: data/FreeForm_837-*
- split: FreeForm_964
path: data/FreeForm_964-*
- split: FreeForm_583
path: data/FreeForm_583-*
- split: FreeForm_648
path: data/FreeForm_648-*
- split: FreeForm_714
path: data/FreeForm_714-*
- split: FreeForm_902
path: data/FreeForm_902-*
- split: FreeForm_966
path: data/FreeForm_966-*
- split: FreeForm_839
path: data/FreeForm_839-*
- split: FreeForm_840
path: data/FreeForm_840-*
- split: FreeForm_780
path: data/FreeForm_780-*
- split: FreeForm_905
path: data/FreeForm_905-*
- split: FreeForm_781
path: data/FreeForm_781-*
- split: FreeForm_542
path: data/FreeForm_542-*
- split: FreeForm_717
path: data/FreeForm_717-*
- split: FreeForm_587
path: data/FreeForm_587-*
- split: FreeForm_906
path: data/FreeForm_906-*
- split: FreeForm_782
path: data/FreeForm_782-*
- split: FreeForm_543
path: data/FreeForm_543-*
- split: FreeForm_970
path: data/FreeForm_970-*
- split: FreeForm_653
path: data/FreeForm_653-*
- split: FreeForm_907
path: data/FreeForm_907-*
- split: FreeForm_843
path: data/FreeForm_843-*
- split: FreeForm_588
path: data/FreeForm_588-*
- split: FreeForm_718
path: data/FreeForm_718-*
- split: FreeForm_783
path: data/FreeForm_783-*
- split: FreeForm_544
path: data/FreeForm_544-*
- split: FreeForm_971
path: data/FreeForm_971-*
- split: FreeForm_908
path: data/FreeForm_908-*
- split: FreeForm_654
path: data/FreeForm_654-*
- split: FreeForm_844
path: data/FreeForm_844-*
- split: FreeForm_719
path: data/FreeForm_719-*
- split: FreeForm_784
path: data/FreeForm_784-*
- split: FreeForm_545
path: data/FreeForm_545-*
- split: FreeForm_972
path: data/FreeForm_972-*
- split: FreeForm_909
path: data/FreeForm_909-*
- split: FreeForm_845
path: data/FreeForm_845-*
- split: FreeForm_785
path: data/FreeForm_785-*
- split: FreeForm_546
path: data/FreeForm_546-*
- split: FreeForm_656
path: data/FreeForm_656-*
- split: FreeForm_973
path: data/FreeForm_973-*
- split: FreeForm_547
path: data/FreeForm_547-*
- split: FreeForm_592
path: data/FreeForm_592-*
- split: FreeForm_657
path: data/FreeForm_657-*
- split: FreeForm_787
path: data/FreeForm_787-*
- split: FreeForm_847
path: data/FreeForm_847-*
- split: FreeForm_593
path: data/FreeForm_593-*
- split: FreeForm_848
path: data/FreeForm_848-*
- split: FreeForm_788
path: data/FreeForm_788-*
- split: FreeForm_723
path: data/FreeForm_723-*
- split: FreeForm_659
path: data/FreeForm_659-*
- split: FreeForm_849
path: data/FreeForm_849-*
- split: FreeForm_594
path: data/FreeForm_594-*
- split: FreeForm_789
path: data/FreeForm_789-*
- split: FreeForm_913
path: data/FreeForm_913-*
- split: FreeForm_660
path: data/FreeForm_660-*
- split: FreeForm_595
path: data/FreeForm_595-*
- split: FreeForm_790
path: data/FreeForm_790-*
- split: FreeForm_977
path: data/FreeForm_977-*
- split: FreeForm_914
path: data/FreeForm_914-*
- split: FreeForm_851
path: data/FreeForm_851-*
- split: FreeForm_552
path: data/FreeForm_552-*
- split: FreeForm_597
path: data/FreeForm_597-*
- split: FreeForm_852
path: data/FreeForm_852-*
- split: FreeForm_662
path: data/FreeForm_662-*
- split: FreeForm_726
path: data/FreeForm_726-*
- split: FreeForm_553
path: data/FreeForm_553-*
- split: FreeForm_598
path: data/FreeForm_598-*
- split: FreeForm_853
path: data/FreeForm_853-*
- split: FreeForm_916
path: data/FreeForm_916-*
- split: FreeForm_663
path: data/FreeForm_663-*
- split: FreeForm_979
path: data/FreeForm_979-*
- split: FreeForm_554
path: data/FreeForm_554-*
- split: FreeForm_555
path: data/FreeForm_555-*
- split: FreeForm_600
path: data/FreeForm_600-*
- split: FreeForm_556
path: data/FreeForm_556-*
- split: FreeForm_981
path: data/FreeForm_981-*
- split: FreeForm_918
path: data/FreeForm_918-*
- split: FreeForm_855
path: data/FreeForm_855-*
- split: FreeForm_601
path: data/FreeForm_601-*
- split: FreeForm_557
path: data/FreeForm_557-*
- split: FreeForm_982
path: data/FreeForm_982-*
- split: FreeForm_919
path: data/FreeForm_919-*
- split: FreeForm_666
path: data/FreeForm_666-*
- split: FreeForm_730
path: data/FreeForm_730-*
- split: FreeForm_558
path: data/FreeForm_558-*
- split: FreeForm_796
path: data/FreeForm_796-*
- split: FreeForm_920
path: data/FreeForm_920-*
- split: FreeForm_603
path: data/FreeForm_603-*
- split: FreeForm_797
path: data/FreeForm_797-*
- split: FreeForm_560
path: data/FreeForm_560-*
- split: FreeForm_798
path: data/FreeForm_798-*
- split: FreeForm_799
path: data/FreeForm_799-*
- split: FreeForm_605
path: data/FreeForm_605-*
- split: FreeForm_986
path: data/FreeForm_986-*
- split: FreeForm_987
path: data/FreeForm_987-*
- split: FreeForm_735
path: data/FreeForm_735-*
- split: FreeForm_924
path: data/FreeForm_924-*
- split: FreeForm_801
path: data/FreeForm_801-*
- split: FreeForm_988
path: data/FreeForm_988-*
- split: FreeForm_607
path: data/FreeForm_607-*
- split: FreeForm_736
path: data/FreeForm_736-*
- split: FreeForm_672
path: data/FreeForm_672-*
- split: FreeForm_925
path: data/FreeForm_925-*
- split: FreeForm_564
path: data/FreeForm_564-*
- split: FreeForm_608
path: data/FreeForm_608-*
- split: FreeForm_737
path: data/FreeForm_737-*
- split: FreeForm_673
path: data/FreeForm_673-*
- split: FreeForm_803
path: data/FreeForm_803-*
- split: FreeForm_926
path: data/FreeForm_926-*
- split: FreeForm_863
path: data/FreeForm_863-*
- split: FreeForm_738
path: data/FreeForm_738-*
- split: FreeForm_674
path: data/FreeForm_674-*
- split: FreeForm_804
path: data/FreeForm_804-*
- split: FreeForm_927
path: data/FreeForm_927-*
- split: FreeForm_864
path: data/FreeForm_864-*
- split: FreeForm_675
path: data/FreeForm_675-*
- split: FreeForm_805
path: data/FreeForm_805-*
- split: FreeForm_611
path: data/FreeForm_611-*
- split: FreeForm_928
path: data/FreeForm_928-*
- split: FreeForm_676
path: data/FreeForm_676-*
- split: FreeForm_865
path: data/FreeForm_865-*
- split: FreeForm_806
path: data/FreeForm_806-*
- split: FreeForm_929
path: data/FreeForm_929-*
- split: FreeForm_993
path: data/FreeForm_993-*
- split: FreeForm_866
path: data/FreeForm_866-*
- split: FreeForm_678
path: data/FreeForm_678-*
- split: FreeForm_930
path: data/FreeForm_930-*
- split: FreeForm_994
path: data/FreeForm_994-*
- split: FreeForm_867
path: data/FreeForm_867-*
- split: FreeForm_807
path: data/FreeForm_807-*
- split: FreeForm_1011
path: data/FreeForm_1011-*
- split: FreeForm_931
path: data/FreeForm_931-*
- split: FreeForm_808
path: data/FreeForm_808-*
- split: FreeForm_743
path: data/FreeForm_743-*
- split: FreeForm_995
path: data/FreeForm_995-*
- split: FreeForm_809
path: data/FreeForm_809-*
- split: FreeForm_1012
path: data/FreeForm_1012-*
- split: FreeForm_869
path: data/FreeForm_869-*
- split: FreeForm_810
path: data/FreeForm_810-*
- split: FreeForm_616
path: data/FreeForm_616-*
- split: FreeForm_870
path: data/FreeForm_870-*
- split: FreeForm_933
path: data/FreeForm_933-*
- split: FreeForm_811
path: data/FreeForm_811-*
- split: FreeForm_617
path: data/FreeForm_617-*
- split: FreeForm_1014
path: data/FreeForm_1014-*
- split: FreeForm_934
path: data/FreeForm_934-*
- split: FreeForm_871
path: data/FreeForm_871-*
- split: FreeForm_682
path: data/FreeForm_682-*
- split: FreeForm_812
path: data/FreeForm_812-*
- split: FreeForm_1015
path: data/FreeForm_1015-*
- split: FreeForm_747
path: data/FreeForm_747-*
- split: FreeForm_683
path: data/FreeForm_683-*
- split: FreeForm_872
path: data/FreeForm_872-*
- split: FreeForm_1016
path: data/FreeForm_1016-*
- split: FreeForm_619
path: data/FreeForm_619-*
- split: FreeForm_748
path: data/FreeForm_748-*
- split: FreeForm_996
path: data/FreeForm_996-*
- split: FreeForm_936
path: data/FreeForm_936-*
- split: FreeForm_873
path: data/FreeForm_873-*
- split: FreeForm_814
path: data/FreeForm_814-*
- split: FreeForm_620
path: data/FreeForm_620-*
- split: FreeForm_937
path: data/FreeForm_937-*
- split: FreeForm_874
path: data/FreeForm_874-*
- split: FreeForm_815
path: data/FreeForm_815-*
- split: FreeForm_685
path: data/FreeForm_685-*
- split: FreeForm_750
path: data/FreeForm_750-*
- split: FreeForm_998
path: data/FreeForm_998-*
- split: FreeForm_938
path: data/FreeForm_938-*
- split: FreeForm_816
path: data/FreeForm_816-*
- split: FreeForm_622
path: data/FreeForm_622-*
- split: FreeForm_751
path: data/FreeForm_751-*
- split: FreeForm_876
path: data/FreeForm_876-*
- split: FreeForm_939
path: data/FreeForm_939-*
- split: FreeForm_817
path: data/FreeForm_817-*
- split: FreeForm_752
path: data/FreeForm_752-*
- split: FreeForm_1020
path: data/FreeForm_1020-*
- split: FreeForm_624
path: data/FreeForm_624-*
- split: FreeForm_1001
path: data/FreeForm_1001-*
- split: FreeForm_1071
path: data/FreeForm_1071-*
- split: FreeForm_1072
path: data/FreeForm_1072-*
- split: FreeForm_1022
path: data/FreeForm_1022-*
- split: FreeForm_755
path: data/FreeForm_755-*
- split: FreeForm_626
path: data/FreeForm_626-*
- split: FreeForm_690
path: data/FreeForm_690-*
- split: FreeForm_1003
path: data/FreeForm_1003-*
- split: FreeForm_1023
path: data/FreeForm_1023-*
- split: FreeForm_880
path: data/FreeForm_880-*
- split: FreeForm_627
path: data/FreeForm_627-*
- split: FreeForm_1004
path: data/FreeForm_1004-*
- split: FreeForm_1074
path: data/FreeForm_1074-*
- split: FreeForm_1024
path: data/FreeForm_1024-*
- split: FreeForm_944
path: data/FreeForm_944-*
- split: FreeForm_881
path: data/FreeForm_881-*
- split: FreeForm_1135
path: data/FreeForm_1135-*
- split: FreeForm_692
path: data/FreeForm_692-*
- split: FreeForm_1075
path: data/FreeForm_1075-*
- split: FreeForm_1025
path: data/FreeForm_1025-*
- split: FreeForm_1197
path: data/FreeForm_1197-*
- split: FreeForm_1260
path: data/FreeForm_1260-*
- split: FreeForm_629
path: data/FreeForm_629-*
- split: FreeForm_1136
path: data/FreeForm_1136-*
- split: FreeForm_1006
path: data/FreeForm_1006-*
- split: FreeForm_1261
path: data/FreeForm_1261-*
- split: FreeForm_1198
path: data/FreeForm_1198-*
- split: FreeForm_1386
path: data/FreeForm_1386-*
- split: FreeForm_1137
path: data/FreeForm_1137-*
- split: FreeForm_1007
path: data/FreeForm_1007-*
- split: FreeForm_1077
path: data/FreeForm_1077-*
- split: FreeForm_1262
path: data/FreeForm_1262-*
- split: FreeForm_1324
path: data/FreeForm_1324-*
- split: FreeForm_1387
path: data/FreeForm_1387-*
- split: FreeForm_1138
path: data/FreeForm_1138-*
- split: FreeForm_1449
path: data/FreeForm_1449-*
- split: FreeForm_1200
path: data/FreeForm_1200-*
- split: FreeForm_1388
path: data/FreeForm_1388-*
- split: FreeForm_1078
path: data/FreeForm_1078-*
- split: FreeForm_1139
path: data/FreeForm_1139-*
- split: FreeForm_1450
path: data/FreeForm_1450-*
- split: FreeForm_1326
path: data/FreeForm_1326-*
- split: FreeForm_1201
path: data/FreeForm_1201-*
- split: FreeForm_1389
path: data/FreeForm_1389-*
- split: FreeForm_1264
path: data/FreeForm_1264-*
- split: FreeForm_1140
path: data/FreeForm_1140-*
- split: FreeForm_1451
path: data/FreeForm_1451-*
- split: FreeForm_1327
path: data/FreeForm_1327-*
- split: FreeForm_1202
path: data/FreeForm_1202-*
- split: FreeForm_1030
path: data/FreeForm_1030-*
- split: FreeForm_1390
path: data/FreeForm_1390-*
- split: FreeForm_1080
path: data/FreeForm_1080-*
- split: FreeForm_1141
path: data/FreeForm_1141-*
- split: FreeForm_1452
path: data/FreeForm_1452-*
- split: FreeForm_1328
path: data/FreeForm_1328-*
- split: FreeForm_1203
path: data/FreeForm_1203-*
- split: FreeForm_1391
path: data/FreeForm_1391-*
- split: FreeForm_1142
path: data/FreeForm_1142-*
- split: FreeForm_1329
path: data/FreeForm_1329-*
- split: FreeForm_1032
path: data/FreeForm_1032-*
- split: FreeForm_1392
path: data/FreeForm_1392-*
- split: FreeForm_1143
path: data/FreeForm_1143-*
- split: FreeForm_1266
path: data/FreeForm_1266-*
- split: FreeForm_1454
path: data/FreeForm_1454-*
- split: FreeForm_1205
path: data/FreeForm_1205-*
- split: FreeForm_1033
path: data/FreeForm_1033-*
- split: FreeForm_1331
path: data/FreeForm_1331-*
- split: FreeForm_1455
path: data/FreeForm_1455-*
- split: FreeForm_1084
path: data/FreeForm_1084-*
- split: FreeForm_1394
path: data/FreeForm_1394-*
- split: FreeForm_1034
path: data/FreeForm_1034-*
- split: FreeForm_1332
path: data/FreeForm_1332-*
- split: FreeForm_1456
path: data/FreeForm_1456-*
- split: FreeForm_1268
path: data/FreeForm_1268-*
- split: FreeForm_1207
path: data/FreeForm_1207-*
- split: FreeForm_1395
path: data/FreeForm_1395-*
- split: FreeForm_1035
path: data/FreeForm_1035-*
- split: FreeForm_1333
path: data/FreeForm_1333-*
- split: FreeForm_1457
path: data/FreeForm_1457-*
- split: FreeForm_1086
path: data/FreeForm_1086-*
- split: FreeForm_1147
path: data/FreeForm_1147-*
- split: FreeForm_1396
path: data/FreeForm_1396-*
- split: FreeForm_1334
path: data/FreeForm_1334-*
- split: FreeForm_1458
path: data/FreeForm_1458-*
- split: FreeForm_1087
path: data/FreeForm_1087-*
- split: FreeForm_1148
path: data/FreeForm_1148-*
- split: FreeForm_1397
path: data/FreeForm_1397-*
- split: FreeForm_1335
path: data/FreeForm_1335-*
- split: FreeForm_1459
path: data/FreeForm_1459-*
- split: FreeForm_1271
path: data/FreeForm_1271-*
- split: FreeForm_1149
path: data/FreeForm_1149-*
- split: FreeForm_1210
path: data/FreeForm_1210-*
- split: FreeForm_1150
path: data/FreeForm_1150-*
- split: FreeForm_1272
path: data/FreeForm_1272-*
- split: FreeForm_1461
path: data/FreeForm_1461-*
- split: FreeForm_1151
path: data/FreeForm_1151-*
- split: FreeForm_1273
path: data/FreeForm_1273-*
- split: FreeForm_1212
path: data/FreeForm_1212-*
- split: FreeForm_1090
path: data/FreeForm_1090-*
- split: FreeForm_1400
path: data/FreeForm_1400-*
- split: FreeForm_1152
path: data/FreeForm_1152-*
- split: FreeForm_1274
path: data/FreeForm_1274-*
- split: FreeForm_1091
path: data/FreeForm_1091-*
- split: FreeForm_1401
path: data/FreeForm_1401-*
- split: FreeForm_1153
path: data/FreeForm_1153-*
- split: FreeForm_1275
path: data/FreeForm_1275-*
- split: FreeForm_1214
path: data/FreeForm_1214-*
- split: FreeForm_1464
path: data/FreeForm_1464-*
- split: FreeForm_1340
path: data/FreeForm_1340-*
- split: FreeForm_1043
path: data/FreeForm_1043-*
- split: FreeForm_1276
path: data/FreeForm_1276-*
- split: FreeForm_1403
path: data/FreeForm_1403-*
- split: FreeForm_1215
path: data/FreeForm_1215-*
- split: FreeForm_1093
path: data/FreeForm_1093-*
- split: FreeForm_1044
path: data/FreeForm_1044-*
- split: FreeForm_1277
path: data/FreeForm_1277-*
- split: FreeForm_1216
path: data/FreeForm_1216-*
- split: FreeForm_1094
path: data/FreeForm_1094-*
- split: FreeForm_1278
path: data/FreeForm_1278-*
- split: FreeForm_1217
path: data/FreeForm_1217-*
- split: FreeForm_1405
path: data/FreeForm_1405-*
- split: FreeForm_1467
path: data/FreeForm_1467-*
- split: FreeForm_1157
path: data/FreeForm_1157-*
- split: FreeForm_1406
path: data/FreeForm_1406-*
- split: FreeForm_1343
path: data/FreeForm_1343-*
- split: FreeForm_1218
path: data/FreeForm_1218-*
- split: FreeForm_1468
path: data/FreeForm_1468-*
- split: FreeForm_1158
path: data/FreeForm_1158-*
- split: FreeForm_1407
path: data/FreeForm_1407-*
- split: FreeForm_1344
path: data/FreeForm_1344-*
- split: FreeForm_1047
path: data/FreeForm_1047-*
- split: FreeForm_1219
path: data/FreeForm_1219-*
- split: FreeForm_1469
path: data/FreeForm_1469-*
- split: FreeForm_1345
path: data/FreeForm_1345-*
- split: FreeForm_1281
path: data/FreeForm_1281-*
- split: FreeForm_1220
path: data/FreeForm_1220-*
- split: FreeForm_1048
path: data/FreeForm_1048-*
- split: FreeForm_1098
path: data/FreeForm_1098-*
- split: FreeForm_1160
path: data/FreeForm_1160-*
- split: FreeForm_1346
path: data/FreeForm_1346-*
- split: FreeForm_1282
path: data/FreeForm_1282-*
- split: FreeForm_1471
path: data/FreeForm_1471-*
- split: FreeForm_1410
path: data/FreeForm_1410-*
- split: FreeForm_1472
path: data/FreeForm_1472-*
- split: FreeForm_1284
path: data/FreeForm_1284-*
- split: FreeForm_1348
path: data/FreeForm_1348-*
- split: FreeForm_1223
path: data/FreeForm_1223-*
- split: FreeForm_1163
path: data/FreeForm_1163-*
- split: FreeForm_1473
path: data/FreeForm_1473-*
- split: FreeForm_1285
path: data/FreeForm_1285-*
- split: FreeForm_1349
path: data/FreeForm_1349-*
- split: FreeForm_1101
path: data/FreeForm_1101-*
- split: FreeForm_1224
path: data/FreeForm_1224-*
- split: FreeForm_1164
path: data/FreeForm_1164-*
- split: FreeForm_1413
path: data/FreeForm_1413-*
- split: FreeForm_1225
path: data/FreeForm_1225-*
- split: FreeForm_1286
path: data/FreeForm_1286-*
- split: FreeForm_1165
path: data/FreeForm_1165-*
- split: FreeForm_1414
path: data/FreeForm_1414-*
- split: FreeForm_1053
path: data/FreeForm_1053-*
- split: FreeForm_1287
path: data/FreeForm_1287-*
- split: FreeForm_1351
path: data/FreeForm_1351-*
- split: FreeForm_1166
path: data/FreeForm_1166-*
- split: FreeForm_1415
path: data/FreeForm_1415-*
- split: FreeForm_1227
path: data/FreeForm_1227-*
- split: FreeForm_1054
path: data/FreeForm_1054-*
- split: FreeForm_1167
path: data/FreeForm_1167-*
- split: FreeForm_1288
path: data/FreeForm_1288-*
- split: FreeForm_1476
path: data/FreeForm_1476-*
- split: FreeForm_1416
path: data/FreeForm_1416-*
- split: FreeForm_1228
path: data/FreeForm_1228-*
- split: FreeForm_1168
path: data/FreeForm_1168-*
- split: FreeForm_1353
path: data/FreeForm_1353-*
- split: FreeForm_1477
path: data/FreeForm_1477-*
- split: FreeForm_1105
path: data/FreeForm_1105-*
- split: FreeForm_1417
path: data/FreeForm_1417-*
- split: FreeForm_1229
path: data/FreeForm_1229-*
- split: FreeForm_1056
path: data/FreeForm_1056-*
- split: FreeForm_1354
path: data/FreeForm_1354-*
- split: FreeForm_1230
path: data/FreeForm_1230-*
- split: FreeForm_1057
path: data/FreeForm_1057-*
- split: FreeForm_1170
path: data/FreeForm_1170-*
- split: FreeForm_1291
path: data/FreeForm_1291-*
- split: FreeForm_1107
path: data/FreeForm_1107-*
- split: FreeForm_1419
path: data/FreeForm_1419-*
- split: FreeForm_1479
path: data/FreeForm_1479-*
- split: FreeForm_1231
path: data/FreeForm_1231-*
- split: FreeForm_1058
path: data/FreeForm_1058-*
- split: FreeForm_1171
path: data/FreeForm_1171-*
- split: FreeForm_1420
path: data/FreeForm_1420-*
- split: FreeForm_1232
path: data/FreeForm_1232-*
- split: FreeForm_1059
path: data/FreeForm_1059-*
- split: FreeForm_1293
path: data/FreeForm_1293-*
- split: FreeForm_1357
path: data/FreeForm_1357-*
- split: FreeForm_1481
path: data/FreeForm_1481-*
- split: FreeForm_1060
path: data/FreeForm_1060-*
- split: FreeForm_1294
path: data/FreeForm_1294-*
- split: FreeForm_1173
path: data/FreeForm_1173-*
- split: FreeForm_1358
path: data/FreeForm_1358-*
- split: FreeForm_1061
path: data/FreeForm_1061-*
- split: FreeForm_1234
path: data/FreeForm_1234-*
- split: FreeForm_1295
path: data/FreeForm_1295-*
- split: FreeForm_1359
path: data/FreeForm_1359-*
- split: FreeForm_1062
path: data/FreeForm_1062-*
- split: FreeForm_1296
path: data/FreeForm_1296-*
- split: FreeForm_1297
path: data/FreeForm_1297-*
- split: FreeForm_1112
path: data/FreeForm_1112-*
- split: FreeForm_1484
path: data/FreeForm_1484-*
- split: FreeForm_1064
path: data/FreeForm_1064-*
- split: FreeForm_1298
path: data/FreeForm_1298-*
- split: FreeForm_1113
path: data/FreeForm_1113-*
- split: FreeForm_1177
path: data/FreeForm_1177-*
- split: FreeForm_1362
path: data/FreeForm_1362-*
- split: FreeForm_1485
path: data/FreeForm_1485-*
- split: FreeForm_1363
path: data/FreeForm_1363-*
- split: FreeForm_1238
path: data/FreeForm_1238-*
- split: FreeForm_1066
path: data/FreeForm_1066-*
- split: FreeForm_1364
path: data/FreeForm_1364-*
- split: FreeForm_1300
path: data/FreeForm_1300-*
- split: FreeForm_1179
path: data/FreeForm_1179-*
- split: FreeForm_1365
path: data/FreeForm_1365-*
- split: FreeForm_1301
path: data/FreeForm_1301-*
- split: FreeForm_1180
path: data/FreeForm_1180-*
- split: FreeForm_1068
path: data/FreeForm_1068-*
- split: FreeForm_1116
path: data/FreeForm_1116-*
- split: FreeForm_1423
path: data/FreeForm_1423-*
- split: FreeForm_1366
path: data/FreeForm_1366-*
- split: FreeForm_1118
path: data/FreeForm_1118-*
- split: FreeForm_1242
path: data/FreeForm_1242-*
- split: FreeForm_1368
path: data/FreeForm_1368-*
- split: FreeForm_1183
path: data/FreeForm_1183-*
- split: FreeForm_1304
path: data/FreeForm_1304-*
- split: FreeForm_1490
path: data/FreeForm_1490-*
- split: FreeForm_1512
path: data/FreeForm_1512-*
- split: FreeForm_1244
path: data/FreeForm_1244-*
- split: FreeForm_1120
path: data/FreeForm_1120-*
- split: FreeForm_1370
path: data/FreeForm_1370-*
- split: FreeForm_1492
path: data/FreeForm_1492-*
- split: FreeForm_1245
path: data/FreeForm_1245-*
- split: FreeForm_1493
path: data/FreeForm_1493-*
- split: FreeForm_1307
path: data/FreeForm_1307-*
- split: FreeForm_1515
path: data/FreeForm_1515-*
- split: FreeForm_1246
path: data/FreeForm_1246-*
- split: FreeForm_1372
path: data/FreeForm_1372-*
- split: FreeForm_1122
path: data/FreeForm_1122-*
- split: FreeForm_1494
path: data/FreeForm_1494-*
- split: FreeForm_1516
path: data/FreeForm_1516-*
- split: FreeForm_1247
path: data/FreeForm_1247-*
- split: FreeForm_1373
path: data/FreeForm_1373-*
- split: FreeForm_1123
path: data/FreeForm_1123-*
- split: FreeForm_1424
path: data/FreeForm_1424-*
- split: FreeForm_1495
path: data/FreeForm_1495-*
- split: FreeForm_1188
path: data/FreeForm_1188-*
- split: FreeForm_1517
path: data/FreeForm_1517-*
- split: FreeForm_1124
path: data/FreeForm_1124-*
- split: FreeForm_1496
path: data/FreeForm_1496-*
- split: FreeForm_1189
path: data/FreeForm_1189-*
- split: FreeForm_1518
path: data/FreeForm_1518-*
- split: FreeForm_1375
path: data/FreeForm_1375-*
- split: FreeForm_1249
path: data/FreeForm_1249-*
- split: FreeForm_1125
path: data/FreeForm_1125-*
- split: FreeForm_1190
path: data/FreeForm_1190-*
- split: FreeForm_1519
path: data/FreeForm_1519-*
- split: FreeForm_1376
path: data/FreeForm_1376-*
- split: FreeForm_1250
path: data/FreeForm_1250-*
- split: FreeForm_1126
path: data/FreeForm_1126-*
- split: FreeForm_1520
path: data/FreeForm_1520-*
- split: FreeForm_1312
path: data/FreeForm_1312-*
- split: FreeForm_1498
path: data/FreeForm_1498-*
- split: FreeForm_1377
path: data/FreeForm_1377-*
- split: FreeForm_1251
path: data/FreeForm_1251-*
- split: FreeForm_1127
path: data/FreeForm_1127-*
- split: FreeForm_1521
path: data/FreeForm_1521-*
- split: FreeForm_1313
path: data/FreeForm_1313-*
- split: FreeForm_1378
path: data/FreeForm_1378-*
- split: FreeForm_1128
path: data/FreeForm_1128-*
- split: FreeForm_1522
path: data/FreeForm_1522-*
- split: FreeForm_1314
path: data/FreeForm_1314-*
- split: FreeForm_1523
path: data/FreeForm_1523-*
- split: FreeForm_1315
path: data/FreeForm_1315-*
- split: FreeForm_1380
path: data/FreeForm_1380-*
- split: FreeForm_1427
path: data/FreeForm_1427-*
- split: FreeForm_1524
path: data/FreeForm_1524-*
- split: FreeForm_1194
path: data/FreeForm_1194-*
- split: FreeForm_1381
path: data/FreeForm_1381-*
- split: FreeForm_1428
path: data/FreeForm_1428-*
- split: FreeForm_1255
path: data/FreeForm_1255-*
- split: FreeForm_1525
path: data/FreeForm_1525-*
- split: FreeForm_1195
path: data/FreeForm_1195-*
- split: FreeForm_1429
path: data/FreeForm_1429-*
- split: FreeForm_1382
path: data/FreeForm_1382-*
- split: FreeForm_1256
path: data/FreeForm_1256-*
- split: FreeForm_1526
path: data/FreeForm_1526-*
- split: FreeForm_1196
path: data/FreeForm_1196-*
- split: FreeForm_1430
path: data/FreeForm_1430-*
- split: FreeForm_1383
path: data/FreeForm_1383-*
- split: FreeForm_1257
path: data/FreeForm_1257-*
- split: FreeForm_1318
path: data/FreeForm_1318-*
- split: FreeForm_1504
path: data/FreeForm_1504-*
- split: FreeForm_1431
path: data/FreeForm_1431-*
- split: FreeForm_1384
path: data/FreeForm_1384-*
- split: FreeForm_1258
path: data/FreeForm_1258-*
- split: FreeForm_1528
path: data/FreeForm_1528-*
- split: FreeForm_1319
path: data/FreeForm_1319-*
- split: FreeForm_1505
path: data/FreeForm_1505-*
- split: FreeForm_1576
path: data/FreeForm_1576-*
- split: FreeForm_1432
path: data/FreeForm_1432-*
- split: FreeForm_1385
path: data/FreeForm_1385-*
- split: FreeForm_1701
path: data/FreeForm_1701-*
- split: FreeForm_1639
path: data/FreeForm_1639-*
- split: FreeForm_1530
path: data/FreeForm_1530-*
- split: FreeForm_1321
path: data/FreeForm_1321-*
- split: FreeForm_1507
path: data/FreeForm_1507-*
- split: FreeForm_1702
path: data/FreeForm_1702-*
- split: FreeForm_1434
path: data/FreeForm_1434-*
- split: FreeForm_1640
path: data/FreeForm_1640-*
- split: FreeForm_1531
path: data/FreeForm_1531-*
- split: FreeForm_1508
path: data/FreeForm_1508-*
- split: FreeForm_1435
path: data/FreeForm_1435-*
- split: FreeForm_1766
path: data/FreeForm_1766-*
- split: FreeForm_1579
path: data/FreeForm_1579-*
- split: FreeForm_1641
path: data/FreeForm_1641-*
- split: FreeForm_1827
path: data/FreeForm_1827-*
- split: FreeForm_1436
path: data/FreeForm_1436-*
- split: FreeForm_1704
path: data/FreeForm_1704-*
- split: FreeForm_1642
path: data/FreeForm_1642-*
- split: FreeForm_1828
path: data/FreeForm_1828-*
- split: FreeForm_1437
path: data/FreeForm_1437-*
- split: FreeForm_1581
path: data/FreeForm_1581-*
- split: FreeForm_1643
path: data/FreeForm_1643-*
- split: FreeForm_1534
path: data/FreeForm_1534-*
- split: FreeForm_1511
path: data/FreeForm_1511-*
- split: FreeForm_1707
path: data/FreeForm_1707-*
- split: FreeForm_1583
path: data/FreeForm_1583-*
- split: FreeForm_1770
path: data/FreeForm_1770-*
- split: FreeForm_1536
path: data/FreeForm_1536-*
- split: FreeForm_1891
path: data/FreeForm_1891-*
- split: FreeForm_1645
path: data/FreeForm_1645-*
- split: FreeForm_1831
path: data/FreeForm_1831-*
- split: FreeForm_1585
path: data/FreeForm_1585-*
- split: FreeForm_1538
path: data/FreeForm_1538-*
- split: FreeForm_1893
path: data/FreeForm_1893-*
- split: FreeForm_1442
path: data/FreeForm_1442-*
- split: FreeForm_1586
path: data/FreeForm_1586-*
- split: FreeForm_1648
path: data/FreeForm_1648-*
- split: FreeForm_1711
path: data/FreeForm_1711-*
- split: FreeForm_1443
path: data/FreeForm_1443-*
- split: FreeForm_1773
path: data/FreeForm_1773-*
- split: FreeForm_1540
path: data/FreeForm_1540-*
- split: FreeForm_1649
path: data/FreeForm_1649-*
- split: FreeForm_1712
path: data/FreeForm_1712-*
- split: FreeForm_1895
path: data/FreeForm_1895-*
- split: FreeForm_1444
path: data/FreeForm_1444-*
- split: FreeForm_1774
path: data/FreeForm_1774-*
- split: FreeForm_1541
path: data/FreeForm_1541-*
- split: FreeForm_1835
path: data/FreeForm_1835-*
- split: FreeForm_1588
path: data/FreeForm_1588-*
- split: FreeForm_1445
path: data/FreeForm_1445-*
- split: FreeForm_1896
path: data/FreeForm_1896-*
- split: FreeForm_1542
path: data/FreeForm_1542-*
- split: FreeForm_1775
path: data/FreeForm_1775-*
- split: FreeForm_1589
path: data/FreeForm_1589-*
- split: FreeForm_1714
path: data/FreeForm_1714-*
- split: FreeForm_1897
path: data/FreeForm_1897-*
- split: FreeForm_1543
path: data/FreeForm_1543-*
- split: FreeForm_1590
path: data/FreeForm_1590-*
- split: FreeForm_1715
path: data/FreeForm_1715-*
- split: FreeForm_1447
path: data/FreeForm_1447-*
- split: FreeForm_1591
path: data/FreeForm_1591-*
- split: FreeForm_1544
path: data/FreeForm_1544-*
- split: FreeForm_1838
path: data/FreeForm_1838-*
- split: FreeForm_1716
path: data/FreeForm_1716-*
- split: FreeForm_1448
path: data/FreeForm_1448-*
- split: FreeForm_1545
path: data/FreeForm_1545-*
- split: FreeForm_1592
path: data/FreeForm_1592-*
- split: FreeForm_1717
path: data/FreeForm_1717-*
- split: FreeForm_1953
path: data/FreeForm_1953-*
- split: FreeForm_1900
path: data/FreeForm_1900-*
- split: FreeForm_1779
path: data/FreeForm_1779-*
- split: FreeForm_1954
path: data/FreeForm_1954-*
- split: FreeForm_1901
path: data/FreeForm_1901-*
- split: FreeForm_1594
path: data/FreeForm_1594-*
- split: FreeForm_1719
path: data/FreeForm_1719-*
- split: FreeForm_1841
path: data/FreeForm_1841-*
- split: FreeForm_1548
path: data/FreeForm_1548-*
- split: FreeForm_1595
path: data/FreeForm_1595-*
- split: FreeForm_1720
path: data/FreeForm_1720-*
- split: FreeForm_1842
path: data/FreeForm_1842-*
- split: FreeForm_1656
path: data/FreeForm_1656-*
- split: FreeForm_1781
path: data/FreeForm_1781-*
- split: FreeForm_1721
path: data/FreeForm_1721-*
- split: FreeForm_1657
path: data/FreeForm_1657-*
- split: FreeForm_1782
path: data/FreeForm_1782-*
- split: FreeForm_1904
path: data/FreeForm_1904-*
- split: FreeForm_1597
path: data/FreeForm_1597-*
- split: FreeForm_1844
path: data/FreeForm_1844-*
- split: FreeForm_1957
path: data/FreeForm_1957-*
- split: FreeForm_1551
path: data/FreeForm_1551-*
- split: FreeForm_1905
path: data/FreeForm_1905-*
- split: FreeForm_1598
path: data/FreeForm_1598-*
- split: FreeForm_1723
path: data/FreeForm_1723-*
- split: FreeForm_1659
path: data/FreeForm_1659-*
- split: FreeForm_1552
path: data/FreeForm_1552-*
- split: FreeForm_1784
path: data/FreeForm_1784-*
- split: FreeForm_1599
path: data/FreeForm_1599-*
- split: FreeForm_1724
path: data/FreeForm_1724-*
- split: FreeForm_1660
path: data/FreeForm_1660-*
- split: FreeForm_1725
path: data/FreeForm_1725-*
- split: FreeForm_1960
path: data/FreeForm_1960-*
- split: FreeForm_1661
path: data/FreeForm_1661-*
- split: FreeForm_1554
path: data/FreeForm_1554-*
- split: FreeForm_1847
path: data/FreeForm_1847-*
- split: FreeForm_1726
path: data/FreeForm_1726-*
- split: FreeForm_1601
path: data/FreeForm_1601-*
- split: FreeForm_1908
path: data/FreeForm_1908-*
- split: FreeForm_1662
path: data/FreeForm_1662-*
- split: FreeForm_1848
path: data/FreeForm_1848-*
- split: FreeForm_1602
path: data/FreeForm_1602-*
- split: FreeForm_1909
path: data/FreeForm_1909-*
- split: FreeForm_1603
path: data/FreeForm_1603-*
- split: FreeForm_1910
path: data/FreeForm_1910-*
- split: FreeForm_1557
path: data/FreeForm_1557-*
- split: FreeForm_1604
path: data/FreeForm_1604-*
- split: FreeForm_1789
path: data/FreeForm_1789-*
- split: FreeForm_1558
path: data/FreeForm_1558-*
- split: FreeForm_1665
path: data/FreeForm_1665-*
- split: FreeForm_1605
path: data/FreeForm_1605-*
- split: FreeForm_1852
path: data/FreeForm_1852-*
- split: FreeForm_1791
path: data/FreeForm_1791-*
- split: FreeForm_1667
path: data/FreeForm_1667-*
- split: FreeForm_1607
path: data/FreeForm_1607-*
- split: FreeForm_1913
path: data/FreeForm_1913-*
- split: FreeForm_1732
path: data/FreeForm_1732-*
- split: FreeForm_1669
path: data/FreeForm_1669-*
- split: FreeForm_1609
path: data/FreeForm_1609-*
- split: FreeForm_1562
path: data/FreeForm_1562-*
- split: FreeForm_1915
path: data/FreeForm_1915-*
- split: FreeForm_1968
path: data/FreeForm_1968-*
- split: FreeForm_1734
path: data/FreeForm_1734-*
- split: FreeForm_1855
path: data/FreeForm_1855-*
- split: FreeForm_1670
path: data/FreeForm_1670-*
- split: FreeForm_1610
path: data/FreeForm_1610-*
- split: FreeForm_1969
path: data/FreeForm_1969-*
- split: FreeForm_1795
path: data/FreeForm_1795-*
- split: FreeForm_1671
path: data/FreeForm_1671-*
- split: FreeForm_1611
path: data/FreeForm_1611-*
- split: FreeForm_1917
path: data/FreeForm_1917-*
- split: FreeForm_1564
path: data/FreeForm_1564-*
- split: FreeForm_1970
path: data/FreeForm_1970-*
- split: FreeForm_1796
path: data/FreeForm_1796-*
- split: FreeForm_1857
path: data/FreeForm_1857-*
- split: FreeForm_1672
path: data/FreeForm_1672-*
- split: FreeForm_1565
path: data/FreeForm_1565-*
- split: FreeForm_1971
path: data/FreeForm_1971-*
- split: FreeForm_1673
path: data/FreeForm_1673-*
- split: FreeForm_1797
path: data/FreeForm_1797-*
- split: FreeForm_1972
path: data/FreeForm_1972-*
- split: FreeForm_1566
path: data/FreeForm_1566-*
- split: FreeForm_1674
path: data/FreeForm_1674-*
- split: FreeForm_1859
path: data/FreeForm_1859-*
- split: FreeForm_1738
path: data/FreeForm_1738-*
- split: FreeForm_1567
path: data/FreeForm_1567-*
- split: FreeForm_1799
path: data/FreeForm_1799-*
- split: FreeForm_1614
path: data/FreeForm_1614-*
- split: FreeForm_1860
path: data/FreeForm_1860-*
- split: FreeForm_1568
path: data/FreeForm_1568-*
- split: FreeForm_1740
path: data/FreeForm_1740-*
- split: FreeForm_1676
path: data/FreeForm_1676-*
- split: FreeForm_1974
path: data/FreeForm_1974-*
- split: FreeForm_1741
path: data/FreeForm_1741-*
- split: FreeForm_1923
path: data/FreeForm_1923-*
- split: FreeForm_1742
path: data/FreeForm_1742-*
- split: FreeForm_1617
path: data/FreeForm_1617-*
- split: FreeForm_1924
path: data/FreeForm_1924-*
- split: FreeForm_1743
path: data/FreeForm_1743-*
- split: FreeForm_1803
path: data/FreeForm_1803-*
- split: FreeForm_1679
path: data/FreeForm_1679-*
- split: FreeForm_1864
path: data/FreeForm_1864-*
- split: FreeForm_1744
path: data/FreeForm_1744-*
- split: FreeForm_1804
path: data/FreeForm_1804-*
- split: FreeForm_1865
path: data/FreeForm_1865-*
- split: FreeForm_1978
path: data/FreeForm_1978-*
- split: FreeForm_1745
path: data/FreeForm_1745-*
- split: FreeForm_1573
path: data/FreeForm_1573-*
- split: FreeForm_1805
path: data/FreeForm_1805-*
- split: FreeForm_1620
path: data/FreeForm_1620-*
- split: FreeForm_1681
path: data/FreeForm_1681-*
- split: FreeForm_1927
path: data/FreeForm_1927-*
- split: FreeForm_1979
path: data/FreeForm_1979-*
- split: FreeForm_1746
path: data/FreeForm_1746-*
- split: FreeForm_1574
path: data/FreeForm_1574-*
- split: FreeForm_1867
path: data/FreeForm_1867-*
- split: FreeForm_1621
path: data/FreeForm_1621-*
- split: FreeForm_1806
path: data/FreeForm_1806-*
- split: FreeForm_1747
path: data/FreeForm_1747-*
- split: FreeForm_1868
path: data/FreeForm_1868-*
- split: FreeForm_1807
path: data/FreeForm_1807-*
- split: FreeForm_1683
path: data/FreeForm_1683-*
- split: FreeForm_1748
path: data/FreeForm_1748-*
- split: FreeForm_1623
path: data/FreeForm_1623-*
- split: FreeForm_1749
path: data/FreeForm_1749-*
- split: FreeForm_1870
path: data/FreeForm_1870-*
- split: FreeForm_1624
path: data/FreeForm_1624-*
- split: FreeForm_1809
path: data/FreeForm_1809-*
- split: FreeForm_1750
path: data/FreeForm_1750-*
- split: FreeForm_1931
path: data/FreeForm_1931-*
- split: FreeForm_1983
path: data/FreeForm_1983-*
- split: FreeForm_1625
path: data/FreeForm_1625-*
- split: FreeForm_1871
path: data/FreeForm_1871-*
- split: FreeForm_1810
path: data/FreeForm_1810-*
- split: FreeForm_1751
path: data/FreeForm_1751-*
- split: FreeForm_1932
path: data/FreeForm_1932-*
- split: FreeForm_1686
path: data/FreeForm_1686-*
- split: FreeForm_1811
path: data/FreeForm_1811-*
- split: FreeForm_1872
path: data/FreeForm_1872-*
- split: FreeForm_1687
path: data/FreeForm_1687-*
- split: FreeForm_1627
path: data/FreeForm_1627-*
- split: FreeForm_1812
path: data/FreeForm_1812-*
- split: FreeForm_1688
path: data/FreeForm_1688-*
- split: FreeForm_1628
path: data/FreeForm_1628-*
- split: FreeForm_1986
path: data/FreeForm_1986-*
- split: FreeForm_1813
path: data/FreeForm_1813-*
- split: FreeForm_1630
path: data/FreeForm_1630-*
- split: FreeForm_1690
path: data/FreeForm_1690-*
- split: FreeForm_1988
path: data/FreeForm_1988-*
- split: FreeForm_1876
path: data/FreeForm_1876-*
- split: FreeForm_1756
path: data/FreeForm_1756-*
- split: FreeForm_1691
path: data/FreeForm_1691-*
- split: FreeForm_1937
path: data/FreeForm_1937-*
- split: FreeForm_1631
path: data/FreeForm_1631-*
- split: FreeForm_1878
path: data/FreeForm_1878-*
- split: FreeForm_1817
path: data/FreeForm_1817-*
- split: FreeForm_1633
path: data/FreeForm_1633-*
- split: FreeForm_1991
path: data/FreeForm_1991-*
- split: FreeForm_1694
path: data/FreeForm_1694-*
- split: FreeForm_1634
path: data/FreeForm_1634-*
- split: FreeForm_1940
path: data/FreeForm_1940-*
- split: FreeForm_1992
path: data/FreeForm_1992-*
- split: FreeForm_1695
path: data/FreeForm_1695-*
- split: FreeForm_1635
path: data/FreeForm_1635-*
- split: FreeForm_1880
path: data/FreeForm_1880-*
- split: FreeForm_1760
path: data/FreeForm_1760-*
- split: FreeForm_1696
path: data/FreeForm_1696-*
- split: FreeForm_1820
path: data/FreeForm_1820-*
- split: FreeForm_1636
path: data/FreeForm_1636-*
- split: FreeForm_1881
path: data/FreeForm_1881-*
- split: FreeForm_1761
path: data/FreeForm_1761-*
- split: FreeForm_1942
path: data/FreeForm_1942-*
- split: FreeForm_1697
path: data/FreeForm_1697-*
- split: FreeForm_1637
path: data/FreeForm_1637-*
- split: FreeForm_1882
path: data/FreeForm_1882-*
- split: FreeForm_1943
path: data/FreeForm_1943-*
- split: FreeForm_1762
path: data/FreeForm_1762-*
- split: FreeForm_1995
path: data/FreeForm_1995-*
- split: FreeForm_1883
path: data/FreeForm_1883-*
- split: FreeForm_1698
path: data/FreeForm_1698-*
- split: FreeForm_1822
path: data/FreeForm_1822-*
- split: FreeForm_1944
path: data/FreeForm_1944-*
- split: FreeForm_1884
path: data/FreeForm_1884-*
- split: FreeForm_1823
path: data/FreeForm_1823-*
- split: FreeForm_1945
path: data/FreeForm_1945-*
- split: FreeForm_1885
path: data/FreeForm_1885-*
- split: FreeForm_1700
path: data/FreeForm_1700-*
- split: FreeForm_1946
path: data/FreeForm_1946-*
- split: FreeForm_1886
path: data/FreeForm_1886-*
- split: FreeForm_1825
path: data/FreeForm_1825-*
- split: FreeForm_1947
path: data/FreeForm_1947-*
- split: FreeForm_1887
path: data/FreeForm_1887-*
- split: FreeForm_1826
path: data/FreeForm_1826-*
- split: FreeForm_1948
path: data/FreeForm_1948-*
- split: FreeForm_1888
path: data/FreeForm_1888-*
- split: FreeForm_1999
path: data/FreeForm_1999-*
- split: FreeForm_1949
path: data/FreeForm_1949-*
- split: FreeForm_1889
path: data/FreeForm_1889-*
- split: FreeForm_1950
path: data/FreeForm_1950-*
- split: FreeForm_1951
path: data/FreeForm_1951-*
- split: FreeForm_1952
path: data/FreeForm_1952-*
- split: FreeForm_538
path: data/FreeForm_538-*
- split: FreeForm_965
path: data/FreeForm_965-*
- split: FreeForm_539
path: data/FreeForm_539-*
- split: FreeForm_903
path: data/FreeForm_903-*
- split: FreeForm_540
path: data/FreeForm_540-*
- split: FreeForm_917
path: data/FreeForm_917-*
- split: FreeForm_541
path: data/FreeForm_541-*
- split: FreeForm_604
path: data/FreeForm_604-*
- split: FreeForm_818
path: data/FreeForm_818-*
- split: FreeForm_728
path: data/FreeForm_728-*
- split: FreeForm_606
path: data/FreeForm_606-*
- split: FreeForm_997
path: data/FreeForm_997-*
- split: FreeForm_562
path: data/FreeForm_562-*
- split: FreeForm_623
path: data/FreeForm_623-*
- split: FreeForm_1021
path: data/FreeForm_1021-*
- split: FreeForm_731
path: data/FreeForm_731-*
- split: FreeForm_940
path: data/FreeForm_940-*
- split: FreeForm_732
path: data/FreeForm_732-*
- split: FreeForm_878
path: data/FreeForm_878-*
- split: FreeForm_1067
path: data/FreeForm_1067-*
- split: FreeForm_669
path: data/FreeForm_669-*
- split: FreeForm_879
path: data/FreeForm_879-*
- split: FreeForm_1162
path: data/FreeForm_1162-*
- split: FreeForm_1099
path: data/FreeForm_1099-*
- split: FreeForm_670
path: data/FreeForm_670-*
- split: FreeForm_1172
path: data/FreeForm_1172-*
- split: FreeForm_1222
path: data/FreeForm_1222-*
- split: FreeForm_686
path: data/FreeForm_686-*
- split: FreeForm_1337
path: data/FreeForm_1337-*
- split: FreeForm_688
path: data/FreeForm_688-*
- split: FreeForm_1115
path: data/FreeForm_1115-*
- split: FreeForm_1265
path: data/FreeForm_1265-*
- split: FreeForm_1117
path: data/FreeForm_1117-*
- split: FreeForm_1418
path: data/FreeForm_1418-*
- split: FreeForm_1513
path: data/FreeForm_1513-*
- split: FreeForm_1360
path: data/FreeForm_1360-*
- split: FreeForm_1422
path: data/FreeForm_1422-*
- split: FreeForm_1514
path: data/FreeForm_1514-*
- split: FreeForm_1290
path: data/FreeForm_1290-*
- split: FreeForm_1487
path: data/FreeForm_1487-*
- split: FreeForm_1527
path: data/FreeForm_1527-*
- split: FreeForm_1299
path: data/FreeForm_1299-*
- split: FreeForm_1488
path: data/FreeForm_1488-*
- split: FreeForm_1529
path: data/FreeForm_1529-*
- split: FreeForm_1302
path: data/FreeForm_1302-*
- split: FreeForm_1371
path: data/FreeForm_1371-*
- split: FreeForm_1439
path: data/FreeForm_1439-*
- split: FreeForm_1638
path: data/FreeForm_1638-*
- split: FreeForm_1305
path: data/FreeForm_1305-*
- split: FreeForm_1644
path: data/FreeForm_1644-*
- split: FreeForm_1308
path: data/FreeForm_1308-*
- split: FreeForm_1497
path: data/FreeForm_1497-*
- split: FreeForm_1706
path: data/FreeForm_1706-*
- split: FreeForm_1830
path: data/FreeForm_1830-*
- split: FreeForm_1650
path: data/FreeForm_1650-*
- split: FreeForm_1537
path: data/FreeForm_1537-*
- split: FreeForm_1832
path: data/FreeForm_1832-*
- split: FreeForm_1776
path: data/FreeForm_1776-*
- split: FreeForm_1322
path: data/FreeForm_1322-*
- split: FreeForm_1833
path: data/FreeForm_1833-*
- split: FreeForm_1713
path: data/FreeForm_1713-*
- split: FreeForm_1553
path: data/FreeForm_1553-*
- split: FreeForm_1596
path: data/FreeForm_1596-*
- split: FreeForm_1663
path: data/FreeForm_1663-*
- split: FreeForm_1556
path: data/FreeForm_1556-*
- split: FreeForm_1783
path: data/FreeForm_1783-*
- split: FreeForm_1912
path: data/FreeForm_1912-*
- split: FreeForm_1559
path: data/FreeForm_1559-*
- split: FreeForm_1785
path: data/FreeForm_1785-*
- split: FreeForm_1666
path: data/FreeForm_1666-*
- split: FreeForm_1729
path: data/FreeForm_1729-*
- split: FreeForm_1788
path: data/FreeForm_1788-*
- split: FreeForm_1668
path: data/FreeForm_1668-*
- split: FreeForm_1918
path: data/FreeForm_1918-*
- split: FreeForm_1563
path: data/FreeForm_1563-*
- split: FreeForm_1675
path: data/FreeForm_1675-*
- split: FreeForm_1962
path: data/FreeForm_1962-*
- split: FreeForm_1792
path: data/FreeForm_1792-*
- split: FreeForm_1615
path: data/FreeForm_1615-*
- split: FreeForm_1846
path: data/FreeForm_1846-*
- split: FreeForm_1616
path: data/FreeForm_1616-*
- split: FreeForm_1850
path: data/FreeForm_1850-*
- split: FreeForm_1964
path: data/FreeForm_1964-*
- split: FreeForm_1801
path: data/FreeForm_1801-*
- split: FreeForm_1851
path: data/FreeForm_1851-*
- split: FreeForm_1965
path: data/FreeForm_1965-*
- split: FreeForm_1626
path: data/FreeForm_1626-*
- split: FreeForm_1853
path: data/FreeForm_1853-*
- split: FreeForm_1967
path: data/FreeForm_1967-*
- split: FreeForm_1692
path: data/FreeForm_1692-*
- split: FreeForm_1854
path: data/FreeForm_1854-*
- split: FreeForm_1975
path: data/FreeForm_1975-*
- split: FreeForm_1699
path: data/FreeForm_1699-*
- split: FreeForm_1755
path: data/FreeForm_1755-*
- split: FreeForm_1757
path: data/FreeForm_1757-*
- split: FreeForm_1763
path: data/FreeForm_1763-*
- split: FreeForm_1814
path: data/FreeForm_1814-*
- split: FreeForm_1816
path: data/FreeForm_1816-*
- split: FreeForm_1821
path: data/FreeForm_1821-*
- split: FreeForm_1856
path: data/FreeForm_1856-*
- split: FreeForm_1862
path: data/FreeForm_1862-*
- split: FreeForm_1873
path: data/FreeForm_1873-*
- split: FreeForm_1875
path: data/FreeForm_1875-*
- split: FreeForm_1877
path: data/FreeForm_1877-*
- split: FreeForm_1935
path: data/FreeForm_1935-*
- split: FreeForm_1936
path: data/FreeForm_1936-*
- split: FreeForm_1938
path: data/FreeForm_1938-*
- split: FreeForm_1939
path: data/FreeForm_1939-*
- split: FreeForm_1941
path: data/FreeForm_1941-*
- split: FreeForm_1977
path: data/FreeForm_1977-*
- split: FreeForm_1981
path: data/FreeForm_1981-*
- split: FreeForm_1984
path: data/FreeForm_1984-*
- split: FreeForm_1985
path: data/FreeForm_1985-*
- split: FreeForm_1987
path: data/FreeForm_1987-*
- split: FreeForm_1989
path: data/FreeForm_1989-*
- split: FreeForm_1990
path: data/FreeForm_1990-*
- split: FreeForm_1993
path: data/FreeForm_1993-*
- split: FreeForm_1996
path: data/FreeForm_1996-*
- split: FreeForm_2000
path: data/FreeForm_2000-*
tags:
- art
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
## Bibtex citation
```bibtex
@misc{zhao2024ultraeditinstructionbasedfinegrainedimage,
title={UltraEdit: Instruction-based Fine-Grained Image Editing at Scale},
author={Haozhe Zhao and Xiaojian Ma and Liang Chen and Shuzheng Si and Rujie Wu and Kaikai An and Peiyu Yu and Minjia Zhang and Qing Li and Baobao Chang},
year={2024},
eprint={2407.05282},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2407.05282},
}
``` |
amithm3/shrutilipi | amithm3 | 2024-08-31T05:27:21Z | 15,355 | 3 | [
"task_categories:automatic-speech-recognition",
"language:kn",
"language:sa",
"language:bn",
"language:pa",
"language:ml",
"language:gu",
"language:ta",
"language:te",
"language:hi",
"language:mr",
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"audio",
"transcription",
"AI4Bharat",
"shrutilipi"
] | [
"automatic-speech-recognition"
] | 2024-04-08T17:53:27Z | null | ---
language:
- kn
- sa
- bn
- pa
- ml
- gu
- ta
- te
- hi
- mr
license: apache-2.0
size_categories:
- 1M<n<10M
task_categories:
- automatic-speech-recognition
pretty_name: AI4Bharat Shrutilipi ASR Dataset
dataset_info:
- config_name: bn
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 59658532357.726
num_examples: 302349
- name: validation
num_bytes: 6723169844.11
num_examples: 37602
- name: test
num_bytes: 7660623563.6
num_examples: 38740
download_size: 74278694994
dataset_size: 74042325765.436
- config_name: gu
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 55793674372.628
num_examples: 329931
- name: validation
num_bytes: 6293796356.189
num_examples: 40773
- name: test
num_bytes: 7165218289.408
num_examples: 40853
download_size: 78346523702
dataset_size: 69252689018.225
- config_name: hi
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 213699256456.296
num_examples: 877604
- name: validation
num_bytes: 27583551082.248
num_examples: 110692
- name: test
num_bytes: 25110580660.236
num_examples: 108492
download_size: 269912939092
dataset_size: 266393388198.78
- config_name: kn
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 54770494386.876
num_examples: 278766
- name: validation
num_bytes: 7864058142.98
num_examples: 34726
- name: test
num_bytes: 7572538417.28
num_examples: 35166
download_size: 74257809304
dataset_size: 70207090947.136
- config_name: ml
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 71262913087.942
num_examples: 467414
- name: validation
num_bytes: 7751159979.48
num_examples: 58985
- name: test
num_bytes: 8930337765.4
num_examples: 59230
download_size: 99439381074
dataset_size: 87944410832.82199
- config_name: mr
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 125894833883.753
num_examples: 505639
- name: validation
num_bytes: 14280421505.308
num_examples: 63407
- name: test
num_bytes: 15230198579.815
num_examples: 63397
download_size: 147608513634
dataset_size: 155405453968.876
- config_name: pa
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 11549437955.164
num_examples: 41874
- name: validation
num_bytes: 1317876276.359
num_examples: 5311
- name: test
num_bytes: 1453641173.132
num_examples: 5139
download_size: 13966090670
dataset_size: 14320955404.654999
- config_name: sa
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 6219394271.104
num_examples: 11532
- name: validation
num_bytes: 718650848.144
num_examples: 1408
- name: test
num_bytes: 752806235.026
num_examples: 1474
download_size: 7321556791
dataset_size: 7690851354.274
- config_name: ta
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 101739123587.681
num_examples: 429417
- name: validation
num_bytes: 12903430948.456
num_examples: 54012
- name: test
num_bytes: 12724306851.984
num_examples: 53827
download_size: 126128595816
dataset_size: 127366861388.12099
- config_name: te
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 33158344172.292
num_examples: 155322
- name: validation
num_bytes: 4085414503.579
num_examples: 19501
- name: test
num_bytes: 4173443926.076
num_examples: 19189
download_size: 43278403108
dataset_size: 41417202601.94701
configs:
- config_name: bn
data_files:
- split: train
path: data/bn/train-*
- split: validation
path: data/bn/validation-*
- split: test
path: data/bn/test-*
- config_name: gu
data_files:
- split: train
path: data/gu/train-*
- split: validation
path: data/gu/validation-*
- split: test
path: data/gu/test-*
- config_name: hi
data_files:
- split: train
path: data/hi/train-*
- split: validation
path: data/hi/validation-*
- split: test
path: data/hi/test-*
- config_name: kn
data_files:
- split: train
path: data/kn/train-*
- split: validation
path: data/kn/validation-*
- split: test
path: data/kn/test-*
- config_name: ml
data_files:
- split: train
path: data/ml/train-*
- split: validation
path: data/ml/validation-*
- split: test
path: data/ml/test-*
- config_name: mr
data_files:
- split: train
path: data/mr/train-*
- split: validation
path: data/mr/validation-*
- split: test
path: data/mr/test-*
- config_name: pa
data_files:
- split: train
path: data/pa/train-*
- split: validation
path: data/pa/validation-*
- split: test
path: data/pa/test-*
- config_name: sa
data_files:
- split: train
path: data/sa/train-*
- split: validation
path: data/sa/validation-*
- split: test
path: data/sa/test-*
- config_name: ta
data_files:
- split: train
path: data/ta/train-*
- split: validation
path: data/ta/validation-*
- split: test
path: data/ta/test-*
- config_name: te
data_files:
- split: train
path: data/te/train-*
- split: validation
path: data/te/validation-*
- split: test
path: data/te/test-*
tags:
- audio
- transcription
- AI4Bharat
- shrutilipi
---
|
NousResearch/hermes-function-calling-v1 | NousResearch | 2024-08-30T06:07:08Z | 2,284 | 290 | [
"task_categories:text-generation",
"task_categories:question-answering",
"task_categories:feature-extraction",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation",
"question-answering",
"feature-extraction"
] | 2024-08-14T01:22:36Z | null |
---
license: apache-2.0
task_categories:
- text-generation
- question-answering
- feature-extraction
language:
- en
configs:
- config_name: func_calling_singleturn
data_files: "func-calling-singleturn.json"
default: true
- config_name: func_calling
data_files: "func-calling.json"
- config_name: glaive_func_calling
data_files: "glaive-function-calling-5k.json"
- config_name: json_mode_agentic
data_files: "json-mode-agentic.json"
- config_name: json_mode_singleturn
data_files: "json-mode-singleturn.json"
---

# Hermes Function-Calling V1
This dataset is a compilation of the structured output and function-calling data used in the Hermes 2 Pro series of models.
This repository contains a structured output dataset with function-calling conversations, json-mode, agentic json-mode and structured extraction samples, designed to train LLMs to perform function calls and return structured output based on natural language instructions. The dataset features various conversational scenarios where AI agents are required to interpret queries and execute appropriate single or multiple function calls.
The synthetic data generation was led by @interstellarninja in collaboration with @NousResearch, @teknium, @THEODOROS and many others who provided guidance.
## Hermes Function Calling Standard
Hermes Function-calling Standard enables creation of LLM agents that are capable of executing API calls directly from user instructions. For instance, when asked to "find a flight from New York to Los Angeles for next Friday," a function-calling agent can interpret the request, generate the necessary function call (e.g., `search_flights`), and return the results. These agents significantly enhance the utility of AI by enabling direct interactions with APIs, making them invaluable in digital assistants across various domains.
For a complete usage guide for models trained on this data, see our github repo: https://github.com/NousResearch/Hermes-Function-Calling
## Repository Structure
There are 5 datasets that comprise the datamix:
**func-calling-singleturn.json** - Single turn function calls
**func-calling.json** - Multi-turn conversation function calls
**glaive-function-calling-5k.json** - Updated and cleaned Glaive Function Calling 5k dataset by Glaive AI
**json-mode-agentic.json** - Advanced JSON structured output samples
**json-mode-singleturn.json** - Single turn JSON structured output samples
The dataset has the following configs, used to load each file:
- ```func_calling_singleturn```
- ```func_calling```
- ```glaive_func_calling```
- ```json_mode_agentic```
- ```json_mode_singleturn```
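For reference, here is a minimal loading sketch using the Hugging Face `datasets` library. The config names are the ones listed above; the `"train"` split name is an assumption for these data_files-based configs.
```python
# Minimal sketch: load one config of this dataset (assumes a default "train" split).
from datasets import load_dataset

ds = load_dataset("NousResearch/hermes-function-calling-v1", "func_calling_singleturn")
sample = ds["train"][0]
print(sample["id"])
print(sample["conversations"][0]["from"])  # role of the first turn, e.g. "system"
```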
### Inference Example Output
Here's an example of the inference output:
```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
<|im_start|>assistant
<tool_call>
{'arguments': {'symbol': 'TSLA'}, 'name': 'get_stock_fundamentals'}
</tool_call><|im_end|>
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.
```
### Hermes-2 Pro Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for Function Calling.
You should use the system role with this message, followed by a function-signature JSON, as the example below shows.
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> [{'type': 'function', 'function': {'name': 'get_stock_fundamentals', 'description': 'Get fundamental data for a given stock symbol using yfinance API.', 'parameters': {'type': 'object', 'properties': {'symbol': {'type': 'string'}}, 'required': ['symbol']}}}] </tools> Use the following pydantic model json schema for each tool call you will make: {'title': 'FunctionCall', 'type': 'object', 'properties': {'arguments': {'title': 'Arguments', 'type': 'object'}, 'name': {'title': 'Name', 'type': 'string'}}, 'required': ['arguments', 'name']} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{'arguments': <args-dict>, 'name': <function-name>}
</tool_call><|im_end|>
```
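If you assemble this system prompt programmatically, a rough sketch might look like the following. The tool signature is the one from the example above; the prompt assembly itself is illustrative glue code, not the official helper from the repo.
```python
# Illustrative assembly of the function-calling system prompt shown above.
# Only the <tools> block is built here; the remainder of the documented prompt
# (the pydantic schema and the <tool_call> format instructions) is elided.
import json

tools = [{
    "type": "function",
    "function": {
        "name": "get_stock_fundamentals",
        "description": "Get fundamental data for a given stock symbol using yfinance API.",
        "parameters": {
            "type": "object",
            "properties": {"symbol": {"type": "string"}},
            "required": ["symbol"],
        },
    },
}]

system_prompt = (
    "You are a function calling AI model. You are provided with function signatures "
    "within <tools></tools> XML tags. You may call one or more functions to assist with "
    "the user query. Don't make assumptions about what values to plug into functions. "
    f"Here are the available tools: <tools> {json.dumps(tools)} </tools> ..."
)
```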
To complete the function call, create a user prompt that follows the above system prompt, like so:
```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse and plug into a function:
```
<|im_start|>assistant
<tool_call>
{'arguments': {'symbol': 'TSLA'}, 'name': 'get_stock_fundamentals'}
</tool_call><|im_end|>
```
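A rough parsing sketch (not the official codebase) for pulling that block out of the generation is shown below; note that the example uses Python-style single quotes, so `ast.literal_eval` is used here rather than `json.loads`.
```python
# Hypothetical parser for the <tool_call> block shown above.
import ast
import re

generation = (
    "<tool_call>\n"
    "{'arguments': {'symbol': 'TSLA'}, 'name': 'get_stock_fundamentals'}\n"
    "</tool_call><|im_end|>"
)

match = re.search(r"<tool_call>\s*(.*?)\s*</tool_call>", generation, re.DOTALL)
if match:
    call = ast.literal_eval(match.group(1))      # single-quoted dict literal
    fn_name, fn_args = call["name"], call["arguments"]
    # result = MY_TOOL_REGISTRY[fn_name](**fn_args)  # dispatch to your own functions
```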
Once you parse the tool call, call the API to get the returned values, and pass them back in as a new message with the tool role, like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
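Purely as an illustration of that step, the function result can be wrapped into the tool turn roughly like this; the special tokens follow the ChatML-style format shown above, and in practice a chat template usually renders this string for you.
```python
# Sketch: serializing a function result into the "tool" turn shown above.
import json

def format_tool_response(fn_name: str, fn_result) -> str:
    payload = json.dumps({"name": fn_name, "content": fn_result})
    return (
        "<|im_start|>tool\n"
        "<tool_response>\n"
        f"{payload}\n"
        "</tool_response>\n"
        "<|im_end|>\n"
    )

tool_turn = format_tool_response("get_stock_fundamentals", {"symbol": "TSLA", "pe_ratio": 49.6})
```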
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
### Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, in which it responds with only a JSON object that conforms to a specific JSON schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script jsonmode.py available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```
Given the {schema} that you provide, the model will follow the format of that JSON schema to create its response; all you have to do is give a typical user prompt, and it will respond in JSON.
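One possible way to produce the `{schema}` placeholder (the repo's `jsonmode.py` is the canonical path) is from a pydantic model, as sketched below. The `Character` model is purely illustrative, and pydantic v2 is assumed.
```python
# Illustrative: building the JSON-mode system prompt from a pydantic model.
import json
from typing import List
from pydantic import BaseModel

class Character(BaseModel):        # made-up example schema
    name: str
    age: int
    skills: List[str]

schema = json.dumps(Character.model_json_schema())   # pydantic v2 API
system_prompt = (
    "You are a helpful assistant that answers in JSON. "
    "Here's the json schema you must adhere to:\n"
    f"<schema>\n{schema}\n</schema>"
)
```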
## Dataset Structure
The dataset follows a sharegpt structure. This means it is a list of dictionaries, with each dictionary containing a list of dicts called conversations. Each turn in a conversation is a dictionary with two fields: a "from" field, which denotes the role of that turn, and a "value" field, which contains the actual text.
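A short sketch of walking one record in this layout and mapping the sharegpt roles onto a chat template's roles is given below; the role mapping is a common convention rather than something defined by this card.
```python
# Sketch: converting a sharegpt-style record (as described above) into chat messages.
ROLE_MAP = {"system": "system", "human": "user", "gpt": "assistant", "tool": "tool"}

def to_chat_messages(record: dict) -> list:
    messages = []
    for turn in record["conversations"]:
        messages.append({"role": ROLE_MAP.get(turn["from"], turn["from"]),
                         "content": turn["value"]})
    return messages
```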
### Tool-use with API response:
```json
{
"id": "753d8365-0e54-43b1-9514-3f9b819fd31c",
"conversations": [
{
"from": "system",
"value": "You are a function calling AI model. You are provided with function signatures within <tools> </tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions.\n<tools>\n[{'type': 'function', 'function': {'name': 'search_flights', 'description': 'Searches for flights based on departure and destination cities, dates, class, and other preferences.', 'parameters': {'type': 'object', 'properties': {'departure_city': {'type': 'string', 'description': 'The city from which the flight will depart.'}, 'destination_city': {'type': 'string', 'description': 'The destination city for the flight.'}, 'departure_date': {'type': 'string', 'description': 'The departure date for the flight.', 'format': 'date'}, 'return_date': {'type': 'string', 'description': 'The return date for the flight.', 'format': 'date'}, 'class': {'type': 'string', 'description': 'The class of the flight ticket.', 'enum': ['economy', 'business', 'first']}, 'flexible_cancellation': {'type': 'boolean', 'description': 'Indicates if the search should filter for flights with flexible cancellation policies.'}}, 'required': ['departure_city', 'destination_city', 'departure_date', 'return_date', 'class']}}}]\n</tools>\nFor each function call return a json object with function name and arguments within <tool_call> </tool_call> tags with the following schema:\n<tool_call>\n{'arguments': <args-dict>, 'name': <function-name>}\n</tool_call>\n"
},
{
"from": "human",
"value": "I'm planning a kayaking trip and looking to book flights from Los Angeles to Auckland. My departure is scheduled for July 10th, 2023, and I intend to return on July 24th, 2023. I would prefer to travel in economy class and would also like the option to have flexible cancellation policies for the tickets due to the uncertain nature of outdoor activities. Could you please search for flights that meet these criteria and provide me with the available options?"
},
{
"from": "gpt",
"value": "<tool_call>\n{'arguments': {'departure_city': 'Los Angeles', 'destination_city': 'Auckland', 'departure_date': '2023-07-10', 'return_date': '2023-07-24', 'class': 'economy', 'flexible_cancellation': True}, 'name': 'search_flights'}\n</tool_call>\n"
},
{
"from": "tool",
"value": "<tool_response>\n{\"name\": \"search_flights\", \"content\": {\"flights\": [{\"flight_number\": \"AA118\", \"airline\": \"American Airlines\", \"departure_airport\": \"LAX\", \"departure_city\": \"Los Angeles\", \"departure_date\": \"2023-07-10T08:00:00Z\", \"departure_terminal\": \"4\", \"arrival_airport\": \"AKL\", \"arrival_city\": \"Auckland\", \"arrival_date\": \"2023-07-11T14:25:00Z\", \"arrival_terminal\": \"I\", \"duration\": \"17h 25m\", \"class\": \"Economy\", \"price\": \"$950\", \"flexible_cancellation\": true}, {\"flight_number\": \"UA796\", \"airline\": \"United Airlines\", \"departure_airport\": \"LAX\", \"departure_city\": \"Los Angeles\", \"departure_date\": \"2023-07-10T11:30:00Z\", \"departure_terminal\": \"7\", \"arrival_airport\": \"AKL\", \"arrival_city\": \"Auckland\", \"arrival_date\": \"2023-07-11T18:30:00Z\", \"arrival_terminal\": \"I\", \"duration\": \"16h 0m\", \"class\": \"Economy\", \"price\": \"$975\", \"flexible_cancellation\": true}, {\"flight_number\": \"NZ6\", \"airline\": \"Air New Zealand\", \"departure_airport\": \"LAX\", \"departure_city\": \"Los Angeles\", \"departure_date\": \"2023-07-10T21:00:00Z\", \"departure_terminal\": \"B\", \"arrival_airport\": \"AKL\", \"arrival_city\": \"Auckland\", \"arrival_date\": \"2023-07-12T05:20:00Z\", \"arrival_terminal\": \"I\", \"duration\": \"13h 20m\", \"class\": \"Economy\", \"price\": \"$1020\", \"flexible_cancellation\": true}], \"return_flights\": [{\"flight_number\": \"AA117\", \"airline\": \"American Airlines\", \"departure_airport\": \"AKL\", \"departure_city\": \"Auckland\", \"departure_date\": \"2023-07-24T14:10:00Z\", \"departure_terminal\": \"I\", \"arrival_airport\": \"LAX\", \"arrival_city\": \"Los Angeles\", \"arrival_date\": \"2023-07-24T06:00:00Z\", \"arrival_terminal\": \"4\", \"duration\": \"12h 50m\", \"class\": \"Economy\", \"price\": \"$950\", \"flexible_cancellation\": true}, {\"flight_number\": \"UA797\", \"airline\": \"United Airlines\", \"departure_airport\": \"AKL\", \"departure_city\": \"Auckland\", \"departure_date\": \"2023-07-24T19:15:00Z\", \"departure_terminal\": \"I\", \"arrival_airport\": \"LAX\", \"arrival_city\": \"Los Angeles\", \"arrival_date\": \"2023-07-24T10:15:00Z\", \"arrival_terminal\": \"7\", \"duration\": \"11h 0m\", \"class\": \"Economy\", \"price\": \"$975\", \"flexible_cancellation\": true}, {\"flight_number\": \"NZ5\", \"airline\": \"Air New Zealand\", \"departure_airport\": \"AKL\", \"departure_city\": \"Auckland\", \"departure_date\": \"2023-07-24T19:30:00Z\", \"departure_terminal\": \"I\", \"arrival_airport\": \"LAX\", \"arrival_city\": \"Los Angeles\", \"arrival_date\": \"2023-07-24T10:30:00Z\", \"arrival_terminal\": \"B\", \"duration\": \"12h 0m\", \"class\": \"Economy\", \"price\": \"$1020\", \"flexible_cancellation\": true}]}}\n</tool_response>\n"
},
{
"from": "gpt",
"value": "Here are the available flight options with flexible cancellation policies for your kayaking trip from Los Angeles to Auckland:\n\n**Outbound Flights (Los Angeles to Auckland):**\n1. **American Airlines AA118**\n - Departure: July 10, 2023, 8:00 AM from LAX Terminal 4\n - Arrival: July 11, 2023, 2:25 PM at AKL Terminal I\n - Duration: 17h 25m\n - Price: $950\n\n2. **United Airlines UA796**\n - Departure: July 10, 2023, 11:30 AM from LAX Terminal 7\n - Arrival: July 11, 2023, 6:30 PM at AKL Terminal I\n - Duration: 16h 0m\n - Price: $975\n\n3. **Air New Zealand NZ6**\n - Departure: July 10, 2023, 9:00 PM from LAX Terminal B\n - Arrival: July 12, 2023, 5:20 AM at AKL Terminal I\n - Duration: 13h 20m\n - Price: $1020\n\n**Return Flights (Auckland to Los Angeles):**\n1. **American Airlines AA117**\n - Departure: July 24, 2023, 2:10 PM from AKL Terminal I\n - Arrival: July 24, 2023, 6:00 AM at LAX Terminal 4\n - Duration: 12h 50m\n - Price: $950\n\n2. **United Airlines UA797**\n - Departure: July 24, 2023, 7:15 PM from AKL Terminal I\n - Arrival: July 24, 2023, 10:15 AM at LAX Terminal 7\n - Duration: 11h 0m\n - Price: $975\n\n3. **Air New Zealand NZ5**\n - Departure: July 24, 2023, 7:30 PM from AKL Terminal I\n - Arrival: July 24, 2023, 10:30 AM at LAX Terminal B\n - Duration: 12h 0m\n - Price: $1020\n\nAll the listed prices are in economy class with flexible cancellation policies. Please review the options and let me know if you would like to proceed with booking any of these flights or if you need further assistance."
}
],
"category": "E-commerce Platforms",
"subcategory": "Kayak",
"task": "Flight Search"
}
```
### Structured extraction with function-calling:
```json
{
"id": "afba1f0c-3212-467d-8594-d72a06da4d05",
"conversations": [
{
"from": "system",
"value": "You are an expert structured information extraction AI model. You will be provided with documents to extract information from. You are also provided with the json schema to output extracted information in the function signatures within XML tags <tools></tools>. Don't make assumptions about what values to plug into json schema. \n<tools>\n[{\"type\": \"function\", \"function\": {\"name\": \"ExpertQAExtractor\", \"description\": \"Extracts a list of questions that ask how a concept or information from the document can be applied to a real-world situation. These assess ability to apply knowledge.\", \"parameters\": {\"type\": \"object\", \"properties\": {\"application_questions\": {\"type\": \"array\", \"items\": {\"type\": \"string\"}}}, \"required\": [\"application_questions\"]}}}]\n</tools>\nFor each extraction function call return a json object with function name and arguments followed by a <tool_call> tag with the following schema:\n<tool_call>\n{'arguments': <args-dict>, 'name': <function-name>}\n</tool_call>"
},
{
"from": "human",
"value": "Can you help me extract queries from the following passage <passage> : A directed graph. \n weighted, by the way. If a pair of vertices in such a graph is attached \"both ways,\" then each of the two edges will have its own weight. \n Washington, DC \n Fredericksburg \n Richmond \n Virginia Beach \n 50 \n 60 100 \n 150 \n Figure 5.3: A weighted (and undirected) graph. \n**adjacent.** If two vertices have an edge between them, they are said to be adjacent. \n**connected.** The word **connected** has two meanings: it applies both to pairs of vertices and to entire graphs. We say that two vertices are connected if there is at least one path between them. Each vertex is therefore \"reachable\" from the other. In Figure 5.1, President and actor are connected, but Ford's Theatre and Civil War are not. \"Connected\" is also used to describe entire graphs, if _every_ node can be reached from all others. It's easy to see that Fig\n90 CHAPTER 5. STRUCTURES \n ure 5.3 is a connected graph, whereas Figure 5.1 is not (because Civil War and Gettysburg are isolated from the other nodes). It's not always trivial to determine whether a graph is connected, however: imagine a tangled morass of a million vertices, with ten million edges, and having to figure out whether or not every vertex is reachable from every other. (And if that seems unrealistically large, consider Facebook, which has over a billion nodes.) \n**degree.** A vertex's degree is simply the number of edges that connect to it. Virginia Beach has degree 2, and Fredericksburg \n3. In the case of a directed graph, we sometimes distinguish between the number of incoming arrows a vertex has (called its **in-degree** ) and the number of outgoing arrows (the **out- degree** ). Muhammad Ali had a higher out-degree (3) than in-degree (1) since he won most of the time. \n**cycle.** A cycle is a path that begins and ends at the same vertex.^2 In Figure 5.3, Richmond-to-Virginia Beach-to-Fredericksburgto-Richmond is a cycle. Any loop is a cycle all by itself. For directed graphs, the entire loop must comprise edges in the \"forward\" direction: no fair going backwards. In Figure 5.2, Frazier-to-Ali-to-Foreman-to-Frazier is a cycle, as is the simpler Ali-to-Frazier-to-Ali. \n**DAG (directed, acyclic graph).** One common use of graphs is to represent flows of dependencies, for instance the prerequisites that different college courses have for one another. Another example is project management workflows: the tasks needed to complete a project become vertices, and then the dependencies they have on one another become edges. The graph in Figure 5.4 shows the steps in making a batch of brownies, and how these steps depend on each other. The eggs have to be cracked before the ingredients can be mixed, \n(^2) We'll also say that a cycle can't repeat any edges or vertices along the way, so that it can't go back and forth repeatedly and pointlessly between two adjacent nodes. Some mathematicians call this a **simple cycle** to distinguish it from the more general **cycle** , but we'll just say that no cycles can repeat like this. \n5.1. GRAPHS 91 \n and the oven has to be preheated before baking, but the pan can be greased any old time, provided that it's done before pouring the brown goop into it. \n mix ingredients \n pour brown stuff in bowl \n crack two eggs measure 2 tbsp oil \n preheat oven \n bake for 30 mins \n grease pan \n pour into pan \n cool \n enjoy! \n Figure 5.4: A DAG. 
\n A graph of dependencies like this must be both directed and acyclic , or it wouldn't make sense. Directed, of course, means that task X can require task Y to be completed before it, without the reverse also being true. If they both depended on each other, we'd have an infinite loop, and no brownies could ever get baked! Acyclic means that no kind of cycle can exist in the graph, even one that goes through multiple vertices. Such a cycle would again result in an infinite loop, making the project hopeless. Imagine if there were an arrow from bake for 30 mins back to grease pan in Figure 5.4. Then, we'd have to grease the pan before pouring the goop into it, and we'd have to pour the goop before baking, but we'd also have to bake before greasing the pan! We'd be stuck right off the bat: there'd be no way to complete any of those tasks since they'd all indirectly depend on each other. A graph that is both directed and acyclic (and therefore free of these problems) is sometimes called a DAG for short. \n92 CHAPTER 5. STRUCTURES \n**Spatial positioning** \nOne important thing to understand about graphs is which aspects of a diagram are relevant. Specifically, _the spatial positioning of the vertices doesn't matter._ In Figure 5.2 we drew Muhammad Ali in the mid-upper left, and Sonny Liston in the extreme upper right. But this was an arbitrary choice, and irrelevant. More specifically, this isn't part of the information the diagram claims to represent. We could have positioned the vertices differently, as in Figure 5.5, and had _the same graph_. In both diagrams, there are the same vertices, and the same edges between them (check me). Therefore, these are mathematically the same graph. \nGeorge Foreman Sonny Liston (^) Muhammad Ali Joe Frazier Figure 5.5: A different look to **the same graph as Figure 5.2**. This might not seem surprising for the prize fighter graph, but for graphs like the MapQuest graph, which actually represent physical locations, it can seem jarring. In Figure 5.3 we could have drawn Richmond north of Fredericksburg, and Virginia Beach on the far west side of the diagram, and still had the same graph, provided that all the nodes and links were the same. Just remember that the spatial positioning is designed for human convenience, and isn't part of the mathematical information. It's similar to how there's no order to the elements of a set, even though when we specify a set extensionally, we have to list them in _some_ order to avoid writing all the element names on top of each other. On a graph diagram, we have to draw each vertex _somewhere_ , but where we put it is simply aesthetic. \n5.1. GRAPHS 93 \n**Relationship to sets** \nWe seem to have strayed far afield from sets with all this graph stuff. But actually, there are some important connections to be made to those original concepts. Recall the wizards set A from chapter 3 that we extended to contain { Harry, Ron, Hermione, Neville }. Now consider the following endorelation on A: \n (Harry, Ron) (Ron, Harry) (Ron, Hermione) (Ron, Neville) (Hermione, Hermione) (Neville, Harry) \nThis relation, and all it contains, is represented faithfully by the graph in Figure 5.6. The elements of A are the vertices of course, and each ordered pair of the relation is reflected in an edge of the graph. Can you see how _exactly_ the same information is represented by both forms? \n Hermione \n Ron Neville \n Harry \n Figure 5.6: A graph depicting a endorelation. \nFigure 5.6 is a directed graph, of course. 
What if it were an undirected graph? The answer is that the corresponding relation would be _symmetric_. An undirected graph implies that if there's an edge between two vertices, it goes \"both ways.\" This is really identical to saying a relation is symmetric: if an (x, y) is in the relation, then the corresponding (y, x) must also be. An example is Figure 5.7, which depicts the following symmetric relation: \n94 CHAPTER 5. STRUCTURES \n (Harry, Ron) (Ron, Harry) (Ron, Hermione) (Hermione, Ron) (Harry, Harry) (Neville, Neville) \n Harry Ron \n Hermione Neville \n Figure 5.7: A graph depicting a symmetric endorelation. \nNotice how the loops (edges from a node back to itself) in these diagrams represent ordered pairs in which both elements are the same. \nAnother connection between graphs and sets has to do with partitions. Figure 5.7 was not a connected graph: Neville couldn't be reached from any of the other nodes. Now consider: isn't a graph like this similar in some ways to a _partition_ of A -- namely, this one? \n { Harry, Ron, Hermione } and { Neville }. \nWe've simply partitioned the elements of A into the groups that are connected. If you remove the edge between Harry and Ron in that graph, you have: \n { Harry }, { Ron, Hermione }, and { Neville }. \nThen add one between Hermione and Neville, and now you have: \n5.1. GRAPHS 95 \n { Harry } and { Ron, Hermione, Neville }. \nIn other words, the \"connectedness\" of a graph can be represented precisely as a partition of the set of vertices. Each connected subset is in its own group, and every vertex is in one and only one group: therefore, these isolated groups are mutually exclusive and collectively exhaustive. Cool. \n**Graph traversal** \nIf you had a long list -- perhaps of phone numbers, names, or purchase orders -- and you needed to go through and do something to each element of the list -- dial all the numbers, scan the list for a certain name, add up all the orders -- it'd be pretty obvious how to do it. You just start at the top and work your way down. It might be tedious, but it's not confusing. \nIterating through the elements like this is called **traversing** the data structure. You want to make sure you encounter each element once (and only once) so you can do whatever needs to be done with it. It's clear how to traverse a list. But how to traverse a graph? There is no obvious \"first\" or \"last\" node, and each one is linked to potentially many others. And as we've seen, the vertices might not even _be_ fully connected, so a traversal path through all the nodes might not even exist. \nThere are two different ways of traversing a graph: breadth-first, and depth-first. They provide different ways of exploring the nodes, and as a side effect, each is able to discover whether the graph is connected or not. Let's look at each in turn. \n**Breadth-first traversal** \nWith **breadth-first traversal** , we begin at a starting vertex (it doesn't matter which one) and explore the graph cautiously and delicately. We probe equally deep in all directions, making sure we've looked a little ways down each possible path before exploring each of those paths a little further. \n96 CHAPTER 5. STRUCTURES \nTo do this, we use a very simple data structure called a **queue**. A queue is simply a list of nodes that are waiting in line. 
(In Britain, I'm told, instead of saying \"line up\" at the sandwich shop, they say \"queue up.\") When we enter a node into the queue at the tail end, we call it **enqueueing** the node, and when we remove one from the front, we call it **dequeueing** the node. The nodes in the middle patiently wait their turn to be dealt with, getting closer to the front every time the front node is dequeued. \nAn example of this data structure in action is shown in Figure 5.8. Note carefully that we always insert nodes at one end (on the right) and remove them from the _other_ end (the left). This means that the first item to be enqueued (in this case, the triangle) will be the first to be dequeued. \"Calls will be answered in the order they were received.\" This fact has given rise to another name for a queue: a \" **FIFO** ,\" which stands for \"first-in-first-out.\" \n Start with an empty queue: | Enqueue a triangle, and we have: |^ Enqueue a star, and we have: |^ Enqueue a heart, and we have: |^ Dequeue the triangle, and we have: | Enqueue a club, and we have: | Dequeue the star, and we have: | Dequeue the heart, and we have: | Dequeue the club. We're empty again: | \nFigure 5.8: A queue in action. The vertical bar marks the \"front of the line,\" and the elements are waiting to be dequeued in order from left to right. \nNow here's how we use a queue to traverse a graph breadth-first. We're going to start at a particular node, and put all of its adjacent nodes into a queue. This makes them all safely \"wait in line\" until we get around to exploring them. Then, we repeatedly take the first node in line, do whatever we need to do with it, and then put all of _its_ adjacent nodes in line. We keep doing this until the queue is empty. \n5.1. GRAPHS 97 \nNow it might have occurred to you that we can run into trouble if we encounter the same node multiple times while we're traversing. This can happen if the graph has a cycle: there will be more than one path to reach some nodes, and we could get stuck in an infinite loop if we're not careful. For this reason, we introduce the concept of **marking** nodes. This is kind of like leaving a trail of breadcrumbs: if we're ever about to explore a node, but find out it's marked, then we know we've already been there, and it's pointless to search it again. \nSo there </passage>"
},
{
"from": "gpt",
"value": "<tool_call>\\n{\"arguments\": {\"queries\": ['How can the concept of graph traversal be applied to real-world situations, such as project management workflows or social networks?', 'Can you think of any other real-world scenarios where understanding and effectively using spatial positioning in a graph would be important?', 'In what ways can the concept of marking nodes be useful in problem-solving or decision-making processes?'], \"name\": \"ExpertQAExtractor\"}}\\n</tool_call>"
}
],
"category": "Information Extraction",
"subcategory": "Json Schema",
"task": "Structured json schema extaction with function calling"
}
```
### JSON-mode with pydantic JSON schema
```json
{
"id": "57aa5f1b-6af3-42b8-9598-d48540277354",
"conversations": [
{
"from": "system",
"value": "You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{'title': 'MovieDatabaseEntry', 'type': 'object', 'properties': {'movieId': {'title': 'Movie ID', 'type': 'string'}, 'title': {'title': 'Title', 'type': 'string'}, 'genre': {'title': 'Genre', 'type': 'string'}, 'director': {'title': 'Director', 'type': 'string'}, 'cast': {'title': 'Cast', 'type': 'array', 'items': {'type': 'string'}}}, 'required': ['movieId', 'title', 'genre', 'director', 'cast']}\n</schema>\n"
},
{
"from": "human",
"value": "I'm organizing a movie database and need to create a structured JSON object for each film entry. The next movie I want to add is 'Interstellar', which has a unique movie ID 'INTSTL_2014'. The genre is 'Science Fiction', directed by Christopher Nolan. The main cast includes Matthew McConaughey as 'Cooper', Anne Hathaway as 'Brand', Jessica Chastain as 'Murph', and Michael Caine as 'Professor Brand'."
},
{
"from": "gpt",
"value": "{\"movieId\": \"INTSTL_2014\", \"title\": \"Interstellar\", \"genre\": \"Science Fiction\", \"director\": \"Christopher Nolan\", \"cast\": [\"Matthew McConaughey as Cooper\", \"Anne Hathaway as Brand\", \"Jessica Chastain as Murph\", \"Michael Caine as Professor Brand\"]}"
}
],
"category": "Media & Entertainment",
"subcategory": "Media",
"schema": "{\"title\": \"MovieDatabaseEntry\", \"type\": \"object\", \"properties\": {\"movieId\": {\"title\": \"Movie ID\", \"type\": \"string\"}, \"title\": {\"title\": \"Title\", \"type\": \"string\"}, \"genre\": {\"title\": \"Genre\", \"type\": \"string\"}, \"director\": {\"title\": \"Director\", \"type\": \"string\"}, \"cast\": {\"title\": \"Cast\", \"type\": \"array\", \"items\": {\"type\": \"string\"}}}, \"required\": [\"movieId\", \"title\", \"genre\", \"director\", \"cast\"]}",
"__index_level_0__": 671
}
```
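The schema in the system prompt maps naturally onto a pydantic model, so adherence of the assistant's reply can be checked programmatically. The snippet below is only a hedged sketch and is not part of the dataset: the model class mirrors the schema shown above, and the shortened `reply` string is an illustrative stand-in rather than a verbatim record.

```python
# Hedged sketch: validate a JSON-mode reply against the advertised schema.
import json
from typing import List

from pydantic import BaseModel, ValidationError


class MovieDatabaseEntry(BaseModel):
    movieId: str
    title: str
    genre: str
    director: str
    cast: List[str]


# Shortened, illustrative stand-in for the assistant message shown above.
reply = (
    '{"movieId": "INTSTL_2014", "title": "Interstellar", "genre": "Science Fiction", '
    '"director": "Christopher Nolan", "cast": ["Matthew McConaughey as Cooper"]}'
)

try:
    entry = MovieDatabaseEntry(**json.loads(reply))
    print(entry.title)  # Interstellar
except ValidationError as err:
    print(f"Reply does not satisfy the schema: {err}")
```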
### JSON-mode with Agentic Schemas:
```json
{
"id": "c0618e9a-d832-4e2f-b9fe-28c8d8ca8007",
"conversations": [
{
"from": "system",
"value": "You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{'type': 'object', 'properties': {'agent_execution': {'type': 'object', 'properties': {'task_decomposition': {'type': 'string'}, 'action_retrieval': {'type': 'string'}, 'code_execution': {'type': 'string'}, 'error_analysis': {'type': 'string'}, 'code_amendment': {'type': 'string'}, 'execution_critique': {'type': 'string'}, 'task_replanning': {'type': 'string'}}, 'required': ['task_decomposition', 'action_retrieval', 'code_execution', 'error_analysis', 'code_amendment', 'execution_critique', 'task_replanning']}}}\n</schema>\n"
},
{
"from": "human",
"value": "As a software engineer, I'm currently working on integrating an agent execution framework into our system to automate complex tasks. I need to generate a JSON object that represents the process of executing a task where the agent decomposes the task into subtasks, retrieves the necessary actions from an action library, and executes code to fulfill the task requirements. The task at hand is to automate the process of data migration from one database to another, ensuring data integrity and handling any errors that may arise. The agent should also provide a critique of the task execution and replan the task if necessary. Specifically, the agent decomposes the data migration task into subtasks such as data extraction, data transformation, and data loading. It then retrieves actions like 'extract_data', 'transform_data', and 'load_data' from the action library. Following this, the agent executes the retrieved code snippets for each subtask, ensuring data integrity throughout the migration process. During this process, the agent analyzes any errors encountered during execution, identifying issues such as data format mismatches or connectivity problems. Based on the error analysis, the agent amends the code to handle exceptions and retry the subtask. After the execution, the agent provides a critique of the execution, highlighting successful completions and areas that required amendments. If a subtask cannot be completed successfully after amendments, the agent replans the task by decomposing it into alternative subtasks, thus ensuring a robust and adaptive approach to task automation."
},
{
"from": "gpt",
"value": "{\"agent_execution\": {\"task_decomposition\": \"The agent decomposes the data migration task into subtasks such as data extraction, data transformation, and data loading.\", \"action_retrieval\": \"The agent retrieves actions like 'extract_data', 'transform_data', and 'load_data' from the action library.\", \"code_execution\": \"The agent executes the retrieved code snippets for each subtask, ensuring data integrity throughout the migration process.\", \"error_analysis\": \"The agent analyzes any errors encountered during execution, identifying issues such as data format mismatches or connectivity problems.\", \"code_amendment\": \"The agent amends the code based on the error analysis to handle exceptions and retry the subtask.\", \"execution_critique\": \"The agent provides a critique of the execution, highlighting successful completions and areas that required amendments.\", \"task_replanning\": \"If a subtask cannot be completed successfully after amendments, the agent replans the task by decomposing it into alternative subtasks.\"}}"
}
],
"category": "Copilot Frameworks",
"subcategory": "Agent Execution",
"schema": "{\"type\": \"object\", \"properties\": {\"agent_execution\": {\"type\": \"object\", \"properties\": {\"task_decomposition\": {\"type\": \"string\"}, \"action_retrieval\": {\"type\": \"string\"}, \"code_execution\": {\"type\": \"string\"}, \"error_analysis\": {\"type\": \"string\"}, \"code_amendment\": {\"type\": \"string\"}, \"execution_critique\": {\"type\": \"string\"}, \"task_replanning\": {\"type\": \"string\"}}, \"required\": [\"task_decomposition\", \"action_retrieval\", \"code_execution\", \"error_analysis\", \"code_amendment\", \"execution_critique\", \"task_replanning\"]}}}"
}
```
# How to cite:
```bibtex
@misc{Hermes-Function-Calling-Dataset-V1,
url={https://huggingface.co/NousResearch/hermes-function-calling-v1},
title={Hermes-Function-Calling-Dataset-V1},
author={"interstellarninja", "Teknium"}
}
``` |
Major-TOM/Core-S1RTC | Major-TOM | 2024-08-29T16:17:53Z | 27,095 | 12 | [
"license:cc-by-sa-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:tabular",
"modality:text",
"modality:geospatial",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2402.12095",
"region:us",
"earth-observation",
"remote-sensing",
"sentinel-1",
"sar",
"synthethic-aperture-radar",
"satellite",
"geospatial"
] | [] | 2024-03-15T14:13:22Z | null | ---
license: cc-by-sa-4.0
tags:
- earth-observation
- remote-sensing
- sentinel-1
- sar
- synthethic-aperture-radar
- satellite
- geospatial
size_categories:
- 1M<n<10M
dataset_info:
- config_name: default
features:
- name: product_id
dtype: string
- name: grid_cell
dtype: string
- name: product_datetime
dtype: string
- name: thumbnail
dtype: image
- name: vv
dtype: binary
- name: vh
dtype: binary
configs:
- config_name: default
data_files: images/*.parquet
- config_name: metadata
data_files: metadata.parquet
---
# Core-S1RTC
Contains a global coverage of Sentinel-1 (RTC) patches, each of size 1,068 x 1,068 pixels.
| Source | Sensing Type | Number of Patches | Patch Size | Total Pixels |
|--------|--------------|-------------------|------------|--------------|
|Sentinel-1 RTC | Synthetic Aperture Radar |1,469,955|1,068 x 1,068 (10 m) | > 1.676 Trillion |
## Content
| Column | Details | Resolution |
|--------|---------|------------|
| VV | Received Linear Power in the VV Polarization | 10m |
| VH | Received Linear Power in the VH Polarization | 10m |
| thumbnail | Rescaled false colour<sup>1</sup> saved as png | 10m |
<sup>1</sup> False colour composites are made from decibel-scale values, with red, green and blue defined as ```R:VV G:VV+VH B:VH```. For each channel, a contrast stretch is applied, mapping the minimum-maximum range to 0-255. This means bluer areas have relatively higher VH values, whilst brightness is a function of overall intensity. Because of this per-thumbnail normalisation, values are only relative within each thumbnail and cannot be compared across different samples.
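As a rough illustration of that recipe, the sketch below builds such a composite from two already-decoded bands. It is an assumption-laden example (the `vv`/`vh` inputs are taken to be 2-D NumPy arrays of linear power), not code from the Major-TOM tooling.

```python
# Illustrative sketch of the false-colour recipe described above.
# Assumes `vv` and `vh` are 2-D NumPy arrays of linear power.
import numpy as np

def _stretch(band: np.ndarray) -> np.ndarray:
    # per-channel min-max contrast stretch to the 0-255 range
    band = band.astype(np.float32)
    return ((band - band.min()) / (band.max() - band.min() + 1e-12) * 255).astype(np.uint8)

def false_colour(vv: np.ndarray, vh: np.ndarray) -> np.ndarray:
    eps = 1e-12
    vv_db = 10 * np.log10(vv + eps)  # linear power to decibels
    vh_db = 10 * np.log10(vh + eps)
    # R: VV, G: VV + VH, B: VH, each channel stretched independently
    return np.dstack([_stretch(vv_db), _stretch(vv_db + vh_db), _stretch(vh_db)])
```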
## Spatial Coverage
This is a global monotemporal dataset. Nearly every piece of Earth captured by Sentinel-1 is contained at least once in this dataset (and only once, excluding some marginal overlaps). The coverage is about 35% lower than for the Core Sentinel-2 dataset due to sensor coverage limitations.
The following figure demonstrates the spatial coverage (only black pixels are absent):

## Example Use
Interface scripts are available at https://github.com/ESA-PhiLab/Major-TOM
Here's a sneak peek with a thumbnail image:
```python
from fsspec.parquet import open_parquet_file
import pyarrow.parquet as pq
from io import BytesIO
from PIL import Image
PARQUET_FILE = 'part_03900' # parquet number
ROW_INDEX = 42 # row number (about 500 per parquet)
url = "https://huggingface.co/datasets/Major-TOM/Core-S1RTC/resolve/main/images/{}.parquet".format(PARQUET_FILE)
with open_parquet_file(url, columns=["thumbnail"]) as f:
    with pq.ParquetFile(f) as pf:
        first_row_group = pf.read_row_group(ROW_INDEX, columns=['thumbnail'])

stream = BytesIO(first_row_group['thumbnail'][0].as_py())
image = Image.open(stream)
```
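The radar bands can be fetched with the same pattern. The sketch below rests on an assumption that is not stated on this card: that the binary `vv`/`vh` columns hold GeoTIFF-encoded rasters (as in other Major-TOM cores) readable with `rasterio`.

```python
# Hedged sketch: the GeoTIFF encoding of the 'vv' column is an assumption, not a documented fact.
from fsspec.parquet import open_parquet_file
import pyarrow.parquet as pq
from rasterio.io import MemoryFile

PARQUET_FILE = 'part_03900'  # parquet number
ROW_INDEX = 42               # row number

url = "https://huggingface.co/datasets/Major-TOM/Core-S1RTC/resolve/main/images/{}.parquet".format(PARQUET_FILE)
with open_parquet_file(url, columns=["vv"]) as f:
    with pq.ParquetFile(f) as pf:
        row_group = pf.read_row_group(ROW_INDEX, columns=["vv"])

with MemoryFile(row_group["vv"][0].as_py()) as mem:
    with mem.open() as src:
        vv = src.read(1)  # 2-D array of linear VV power
```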
## Cite
[](https://arxiv.org/abs/2402.12095/)
```latex
@inproceedings{Major_TOM,
title={Major TOM: Expandable Datasets for Earth Observation},
author={Alistair Francis and Mikolaj Czerkawski},
year={2024},
booktitle={IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium},
eprint={2402.12095},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
Powered by [Φ-lab, European Space Agency (ESA) 🛰️](https://huggingface.co/ESA-philab) |
Omartificial-Intelligence-Space/Arab3M-Triplets | Omartificial-Intelligence-Space | 2024-08-29T12:10:58Z | 45 | 4 | [
"task_categories:sentence-similarity",
"language:ar",
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"STS",
"Embeddings",
"Arabic"
] | [
"sentence-similarity"
] | 2024-07-11T10:24:46Z | 3 | ---
license: apache-2.0
extra_gated_fields:
Name: text
Affilation: text
Company: text
Country: country
Specific date: date_picker
I want to use this dataset for:
type: select
options:
- Research
- Education
- label: Other
value: other
I agree to use this dataset for non-commercial use ONLY: checkbox
task_categories:
- sentence-similarity
language:
- ar
tags:
- STS
- Embeddings
- Arabic
pretty_name: Arab3M-Triplets
size_categories:
- 1M<n<10M
---
# Arab3M-Triplets
This dataset is designed for training and evaluating models using contrastive learning techniques, particularly in the context of natural language understanding. The dataset consists of triplets: an anchor sentence, a positive sentence, and a negative sentence. The goal is to encourage models to learn meaningful representations by distinguishing between semantically similar and dissimilar sentences.
## Dataset Overview
- **Format**: Parquet
- **Number of rows**: 3.03 million
- **Columns**:
- `anchor`: A sentence serving as the reference point.
- `positive`: A sentence that is semantically similar to the `anchor`.
- `negative`: A sentence that is semantically dissimilar to the `anchor`.
## Usage
This dataset can be used to train models for various NLP tasks, including:
- **Sentence Similarity**: Training models to identify sentences with similar meanings.
- **Contrastive Learning**: Teaching models to differentiate between semantically related and unrelated sentences.
- **Representation Learning**: Developing models that learn robust sentence embeddings.
### Loading the Dataset
You can load the dataset using the Hugging Face `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset('Omartificial-Intelligence-Space/Arab3M-Triplets')
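# --- Hedged sketch (not part of the original card) ---------------------------
# Assuming the default 'train' split, the 'anchor'/'positive'/'negative' columns
# can drive a triplet loss directly, e.g. with sentence-transformers. The model
# name, subset size and hyperparameters below are illustrative, not recommendations.
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")
triplets = [
    InputExample(texts=[row["anchor"], row["positive"], row["negative"]])
    for row in dataset["train"].select(range(1000))  # small subset for illustration
]
loader = DataLoader(triplets, shuffle=True, batch_size=16)
model.fit(train_objectives=[(loader, losses.TripletLoss(model))], epochs=1)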
``` |
fixie-ai/covost2 | fixie-ai | 2024-08-27T20:58:08Z | 17,492 | 2 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-07-16T23:40:52Z | null | ---
dataset_info:
- config_name: ar_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 61607709.192
num_examples: 2283
- name: validation
num_bytes: 56223234.024
num_examples: 1758
- name: test
num_bytes: 54650910.41
num_examples: 1695
download_size: 160468333
dataset_size: 172481853.626
- config_name: ca_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 4397026262.322
num_examples: 95854
- name: validation
num_bytes: 544108371.96
num_examples: 12730
- name: test
num_bytes: 604755238.63
num_examples: 12730
download_size: 4957773433
dataset_size: 5545889872.912
- config_name: cy_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 51478765.087
num_examples: 1241
- name: validation
num_bytes: 26992697.0
num_examples: 690
- name: test
num_bytes: 28772216.0
num_examples: 690
download_size: 102604972
dataset_size: 107243678.087
- config_name: de_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 5680326209.222
num_examples: 127834
- name: validation
num_bytes: 631442490.202
num_examples: 13511
- name: test
num_bytes: 637042944.685
num_examples: 13511
download_size: 6490850158
dataset_size: 6948811644.108999
- config_name: en_ar
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14105902817.18
num_examples: 289430
- name: validation
num_bytes: 718527564.808
num_examples: 15531
- name: test
num_bytes: 729114452.301
num_examples: 15531
download_size: 13815709729
dataset_size: 15553544834.289001
- config_name: en_ca
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14099092976.18
num_examples: 289430
- name: validation
num_bytes: 718171719.808
num_examples: 15531
- name: test
num_bytes: 728790610.301
num_examples: 15531
download_size: 13814365593
dataset_size: 15546055306.289001
- config_name: en_cy
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14098487703.18
num_examples: 289430
- name: validation
num_bytes: 718141953.808
num_examples: 15531
- name: test
num_bytes: 728793811.301
num_examples: 15531
download_size: 13813953593
dataset_size: 15545423468.289001
- config_name: en_de
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14099886814.18
num_examples: 289430
- name: validation
num_bytes: 718219105.808
num_examples: 15531
- name: test
num_bytes: 728857067.301
num_examples: 15531
download_size: 13815103686
dataset_size: 15546962987.289001
- config_name: en_et
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14096877545.18
num_examples: 289430
- name: validation
num_bytes: 718057559.808
num_examples: 15531
- name: test
num_bytes: 728710692.301
num_examples: 15531
download_size: 13813410823
dataset_size: 15543645797.289001
- config_name: en_fa
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14108661241.18
num_examples: 289430
- name: validation
num_bytes: 718670909.808
num_examples: 15531
- name: test
num_bytes: 729271000.301
num_examples: 15531
download_size: 13816798013
dataset_size: 15556603151.289001
- config_name: en_id
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14098627451.18
num_examples: 289430
- name: validation
num_bytes: 718144327.808
num_examples: 15531
- name: test
num_bytes: 728802322.301
num_examples: 15531
download_size: 13813201260
dataset_size: 15545574101.289001
- config_name: en_ja
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14103911774.18
num_examples: 289430
- name: validation
num_bytes: 718409304.808
num_examples: 15531
- name: test
num_bytes: 729050991.301
num_examples: 15531
download_size: 13815875328
dataset_size: 15551372070.289001
- config_name: en_lv
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14098703097.18
num_examples: 289430
- name: validation
num_bytes: 718152571.808
num_examples: 15531
- name: test
num_bytes: 728792572.301
num_examples: 15531
download_size: 13814849886
dataset_size: 15545648241.289001
- config_name: en_mn
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14113120657.18
num_examples: 289430
- name: validation
num_bytes: 718940418.808
num_examples: 15531
- name: test
num_bytes: 729461016.301
num_examples: 15531
download_size: 13819427515
dataset_size: 15561522092.289001
- config_name: en_sl
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14097158381.18
num_examples: 289430
- name: validation
num_bytes: 718085673.808
num_examples: 15531
- name: test
num_bytes: 728705188.301
num_examples: 15531
download_size: 13813603812
dataset_size: 15543949243.289001
- config_name: en_sv-SE
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14097728051.18
num_examples: 289430
- name: validation
num_bytes: 718093292.808
num_examples: 15531
- name: test
num_bytes: 728747422.301
num_examples: 15531
download_size: 13813332908
dataset_size: 15544568766.289001
- config_name: en_ta
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14135489205.18
num_examples: 289430
- name: validation
num_bytes: 720191394.808
num_examples: 15531
- name: test
num_bytes: 730578783.301
num_examples: 15531
download_size: 13825121271
dataset_size: 15586259383.289001
- config_name: en_tr
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14098644786.18
num_examples: 289430
- name: validation
num_bytes: 718161996.808
num_examples: 15531
- name: test
num_bytes: 728786654.301
num_examples: 15531
download_size: 13814279798
dataset_size: 15545593437.289001
- config_name: en_zh-CN
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14095661460.18
num_examples: 289430
- name: validation
num_bytes: 717982705.808
num_examples: 15531
- name: test
num_bytes: 728655191.301
num_examples: 15531
download_size: 13812699892
dataset_size: 15542299357.289001
- config_name: es_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: validation
num_bytes: 630615357.241
num_examples: 13221
- name: test
num_bytes: 666447063.067
num_examples: 13221
- name: train
num_bytes: 3769457359.8
num_examples: 79015
download_size: 4531969416
dataset_size: 5066519780.108
- config_name: et_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 97124727.544
num_examples: 1782
- name: validation
num_bytes: 80290798.168
num_examples: 1576
- name: test
num_bytes: 81970364.51
num_examples: 1571
download_size: 257604448
dataset_size: 259385890.222
- config_name: fa_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 1872724297.149
num_examples: 53949
- name: validation
num_bytes: 140067911.23
num_examples: 3445
- name: test
num_bytes: 149319550.35
num_examples: 3445
download_size: 1679853440
dataset_size: 2162111758.729
- config_name: fr_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: validation
num_bytes: 632191608.84
num_examples: 14760
- name: test
num_bytes: 698178059.08
num_examples: 14760
- name: train
num_bytes: 8128016830.77
num_examples: 207374
download_size: 8900934523
dataset_size: 9458386498.69
- config_name: id_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 36136135.768
num_examples: 1243
- name: validation
num_bytes: 25058845.0
num_examples: 792
- name: test
num_bytes: 26577467.0
num_examples: 844
download_size: 86110062
dataset_size: 87772447.768
- config_name: it_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 1517510665.568
num_examples: 31698
- name: validation
num_bytes: 422409218.1
num_examples: 8940
- name: test
num_bytes: 454569171.595
num_examples: 8951
download_size: 2125529183
dataset_size: 2394489055.2630005
- config_name: ja_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 38181610.624
num_examples: 1119
- name: validation
num_bytes: 24623052.0
num_examples: 635
- name: test
num_bytes: 25558787.0
num_examples: 684
download_size: 88228548
dataset_size: 88363449.624
- config_name: lv_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 66152116.328
num_examples: 2337
- name: validation
num_bytes: 32655276.0
num_examples: 1125
- name: test
num_bytes: 50997551.638
num_examples: 1629
download_size: 137700207
dataset_size: 149804943.96600002
- config_name: mn_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 87891433.547
num_examples: 2067
- name: validation
num_bytes: 77519039.943
num_examples: 1761
- name: test
num_bytes: 83667460.167
num_examples: 1759
download_size: 242638800
dataset_size: 249077933.657
- config_name: nl_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 216102081.4
num_examples: 7108
- name: validation
num_bytes: 55386349.319
num_examples: 1699
- name: test
num_bytes: 60219179.711
num_examples: 1699
download_size: 320267264
dataset_size: 331707610.43
- config_name: pt_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 274723273.528
num_examples: 9158
- name: validation
num_bytes: 118345891.704
num_examples: 3318
- name: test
num_bytes: 166247624.001
num_examples: 4023
download_size: 540891735
dataset_size: 559316789.233
- config_name: ru_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 557219472.672
num_examples: 12112
- name: validation
num_bytes: 290218427.6
num_examples: 6110
- name: test
num_bytes: 312622838.0
num_examples: 6300
download_size: 1112848246
dataset_size: 1160060738.272
- config_name: sl_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 55992153.0
num_examples: 1843
- name: validation
num_bytes: 15074155.0
num_examples: 509
- name: test
num_bytes: 10209711.0
num_examples: 360
download_size: 83863293
dataset_size: 81276019.0
- config_name: sv-SE_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 48298330.64
num_examples: 2160
- name: validation
num_bytes: 32544646.416
num_examples: 1349
- name: test
num_bytes: 46894324.615
num_examples: 1595
download_size: 121860373
dataset_size: 127737301.671
- config_name: ta_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 47757197.616
num_examples: 1358
- name: validation
num_bytes: 13670695.0
num_examples: 384
- name: test
num_bytes: 29891516.0
num_examples: 786
download_size: 87791516
dataset_size: 91319408.616
- config_name: tr_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 119299427.798
num_examples: 3966
- name: validation
num_bytes: 52552534.232
num_examples: 1624
- name: test
num_bytes: 59106253.862
num_examples: 1629
download_size: 224018260
dataset_size: 230958215.89200002
- config_name: zh-CN_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: validation
num_bytes: 231018998.33
num_examples: 4843
- name: test
num_bytes: 243850956.45
num_examples: 4898
- name: train
num_bytes: 341425113.6
num_examples: 7085
download_size: 766660661
dataset_size: 816295068.38
configs:
- config_name: ar_en
data_files:
- split: train
path: ar_en/train-*
- split: validation
path: ar_en/validation-*
- split: test
path: ar_en/test-*
- config_name: ca_en
data_files:
- split: train
path: ca_en/train-*
- split: validation
path: ca_en/validation-*
- split: test
path: ca_en/test-*
- config_name: cy_en
data_files:
- split: train
path: cy_en/train-*
- split: validation
path: cy_en/validation-*
- split: test
path: cy_en/test-*
- config_name: de_en
data_files:
- split: train
path: de_en/train-*
- split: validation
path: de_en/validation-*
- split: test
path: de_en/test-*
- config_name: en_ar
data_files:
- split: train
path: en_ar/train-*
- split: validation
path: en_ar/validation-*
- split: test
path: en_ar/test-*
- config_name: en_ca
data_files:
- split: train
path: en_ca/train-*
- split: validation
path: en_ca/validation-*
- split: test
path: en_ca/test-*
- config_name: en_cy
data_files:
- split: train
path: en_cy/train-*
- split: validation
path: en_cy/validation-*
- split: test
path: en_cy/test-*
- config_name: en_de
data_files:
- split: train
path: en_de/train-*
- split: validation
path: en_de/validation-*
- split: test
path: en_de/test-*
- config_name: en_et
data_files:
- split: train
path: en_et/train-*
- split: validation
path: en_et/validation-*
- split: test
path: en_et/test-*
- config_name: en_fa
data_files:
- split: train
path: en_fa/train-*
- split: validation
path: en_fa/validation-*
- split: test
path: en_fa/test-*
- config_name: en_id
data_files:
- split: train
path: en_id/train-*
- split: validation
path: en_id/validation-*
- split: test
path: en_id/test-*
- config_name: en_ja
data_files:
- split: train
path: en_ja/train-*
- split: validation
path: en_ja/validation-*
- split: test
path: en_ja/test-*
- config_name: en_lv
data_files:
- split: train
path: en_lv/train-*
- split: validation
path: en_lv/validation-*
- split: test
path: en_lv/test-*
- config_name: en_mn
data_files:
- split: train
path: en_mn/train-*
- split: validation
path: en_mn/validation-*
- split: test
path: en_mn/test-*
- config_name: en_sl
data_files:
- split: train
path: en_sl/train-*
- split: validation
path: en_sl/validation-*
- split: test
path: en_sl/test-*
- config_name: en_sv-SE
data_files:
- split: train
path: en_sv-SE/train-*
- split: validation
path: en_sv-SE/validation-*
- split: test
path: en_sv-SE/test-*
- config_name: en_ta
data_files:
- split: train
path: en_ta/train-*
- split: validation
path: en_ta/validation-*
- split: test
path: en_ta/test-*
- config_name: en_tr
data_files:
- split: train
path: en_tr/train-*
- split: validation
path: en_tr/validation-*
- split: test
path: en_tr/test-*
- config_name: en_zh-CN
data_files:
- split: train
path: en_zh-CN/train-*
- split: validation
path: en_zh-CN/validation-*
- split: test
path: en_zh-CN/test-*
- config_name: es_en
data_files:
- split: validation
path: es_en/validation-*
- split: test
path: es_en/test-*
- split: train
path: es_en/train-*
- config_name: et_en
data_files:
- split: train
path: et_en/train-*
- split: validation
path: et_en/validation-*
- split: test
path: et_en/test-*
- config_name: fa_en
data_files:
- split: train
path: fa_en/train-*
- split: validation
path: fa_en/validation-*
- split: test
path: fa_en/test-*
- config_name: fr_en
data_files:
- split: validation
path: fr_en/validation-*
- split: test
path: fr_en/test-*
- split: train
path: fr_en/train-*
- config_name: id_en
data_files:
- split: train
path: id_en/train-*
- split: validation
path: id_en/validation-*
- split: test
path: id_en/test-*
- config_name: it_en
data_files:
- split: train
path: it_en/train-*
- split: validation
path: it_en/validation-*
- split: test
path: it_en/test-*
- config_name: ja_en
data_files:
- split: train
path: ja_en/train-*
- split: validation
path: ja_en/validation-*
- split: test
path: ja_en/test-*
- config_name: lv_en
data_files:
- split: train
path: lv_en/train-*
- split: validation
path: lv_en/validation-*
- split: test
path: lv_en/test-*
- config_name: mn_en
data_files:
- split: train
path: mn_en/train-*
- split: validation
path: mn_en/validation-*
- split: test
path: mn_en/test-*
- config_name: nl_en
data_files:
- split: train
path: nl_en/train-*
- split: validation
path: nl_en/validation-*
- split: test
path: nl_en/test-*
- config_name: pt_en
data_files:
- split: train
path: pt_en/train-*
- split: validation
path: pt_en/validation-*
- split: test
path: pt_en/test-*
- config_name: ru_en
data_files:
- split: train
path: ru_en/train-*
- split: validation
path: ru_en/validation-*
- split: test
path: ru_en/test-*
- config_name: sl_en
data_files:
- split: train
path: sl_en/train-*
- split: validation
path: sl_en/validation-*
- split: test
path: sl_en/test-*
- config_name: sv-SE_en
data_files:
- split: train
path: sv-SE_en/train-*
- split: validation
path: sv-SE_en/validation-*
- split: test
path: sv-SE_en/test-*
- config_name: ta_en
data_files:
- split: train
path: ta_en/train-*
- split: validation
path: ta_en/validation-*
- split: test
path: ta_en/test-*
- config_name: tr_en
data_files:
- split: train
path: tr_en/train-*
- split: validation
path: tr_en/validation-*
- split: test
path: tr_en/test-*
- config_name: zh-CN_en
data_files:
- split: validation
path: zh-CN_en/validation-*
- split: test
path: zh-CN_en/test-*
- split: train
path: zh-CN_en/train-*
---
This is a partial copy of [CoVoST2](https://huggingface.co/datasets/facebook/covost2) dataset.
The main difference is that the audio data is included in the dataset, which makes usage easier and allows browsing the samples using HF Dataset Viewer.
The limitation of this method is that all audio samples of the `EN_XX` subsets are duplicated, which makes the dataset larger.
As such, not all the data is included: Only the `validation` and `test` subsets are available.
From the `XX_EN` subsets, only `fr`, `es`, and `zh-CN` are included. |
google/fleurs | google | 2024-08-25T05:03:32Z | 27,911 | 287 | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"language:afr",
"language:amh",
"language:ara",
"language:asm",
"language:ast",
"language:azj",
"language:bel",
"language:ben",
"language:bos",
"language:cat",
"language:ceb",
"language:cmn",
"language:ces",
"language:cym",
"language:dan",
"language:deu",
"language:ell",
"language:eng",
"language:spa",
"language:est",
"language:fas",
"language:ful",
"language:fin",
"language:tgl",
"language:fra",
"language:gle",
"language:glg",
"language:guj",
"language:hau",
"language:heb",
"language:hin",
"language:hrv",
"language:hun",
"language:hye",
"language:ind",
"language:ibo",
"language:isl",
"language:ita",
"language:jpn",
"language:jav",
"language:kat",
"language:kam",
"language:kea",
"language:kaz",
"language:khm",
"language:kan",
"language:kor",
"language:ckb",
"language:kir",
"language:ltz",
"language:lug",
"language:lin",
"language:lao",
"language:lit",
"language:luo",
"language:lav",
"language:mri",
"language:mkd",
"language:mal",
"language:mon",
"language:mar",
"language:msa",
"language:mlt",
"language:mya",
"language:nob",
"language:npi",
"language:nld",
"language:nso",
"language:nya",
"language:oci",
"language:orm",
"language:ory",
"language:pan",
"language:pol",
"language:pus",
"language:por",
"language:ron",
"language:rus",
"language:bul",
"language:snd",
"language:slk",
"language:slv",
"language:sna",
"language:som",
"language:srp",
"language:swe",
"language:swh",
"language:tam",
"language:tel",
"language:tgk",
"language:tha",
"language:tur",
"language:ukr",
"language:umb",
"language:urd",
"language:uzb",
"language:vie",
"language:wol",
"language:xho",
"language:yor",
"language:yue",
"language:zul",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"arxiv:2205.12446",
"arxiv:2106.03193",
"region:us",
"speech-recognition"
] | [
"automatic-speech-recognition"
] | 2022-04-19T10:25:58Z | null | ---
annotations_creators:
- expert-generated
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- expert-generated
language:
- afr
- amh
- ara
- asm
- ast
- azj
- bel
- ben
- bos
- cat
- ceb
- cmn
- ces
- cym
- dan
- deu
- ell
- eng
- spa
- est
- fas
- ful
- fin
- tgl
- fra
- gle
- glg
- guj
- hau
- heb
- hin
- hrv
- hun
- hye
- ind
- ibo
- isl
- ita
- jpn
- jav
- kat
- kam
- kea
- kaz
- khm
- kan
- kor
- ckb
- kir
- ltz
- lug
- lin
- lao
- lit
- luo
- lav
- mri
- mkd
- mal
- mon
- mar
- msa
- mlt
- mya
- nob
- npi
- nld
- nso
- nya
- oci
- orm
- ory
- pan
- pol
- pus
- por
- ron
- rus
- bul
- snd
- slk
- slv
- sna
- som
- srp
- swe
- swh
- tam
- tel
- tgk
- tha
- tur
- ukr
- umb
- urd
- uzb
- vie
- wol
- xho
- yor
- yue
- zul
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
task_categories:
- automatic-speech-recognition
task_ids: []
pretty_name: 'The Cross-lingual TRansfer Evaluation of Multilingual Encoders for Speech
(XTREME-S) benchmark is a benchmark designed to evaluate speech representations
across languages, tasks, domains and data regimes. It covers 102 languages from
10+ language families, 3 different domains and 4 task families: speech recognition,
translation, classification and retrieval.'
tags:
- speech-recognition
---
# FLEURS
## Dataset Description
- **Fine-Tuning script:** [pytorch/speech-recognition](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition)
- **Paper:** [FLEURS: Few-shot Learning Evaluation of
Universal Representations of Speech](https://arxiv.org/abs/2205.12446)
- **Total amount of disk used:** ca. 350 GB
Fleurs is the speech version of the [FLoRes machine translation benchmark](https://arxiv.org/abs/2106.03193).
We use 2009 n-way parallel sentences from the FLoRes dev and devtest publicly available sets, in 102 languages.
Training sets have around 10 hours of supervision. Speakers of the train sets are different from speakers of the dev/test sets. Multilingual fine-tuning is
used, and the "unit error rate" (characters, signs) of all languages is averaged (a short example of computing this metric follows the list below). Languages and results are also grouped into seven geographical areas:
- **Western Europe**: *Asturian, Bosnian, Catalan, Croatian, Danish, Dutch, English, Finnish, French, Galician, German, Greek, Hungarian, Icelandic, Irish, Italian, Kabuverdianu, Luxembourgish, Maltese, Norwegian, Occitan, Portuguese, Spanish, Swedish, Welsh*
- **Eastern Europe**: *Armenian, Belarusian, Bulgarian, Czech, Estonian, Georgian, Latvian, Lithuanian, Macedonian, Polish, Romanian, Russian, Serbian, Slovak, Slovenian, Ukrainian*
- **Central-Asia/Middle-East/North-Africa**: *Arabic, Azerbaijani, Hebrew, Kazakh, Kyrgyz, Mongolian, Pashto, Persian, Sorani-Kurdish, Tajik, Turkish, Uzbek*
- **Sub-Saharan Africa**: *Afrikaans, Amharic, Fula, Ganda, Hausa, Igbo, Kamba, Lingala, Luo, Northern-Sotho, Nyanja, Oromo, Shona, Somali, Swahili, Umbundu, Wolof, Xhosa, Yoruba, Zulu*
- **South-Asia**: *Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Nepali, Oriya, Punjabi, Sindhi, Tamil, Telugu, Urdu*
- **South-East Asia**: *Burmese, Cebuano, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Maori, Thai, Vietnamese*
- **CJK languages**: *Cantonese and Mandarin Chinese, Japanese, Korean*
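As a concrete illustration of the character-level "unit error rate" mentioned above, here is a hedged sketch using the `cer` metric from the `evaluate` library; the prediction and reference strings are made up for illustration and are not taken from the corpus.

```python
# Hedged sketch: character error rate as one instance of the "unit error rate".
import evaluate

cer = evaluate.load("cer")  # requires the jiwer package
score = cer.compute(
    predictions=["dit is nog nie bekend nie"],            # illustrative hypothesis
    references=["dit is nog nie huidiglik bekend nie"],   # illustrative reference
)
print(score)  # fraction of character-level edits
```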
## How to use & Supported Tasks
### How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the Hindi config, simply specify the corresponding language config name (i.e., "hi_in" for Hindi):
```python
from datasets import load_dataset
fleurs = load_dataset("google/fleurs", "hi_in", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
fleurs = load_dataset("google/fleurs", "hi_in", split="train", streaming=True)
print(next(iter(fleurs)))
```
*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
Local:
```python
from datasets import load_dataset
from torch.utils.data.sampler import BatchSampler, RandomSampler
from torch.utils.data import DataLoader
fleurs = load_dataset("google/fleurs", "hi_in", split="train")
batch_sampler = BatchSampler(RandomSampler(fleurs), batch_size=32, drop_last=False)
dataloader = DataLoader(fleurs, batch_sampler=batch_sampler)
```
Streaming:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
fleurs = load_dataset("google/fleurs", "hi_in", split="train", streaming=True)
dataloader = DataLoader(fleurs, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
### Example scripts
Train your own CTC or Seq2Seq Automatic Speech Recognition models on FLEURS with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
Fine-tune your own Language Identification models on FLEURS with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification)
### 1. Speech Recognition (ASR)
```py
from datasets import load_dataset
fleurs_asr = load_dataset("google/fleurs", "af_za") # for Afrikaans
# to download all data for multi-lingual fine-tuning uncomment following line
# fleurs_asr = load_dataset("google/fleurs", "all")
# see structure
print(fleurs_asr)
# load audio sample on the fly
audio_input = fleurs_asr["train"][0]["audio"] # first decoded audio sample
transcription = fleurs_asr["train"][0]["transcription"] # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR
# for analyses see language groups
all_language_groups = fleurs_asr["train"].features["lang_group_id"].names
lang_group_id = fleurs_asr["train"][0]["lang_group_id"]
all_language_groups[lang_group_id]
```
### 2. Language Identification
LangID can often be a domain classification, but in the case of FLEURS-LangID, recordings are done in a similar setting across languages and the utterances correspond to n-way parallel sentences, in the exact same domain, making this task particularly relevant for evaluating LangID. The setting is simple: FLEURS-LangID is split into train/valid/test for each language. We simply create a single train/valid/test split for LangID by merging all of them.
```py
from datasets import load_dataset
fleurs_langID = load_dataset("google/fleurs", "all") # to download all data
# see structure
print(fleurs_langID)
# load audio sample on the fly
audio_input = fleurs_langID["train"][0]["audio"] # first decoded audio sample
language_class = fleurs_langID["train"][0]["lang_id"] # first id class
language = fleurs_langID["train"].features["lang_id"].names[language_class]
# use audio_input and language_class to fine-tune your model for audio classification
```
### 3. Retrieval
Retrieval provides n-way parallel speech and text data. Similar to how XTREME for text leverages Tatoeba to evaluate bitext mining, a.k.a. sentence translation retrieval, we use Retrieval to evaluate the quality of fixed-size representations of speech utterances. Our goal is to incentivize the creation of fixed-size speech encoders for speech retrieval. The system has to retrieve the English "key" utterance corresponding to the speech translation of "queries" in 15 languages. Results have to be reported on the test sets of Retrieval, whose utterances are used as queries (and keys for English). We augment the English keys with a large number of utterances to make the task more difficult.
```py
from datasets import load_dataset
fleurs_retrieval = load_dataset("google/fleurs", "af_za") # for Afrikaans
# to download all data for multi-lingual fine-tuning uncomment following line
# fleurs_retrieval = load_dataset("google/fleurs", "all")
# see structure
print(fleurs_retrieval)
# load audio sample on the fly
audio_input = fleurs_retrieval["train"][0]["audio"] # decoded audio sample
text_sample_pos = fleurs_retrieval["train"][0]["transcription"] # positive text sample
text_sample_neg = fleurs_retrieval["train"][1:20]["transcription"] # negative text samples
# use `audio_input`, `text_sample_pos`, and `text_sample_neg` to fine-tune your model for retrieval
```
Users can leverage the training (and dev) sets of FLEURS-Retrieval with a ranking loss to build better cross-lingual fixed-size representations of speech.
## Dataset Structure
We show detailed information for the example configuration `af_za` of the dataset.
All other configurations have the same structure.
### Data Instances
**af_za**
- Size of downloaded dataset files: 1.47 GB
- Size of the generated dataset: 1 MB
- Total amount of disk used: 1.47 GB
An example of a data instance of the config `af_za` looks as follows:
```
{'id': 91,
'num_samples': 385920,
'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/310a663d52322700b3d3473cbc5af429bd92a23f9bc683594e70bc31232db39e/home/vaxelrod/FLEURS/oss2_obfuscated/af_za/audio/train/17797742076841560615.wav',
'audio': {'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/310a663d52322700b3d3473cbc5af429bd92a23f9bc683594e70bc31232db39e/home/vaxelrod/FLEURS/oss2_obfuscated/af_za/audio/train/17797742076841560615.wav',
'array': array([ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00, ...,
-1.1205673e-04, -8.4638596e-05, -1.2731552e-04], dtype=float32),
'sampling_rate': 16000},
'raw_transcription': 'Dit is nog nie huidiglik bekend watter aantygings gemaak sal word of wat owerhede na die seun gelei het nie maar jeugmisdaad-verrigtinge het in die federale hof begin',
'transcription': 'dit is nog nie huidiglik bekend watter aantygings gemaak sal word of wat owerhede na die seun gelei het nie maar jeugmisdaad-verrigtinge het in die federale hof begin',
'gender': 0,
'lang_id': 0,
'language': 'Afrikaans',
'lang_group_id': 3}
```
### Data Fields
The data fields are the same among all splits.
- **id** (int): ID of audio sample
- **num_samples** (int): Number of float values
- **path** (str): Path to the audio file
- **audio** (dict): Audio object including the loaded audio array, sampling rate and path to the audio file
- **raw_transcription** (str): The non-normalized transcription of the audio file
- **transcription** (str): Transcription of the audio file
- **gender** (int): Class id of gender
- **lang_id** (int): Class id of language
- **lang_group_id** (int): Class id of language group
### Data Splits
Every config has a `"train"` split containing *ca.* 1000 examples, and `"validation"` and `"test"` splits each containing *ca.* 400 examples.
## Dataset Creation
We collect between one and three recordings for each sentence (2.3 on average), and build new train-dev-test splits with 1509, 150 and 350 sentences for
train, dev and test respectively.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is meant to encourage the development of speech technology in a lot more languages of the world. One of the goal is to give equal access to technologies like speech recognition or speech translation to everyone, meaning better dubbing or better access to content from the internet (like podcasts, streaming or videos).
### Discussion of Biases
Most datasets have a fair distribution of gender utterances (e.g. the newly introduced FLEURS dataset). While many languages are covered from various regions of the world, the benchmark misses many languages that are all equally important. We believe technology built through FLEURS should generalize to all languages.
### Other Known Limitations
The dataset has a particular focus on read-speech because common evaluation benchmarks like CoVoST-2 or LibriSpeech evaluate on this type of speech. There is sometimes a known mismatch between performance obtained in a read-speech setting and a more noisy setting (in production for instance). Given the big progress that remains to be made on many languages, we believe better performance on FLEURS should still correlate well with actual progress made for speech understanding.
## Additional Information
All datasets are licensed under the [Creative Commons license (CC-BY)](https://creativecommons.org/licenses/).
### Citation Information
You can access the FLEURS paper at https://arxiv.org/abs/2205.12446.
Please cite the paper when referencing the FLEURS corpus as:
```
@article{fleurs2022arxiv,
title = {FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech},
author = {Conneau, Alexis and Ma, Min and Khanuja, Simran and Zhang, Yu and Axelrod, Vera and Dalmia, Siddharth and Riesa, Jason and Rivera, Clara and Bapna, Ankur},
journal={arXiv preprint arXiv:2205.12446},
url = {https://arxiv.org/abs/2205.12446},
  year = {2022},
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) and [@aconneau](https://github.com/aconneau) for adding this dataset.
|
gksriharsha/chitralekha | gksriharsha | 2024-08-23T23:00:03Z | 160,664 | 4 | [
"task_categories:image-to-text",
"language:te",
"license:mit",
"size_categories:10M<n<100M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"doi:10.57967/hf/3403",
"region:us"
] | [
"image-to-text"
] | 2023-11-29T14:31:24Z | null | ---
dataset_info:
- config_name: Dhurjati
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1298445060.3780885
num_examples: 475834
- name: validation
num_bytes: 432816839.3109558
num_examples: 158612
- name: test
num_bytes: 432816839.3109558
num_examples: 158612
download_size: 2214924048
dataset_size: 2164078739
- config_name: Gidugu
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1282865192.8855712
num_examples: 476265
- name: validation
num_bytes: 427624424.55721444
num_examples: 158756
- name: test
num_bytes: 427624424.55721444
num_examples: 158756
download_size: 2189311335
dataset_size: 2138114042.0000002
- config_name: Gurajada
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1387146264.0840201
num_examples: 474742
- name: validation
num_bytes: 462384035.9579899
num_examples: 158248
- name: test
num_bytes: 462384035.9579899
num_examples: 158248
download_size: 2343396240
dataset_size: 2311914336
- config_name: Mallanna
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1501113970.3809116
num_examples: 476159
- name: validation
num_bytes: 500372374.30954427
num_examples: 158720
- name: test
num_bytes: 500372374.30954427
num_examples: 158720
download_size: 2502257967
dataset_size: 2501858719
- config_name: Mandali-Regular
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1473975690.6129284
num_examples: 472433
- name: validation
num_bytes: 491326270.19353586
num_examples: 157478
- name: test
num_bytes: 491326270.19353586
num_examples: 157478
download_size: 2457756020
dataset_size: 2456628231
- config_name: NATS
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1356797141.105923
num_examples: 473392
- name: validation
num_bytes: 452267624.4470385
num_examples: 157798
- name: test
num_bytes: 452267624.4470385
num_examples: 157798
download_size: 2303879039
dataset_size: 2261332390
- config_name: NTR
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1574367624.5834982
num_examples: 473991
- name: validation
num_bytes: 524792529.7082509
num_examples: 157998
- name: test
num_bytes: 524792529.7082509
num_examples: 157998
download_size: 2615211115
dataset_size: 2623952684
- config_name: NotoSansTelugu-Bold
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1752162695.265523
num_examples: 476930
- name: validation
num_bytes: 584055456.3672385
num_examples: 158977
- name: test
num_bytes: 584055456.3672385
num_examples: 158977
download_size: 2904018741
dataset_size: 2920273608
- config_name: NotoSansTelugu-Regular
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1718034768.894641
num_examples: 478227
- name: validation
num_bytes: 572678256.2982136
num_examples: 159409
- name: test
num_bytes: 572681848.8071454
num_examples: 159410
download_size: 2848500410
dataset_size: 2863394874
- config_name: NotoSansTeluguUI-Bold
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1750230388.4259622
num_examples: 476148
- name: validation
num_bytes: 583413805.2870189
num_examples: 158717
- name: test
num_bytes: 583413805.2870189
num_examples: 158717
download_size: 2901117051
dataset_size: 2917057999
- config_name: NotoSansTeluguUI-Regular
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1723039562.5891204
num_examples: 477735
- name: validation
num_bytes: 574346520.8630401
num_examples: 159245
- name: test
num_bytes: 574350127.5478394
num_examples: 159246
download_size: 2856472137
dataset_size: 2871736211
- config_name: NotoSerifTelugu-VariableFont_wght
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1615401522.415037
num_examples: 475403
- name: validation
num_bytes: 538468306.7924815
num_examples: 158468
- name: test
num_bytes: 538468306.7924815
num_examples: 158468
download_size: 2684117723
dataset_size: 2692338136
- config_name: Pothana2000
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1533893192.4
num_examples: 474486
- name: validation
num_bytes: 511297730.8
num_examples: 158162
- name: test
num_bytes: 511297730.8
num_examples: 158162
download_size: 2546261970
dataset_size: 2556488654
- config_name: Ramabhadra1
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1356669137.4
num_examples: 477120
- name: validation
num_bytes: 452223045.8
num_examples: 159040
- name: test
num_bytes: 452223045.8
num_examples: 159040
download_size: 2293250323
dataset_size: 2261115229
- config_name: RamaneeyaWin
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1569779237.530234
num_examples: 475390
- name: validation
num_bytes: 523261947.23488295
num_examples: 158464
- name: test
num_bytes: 523261947.23488295
num_examples: 158464
download_size: 2609295282
dataset_size: 2616303132
- config_name: Ramaraja-Regular
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1410891933.3096473
num_examples: 472584
- name: validation
num_bytes: 470297311.1032158
num_examples: 157528
- name: test
num_bytes: 470300296.5871368
num_examples: 157529
download_size: 2371358480
dataset_size: 2351489541
- config_name: Suguna
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1446982722.6
num_examples: 477066
- name: validation
num_bytes: 482327574.2
num_examples: 159022
- name: test
num_bytes: 482327574.2
num_examples: 159022
download_size: 2415257732
dataset_size: 2411637871
- config_name: Suranna
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1503599948.8440886
num_examples: 474592
- name: validation
num_bytes: 501202095.07795566
num_examples: 158198
- name: test
num_bytes: 501202095.07795566
num_examples: 158198
download_size: 2506994404
dataset_size: 2506004139
- config_name: Suravara_Samhita
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1558595237.4
num_examples: 474537
- name: validation
num_bytes: 519531745.8
num_examples: 158179
- name: test
num_bytes: 519531745.8
num_examples: 158179
download_size: 2585415226
dataset_size: 2597658729
- config_name: Suravara_Swarna
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1486359795.6
num_examples: 475680
- name: validation
num_bytes: 495453265.2
num_examples: 158560
- name: test
num_bytes: 495453265.2
num_examples: 158560
download_size: 2475591226
dataset_size: 2477266326
- config_name: Suravara_Swarna_bold
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1720811516.4
num_examples: 478134
- name: validation
num_bytes: 573603838.8
num_examples: 159378
- name: test
num_bytes: 573603838.8
num_examples: 159378
download_size: 2850593671
dataset_size: 2868019194
- config_name: Suravara_Swarna_italic
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1447766013.2634926
num_examples: 479031
- name: validation
num_bytes: 482591693.36825377
num_examples: 159678
- name: test
num_bytes: 482591693.36825377
num_examples: 159678
download_size: 2422412589
dataset_size: 2412949400
- config_name: Suravaram
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1429147481.2187955
num_examples: 477026
- name: validation
num_bytes: 476383492.3906023
num_examples: 159009
- name: test
num_bytes: 476383492.3906023
num_examples: 159009
download_size: 4809669330
dataset_size: 2381914466
- config_name: TLOTAmmaBI_ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2460661581.730414
num_examples: 475658
- name: validation
num_bytes: 820222251.6347929
num_examples: 158553
- name: test
num_bytes: 820222251.6347929
num_examples: 158553
download_size: 4096792615
dataset_size: 4101106084.9999995
- config_name: TLOTAmmaB_ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2416168779.915695
num_examples: 477459
- name: validation
num_bytes: 805389593.3052317
num_examples: 159153
- name: test
num_bytes: 805394653.7790732
num_examples: 159154
download_size: 4021858976
dataset_size: 4026953027
- config_name: TLOTAmmaI_ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2477661003.4358616
num_examples: 472795
- name: validation
num_bytes: 825890494.7820691
num_examples: 157599
- name: test
num_bytes: 825890494.7820691
num_examples: 157599
download_size: 4125584249
dataset_size: 4129441993
- config_name: TLOTAmmaN_ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2433593183.980863
num_examples: 476750
- name: validation
num_bytes: 811199429.5095686
num_examples: 158917
- name: test
num_bytes: 811199429.5095686
num_examples: 158917
download_size: 4050885257
dataset_size: 4055992043.0000005
- config_name: TLOTAmrutaBI_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2653406725.2
num_examples: 475320
- name: validation
num_bytes: 884468908.4
num_examples: 158440
- name: test
num_bytes: 884468908.4
num_examples: 158440
download_size: 4422612970
dataset_size: 4422344542
- config_name: TLOTAmrutaB_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2636543466.6297607
num_examples: 474288
- name: validation
num_bytes: 878847822.2099203
num_examples: 158096
- name: test
num_bytes: 878853381.1603189
num_examples: 158097
download_size: 4393963744
dataset_size: 4394244670
- config_name: TLOTAtreyaBI_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1920072146.440807
num_examples: 476571
- name: validation
num_bytes: 640024048.8136024
num_examples: 158857
- name: test
num_bytes: 640028077.7455903
num_examples: 158858
download_size: 3187176178
dataset_size: 3200124273
- config_name: TLOTAtreyaB_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1468763709.6
num_examples: 477087
- name: validation
num_bytes: 489587903.2
num_examples: 159029
- name: test
num_bytes: 489587903.2
num_examples: 159029
download_size: 2463733719
dataset_size: 2447939516
- config_name: TLOTAtreyaI_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2031521130
num_examples: 478089
- name: validation
num_bytes: 677173710
num_examples: 159363
- name: test
num_bytes: 677173710
num_examples: 159363
download_size: 3373208127
dataset_size: 3385868550
- config_name: TLOTAtreyaN_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1499893860.1101012
num_examples: 475416
- name: validation
num_bytes: 499967774.9449494
num_examples: 158473
- name: test
num_bytes: 499967774.9449494
num_examples: 158473
download_size: 2503688455
dataset_size: 2499829410
- config_name: TLOTChandanaBI_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2570736110.0222764
num_examples: 477280
- name: validation
num_bytes: 856915627.4888619
num_examples: 159094
- name: test
num_bytes: 856915627.4888619
num_examples: 159094
download_size: 8582881701
dataset_size: 4284567365.000001
- config_name: TLOTChandanaB_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2573995646.187106
num_examples: 477970
- name: validation
num_bytes: 858002138.906447
num_examples: 159324
- name: test
num_bytes: 858002138.906447
num_examples: 159324
download_size: 4287747645
dataset_size: 4289999924
- config_name: TLOTDevaI_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2480881369.494744
num_examples: 474412
- name: validation
num_bytes: 826963942.7526281
num_examples: 158138
- name: test
num_bytes: 826963942.7526281
num_examples: 158138
download_size: 4131458823
dataset_size: 4134809255
- config_name: TLOTDevaN_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2500855833.517619
num_examples: 477159
- name: validation
num_bytes: 833618611.1725397
num_examples: 159053
- name: test
num_bytes: 833623852.309841
num_examples: 159054
download_size: 4164760790
dataset_size: 4168098297
- config_name: TLOTDraupadiBI_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2323911850.2
num_examples: 476610
- name: validation
num_bytes: 774637283.4
num_examples: 158870
- name: test
num_bytes: 774637283.4
num_examples: 158870
download_size: 3866617083
dataset_size: 3873186417
- config_name: TLOTDraupadiB_ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2307940549.6171513
num_examples: 479856
- name: validation
num_bytes: 769318326.1914245
num_examples: 159953
- name: test
num_bytes: 769318326.1914245
num_examples: 159953
download_size: 3839262612
dataset_size: 3846577202
- config_name: TLOTDraupadiI_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2544743977.8577175
num_examples: 476149
- name: validation
num_bytes: 848251555.5711412
num_examples: 158717
- name: test
num_bytes: 848251555.5711412
num_examples: 158717
download_size: 4239804725
dataset_size: 4241247089
- config_name: TLOTDraupadiN_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2541474368.49558
num_examples: 475408
- name: validation
num_bytes: 847161686.7522099
num_examples: 158470
- name: test
num_bytes: 847161686.7522099
num_examples: 158470
download_size: 4234310229
dataset_size: 4235797742
- config_name: TLOTGolkondaBI_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2389702278.805238
num_examples: 474540
- name: validation
num_bytes: 796572462.0973812
num_examples: 158181
- name: test
num_bytes: 796572462.0973812
num_examples: 158181
download_size: 3977928852
dataset_size: 3982847203
- config_name: TLOTGolkondaB_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2389122371.711336
num_examples: 475805
- name: validation
num_bytes: 796375797.6443319
num_examples: 158602
- name: test
num_bytes: 796375797.6443319
num_examples: 158602
download_size: 3977251991
dataset_size: 3981873967
- config_name: TLOTKrishnaB_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2432774526.539302
num_examples: 476300
- name: validation
num_bytes: 810926544.7303492
num_examples: 158767
- name: test
num_bytes: 810926544.7303492
num_examples: 158767
download_size: 4050283714
dataset_size: 4054627616
- config_name: TLOTKrishnaI_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2480494107.7215586
num_examples: 476670
- name: validation
num_bytes: 826831369.2405195
num_examples: 158890
- name: test
num_bytes: 826836573.0379218
num_examples: 158891
download_size: 4130987632
dataset_size: 4134162050
- config_name: TLOTKrishnaN_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2476823323.4861865
num_examples: 474258
- name: validation
num_bytes: 825607774.4953955
num_examples: 158086
- name: test
num_bytes: 825612997.0184178
num_examples: 158087
download_size: 8245933584
dataset_size: 4128044095
- config_name: TLOTManuBI_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2416789011.099815
num_examples: 479831
- name: validation
num_bytes: 805598015.9500924
num_examples: 159944
- name: test
num_bytes: 805598015.9500924
num_examples: 159944
download_size: 8022091215
dataset_size: 4027985042.9999995
- config_name: TLOTManuB_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2401248706.737913
num_examples: 476523
- name: validation
num_bytes: 800416235.5793043
num_examples: 158841
- name: test
num_bytes: 800421274.6827825
num_examples: 158842
download_size: 3996692334
dataset_size: 4002086217
- config_name: TLOTManuI_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2172777272.108018
num_examples: 474666
- name: validation
num_bytes: 724259090.7026726
num_examples: 158222
- name: test
num_bytes: 724263668.1893097
num_examples: 158223
download_size: 3613125844
dataset_size: 3621300031
- config_name: TLOTManuN_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2157988564.914396
num_examples: 473253
- name: validation
num_bytes: 719334081.5428022
num_examples: 157752
- name: test
num_bytes: 719334081.5428022
num_examples: 157752
download_size: 3588254209
dataset_size: 3596656728.0000005
- config_name: TLOTMenakaBI_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2288615615.2453403
num_examples: 476286
- name: validation
num_bytes: 762876676.87733
num_examples: 158763
- name: test
num_bytes: 762876676.87733
num_examples: 158763
download_size: 3808214919
dataset_size: 3814368969
- config_name: TLOTMenakaB_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2265423732.440631
num_examples: 476485
- name: validation
num_bytes: 755144413.7796845
num_examples: 158829
- name: test
num_bytes: 755144413.7796845
num_examples: 158829
download_size: 7528268200
dataset_size: 3775712560.0000005
- config_name: TLOTMenakaI_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2248679654.497752
num_examples: 476680
- name: validation
num_bytes: 749563029.751124
num_examples: 158894
- name: test
num_bytes: 749563029.751124
num_examples: 158894
download_size: 3740363965
dataset_size: 3747805714
- config_name: TLOTMenakaN_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2212555573.744489
num_examples: 476734
- name: validation
num_bytes: 737521618.6277553
num_examples: 158912
- name: test
num_bytes: 737521618.6277553
num_examples: 158912
download_size: 3679785782
dataset_size: 3687598810.9999995
- config_name: TLOTPavaniBI_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2581188469.774467
num_examples: 476364
- name: validation
num_bytes: 860401575.1127664
num_examples: 158789
- name: test
num_bytes: 860401575.1127664
num_examples: 158789
download_size: 4301716239
dataset_size: 4301991620
- config_name: TLOTPavaniB_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2536569022.9252853
num_examples: 476365
- name: validation
num_bytes: 845526557.5373572
num_examples: 158789
- name: test
num_bytes: 845526557.5373572
num_examples: 158789
download_size: 4225675923
dataset_size: 4227622138
- config_name: TLOTPriyaB_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 3230362124.4
num_examples: 475308
- name: validation
num_bytes: 1076787374.8
num_examples: 158436
- name: test
num_bytes: 1076787374.8
num_examples: 158436
download_size: 5395993279
dataset_size: 5383936874
- config_name: TLOTRajanBI_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 3353184954.5082364
num_examples: 474312
- name: validation
num_bytes: 1117735387.7458818
num_examples: 158105
- name: test
num_bytes: 1117735387.7458818
num_examples: 158105
download_size: 5601810958
dataset_size: 5588655730
- config_name: TLOTRajanB_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 3333244214.4
num_examples: 473649
- name: validation
num_bytes: 1111081404.8
num_examples: 157883
- name: test
num_bytes: 1111081404.8
num_examples: 157883
download_size: 11147115559
dataset_size: 5555407024.000001
- config_name: TLOTRajaniBI_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2052738894.6
num_examples: 475389
- name: validation
num_bytes: 684246298.2
num_examples: 158463
- name: test
num_bytes: 684246298.2
num_examples: 158463
download_size: 3411081728
dataset_size: 3421231491
- config_name: TLOTRajaniB_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2037547632.604111
num_examples: 475785
- name: validation
num_bytes: 679186826.6979445
num_examples: 158596
- name: test
num_bytes: 679186826.6979445
num_examples: 158596
download_size: 3385018225
dataset_size: 3395921286
- config_name: TLOTSanjanaBI_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2209718743.6491027
num_examples: 475899
- name: validation
num_bytes: 736572914.5497009
num_examples: 158633
- name: test
num_bytes: 736577557.8011967
num_examples: 158634
download_size: 3674404765
dataset_size: 3682869216
- config_name: TLOTSanjanaB_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2217936060.895656
num_examples: 476629
- name: validation
num_bytes: 739315122.552172
num_examples: 158877
- name: test
num_bytes: 739315122.552172
num_examples: 158877
download_size: 3687984178
dataset_size: 3696566306
- config_name: TLOTSitaraBI_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2519685455.5459795
num_examples: 476097
- name: validation
num_bytes: 839900444.2270104
num_examples: 158700
- name: test
num_bytes: 839900444.2270104
num_examples: 158700
download_size: 4197747699
dataset_size: 4199486344
- config_name: TLOTSitaraB_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2503669021.2
num_examples: 476304
- name: validation
num_bytes: 834556340.4
num_examples: 158768
- name: test
num_bytes: 834556340.4
num_examples: 158768
download_size: 4170641698
dataset_size: 4172781702
- config_name: TLOTSwamiB
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2425012348.9576674
num_examples: 477330
- name: validation
num_bytes: 808342530.0211664
num_examples: 159111
- name: test
num_bytes: 808342530.0211664
num_examples: 159111
download_size: 4038041582
dataset_size: 4041697409
- config_name: TLOTSwamiBI_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2850358898.466789
num_examples: 478777
- name: validation
num_bytes: 950123601.7666057
num_examples: 159593
- name: test
num_bytes: 950123601.7666057
num_examples: 159593
download_size: 4756940495
dataset_size: 4750606102
- config_name: TLOTSwamiB_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2597770710.722685
num_examples: 475800
- name: validation
num_bytes: 865923570.240895
num_examples: 158600
- name: test
num_bytes: 865929030.0364199
num_examples: 158601
download_size: 4330358867
dataset_size: 4329623311
- config_name: TLOTVennela1B_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1858266228.4038165
num_examples: 476703
- name: validation
num_bytes: 619425974.2980918
num_examples: 158902
- name: test
num_bytes: 619425974.2980918
num_examples: 158902
download_size: 9264631387
dataset_size: 3097118177
- config_name: TLOTVennelaBI_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2075214563.274462
num_examples: 475737
- name: validation
num_bytes: 691742549.862769
num_examples: 158580
- name: test
num_bytes: 691742549.862769
num_examples: 158580
download_size: 3449852145
dataset_size: 3458699663
- config_name: TLOTVennelaB_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1853628708.5342138
num_examples: 475764
- name: validation
num_bytes: 617876236.1780713
num_examples: 158588
- name: test
num_bytes: 617880132.287715
num_examples: 158589
download_size: 3076196686
dataset_size: 3089385077
- config_name: TLOTVennelaI_Ship
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2220159958.2
num_examples: 477489
- name: validation
num_bytes: 740053319.4
num_examples: 159163
- name: test
num_bytes: 740053319.4
num_examples: 159163
download_size: 3692812769
dataset_size: 3700266597
- config_name: TenaliRamakrishna-Regular
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1412098107.6
num_examples: 479922
- name: validation
num_bytes: 470699369.2
num_examples: 159974
- name: test
num_bytes: 470699369.2
num_examples: 159974
download_size: 2373061510
dataset_size: 2353496846
- config_name: Tikkana
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 237760800.6
num_examples: 476520
- name: validation
num_bytes: 79253600.2
num_examples: 158840
- name: test
num_bytes: 79253600.2
num_examples: 158840
download_size: 266272383
dataset_size: 396268001
- config_name: TimmanaRegular
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1476790008.6
num_examples: 478059
- name: validation
num_bytes: 492263336.2
num_examples: 159353
- name: test
num_bytes: 492263336.2
num_examples: 159353
download_size: 2461309068
dataset_size: 2461316681
- config_name: Vajram
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1522698226.9404452
num_examples: 480837
- name: validation
num_bytes: 507566075.64681506
num_examples: 160279
- name: test
num_bytes: 507569242.41273975
num_examples: 160280
download_size: 2548130724
dataset_size: 2537833545
- config_name: Vani
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1457020940.7032518
num_examples: 476385
- name: validation
num_bytes: 485673646.9010839
num_examples: 158795
- name: test
num_bytes: 485676705.39566433
num_examples: 158796
download_size: 2434817917
dataset_size: 2428371293
- config_name: Vanib
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1522290417.6
num_examples: 474951
- name: validation
num_bytes: 507430139.2
num_examples: 158317
- name: test
num_bytes: 507430139.2
num_examples: 158317
download_size: 2529233521
dataset_size: 2537150696
- config_name: Vemana
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1699154826.4604304
num_examples: 476205
- name: validation
num_bytes: 566388510.2697848
num_examples: 158736
- name: test
num_bytes: 566388510.2697848
num_examples: 158736
download_size: 2814457802
dataset_size: 2831931847
- config_name: akshar
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1339177104.1214905
num_examples: 476169
- name: validation
num_bytes: 446395180.4392547
num_examples: 158724
- name: test
num_bytes: 446395180.4392547
num_examples: 158724
download_size: 2284376294
dataset_size: 2231967465
- config_name: gautami
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1459193859.1610594
num_examples: 476425
- name: validation
num_bytes: 486399994.91947037
num_examples: 158809
- name: test
num_bytes: 486399994.91947037
num_examples: 158809
download_size: 2447315957
dataset_size: 2431993849
- config_name: gautamib
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1464740409.2608879
num_examples: 477459
- name: validation
num_bytes: 488249870.869556
num_examples: 159154
- name: test
num_bytes: 488249870.869556
num_examples: 159154
download_size: 2454242590
dataset_size: 2441240151
- config_name: lohit_te
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1566900366.462158
num_examples: 477809
- name: validation
num_bytes: 522301215.268921
num_examples: 159270
- name: test
num_bytes: 522301215.268921
num_examples: 159270
download_size: 2611413315
dataset_size: 2611502797
configs:
- config_name: Dhurjati
data_files:
- split: train
path: Dhurjati/train-*
- split: validation
path: Dhurjati/validation-*
- split: test
path: Dhurjati/test-*
- config_name: Gidugu
data_files:
- split: train
path: Gidugu/train-*
- split: validation
path: Gidugu/validation-*
- split: test
path: Gidugu/test-*
- config_name: Gurajada
data_files:
- split: train
path: Gurajada/train-*
- split: validation
path: Gurajada/validation-*
- split: test
path: Gurajada/test-*
- config_name: Mallanna
data_files:
- split: train
path: Mallanna/train-*
- split: validation
path: Mallanna/validation-*
- split: test
path: Mallanna/test-*
- config_name: Mandali-Regular
data_files:
- split: train
path: Mandali-Regular/train-*
- split: validation
path: Mandali-Regular/validation-*
- split: test
path: Mandali-Regular/test-*
- config_name: NATS
data_files:
- split: train
path: NATS/train-*
- split: validation
path: NATS/validation-*
- split: test
path: NATS/test-*
- config_name: NTR
data_files:
- split: train
path: NTR/train-*
- split: validation
path: NTR/validation-*
- split: test
path: NTR/test-*
- config_name: NotoSansTelugu-Bold
data_files:
- split: train
path: NotoSansTelugu-Bold/train-*
- split: validation
path: NotoSansTelugu-Bold/validation-*
- split: test
path: NotoSansTelugu-Bold/test-*
- config_name: NotoSansTelugu-Regular
data_files:
- split: train
path: NotoSansTelugu-Regular/train-*
- split: validation
path: NotoSansTelugu-Regular/validation-*
- split: test
path: NotoSansTelugu-Regular/test-*
- config_name: NotoSansTeluguUI-Bold
data_files:
- split: train
path: NotoSansTeluguUI-Bold/train-*
- split: validation
path: NotoSansTeluguUI-Bold/validation-*
- split: test
path: NotoSansTeluguUI-Bold/test-*
- config_name: NotoSansTeluguUI-Regular
data_files:
- split: train
path: NotoSansTeluguUI-Regular/train-*
- split: validation
path: NotoSansTeluguUI-Regular/validation-*
- split: test
path: NotoSansTeluguUI-Regular/test-*
- config_name: NotoSerifTelugu-VariableFont_wght
data_files:
- split: train
path: NotoSerifTelugu-VariableFont_wght/train-*
- split: validation
path: NotoSerifTelugu-VariableFont_wght/validation-*
- split: test
path: NotoSerifTelugu-VariableFont_wght/test-*
- config_name: Pothana2000
data_files:
- split: train
path: Pothana2000/train-*
- split: validation
path: Pothana2000/validation-*
- split: test
path: Pothana2000/test-*
- config_name: Ramabhadra
data_files:
- split: train
path: Ramabhadra/train-*
- split: validation
path: Ramabhadra/validation-*
- split: test
path: Ramabhadra/test-*
- config_name: Ramabhadra1
data_files:
- split: train
path: Ramabhadra1/train-*
- split: validation
path: Ramabhadra1/validation-*
- split: test
path: Ramabhadra1/test-*
- config_name: RamaneeyaWin
data_files:
- split: train
path: RamaneeyaWin/train-*
- split: validation
path: RamaneeyaWin/validation-*
- split: test
path: RamaneeyaWin/test-*
- config_name: Ramaraja-Regular
data_files:
- split: train
path: Ramaraja-Regular/train-*
- split: validation
path: Ramaraja-Regular/validation-*
- split: test
path: Ramaraja-Regular/test-*
- config_name: Suguna
data_files:
- split: train
path: Suguna/train-*
- split: validation
path: Suguna/validation-*
- split: test
path: Suguna/test-*
- config_name: Suranna
data_files:
- split: train
path: Suranna/train-*
- split: validation
path: Suranna/validation-*
- split: test
path: Suranna/test-*
- config_name: Suravara_Samhita
data_files:
- split: train
path: Suravara_Samhita/train-*
- split: validation
path: Suravara_Samhita/validation-*
- split: test
path: Suravara_Samhita/test-*
- config_name: Suravara_Swarna
data_files:
- split: train
path: Suravara_Swarna/train-*
- split: validation
path: Suravara_Swarna/validation-*
- split: test
path: Suravara_Swarna/test-*
- config_name: Suravara_Swarna_bold
data_files:
- split: train
path: Suravara_Swarna_bold/train-*
- split: validation
path: Suravara_Swarna_bold/validation-*
- split: test
path: Suravara_Swarna_bold/test-*
- config_name: Suravara_Swarna_italic
data_files:
- split: train
path: Suravara_Swarna_italic/train-*
- split: validation
path: Suravara_Swarna_italic/validation-*
- split: test
path: Suravara_Swarna_italic/test-*
- config_name: Suravaram
data_files:
- split: train
path: Suravaram/train-*
- split: validation
path: Suravaram/validation-*
- split: test
path: Suravaram/test-*
- config_name: TLOTAmmaBI_ship
data_files:
- split: train
path: TLOTAmmaBI_ship/train-*
- split: validation
path: TLOTAmmaBI_ship/validation-*
- split: test
path: TLOTAmmaBI_ship/test-*
- config_name: TLOTAmmaB_ship
data_files:
- split: train
path: TLOTAmmaB_ship/train-*
- split: validation
path: TLOTAmmaB_ship/validation-*
- split: test
path: TLOTAmmaB_ship/test-*
- config_name: TLOTAmmaI_ship
data_files:
- split: train
path: TLOTAmmaI_ship/train-*
- split: validation
path: TLOTAmmaI_ship/validation-*
- split: test
path: TLOTAmmaI_ship/test-*
- config_name: TLOTAmmaN_ship
data_files:
- split: train
path: TLOTAmmaN_ship/train-*
- split: validation
path: TLOTAmmaN_ship/validation-*
- split: test
path: TLOTAmmaN_ship/test-*
- config_name: TLOTAmrutaBI_Ship
data_files:
- split: train
path: TLOTAmrutaBI_Ship/train-*
- split: validation
path: TLOTAmrutaBI_Ship/validation-*
- split: test
path: TLOTAmrutaBI_Ship/test-*
- config_name: TLOTAmrutaB_Ship
data_files:
- split: train
path: TLOTAmrutaB_Ship/train-*
- split: validation
path: TLOTAmrutaB_Ship/validation-*
- split: test
path: TLOTAmrutaB_Ship/test-*
- config_name: TLOTAtreyaBI_Ship
data_files:
- split: train
path: TLOTAtreyaBI_Ship/train-*
- split: validation
path: TLOTAtreyaBI_Ship/validation-*
- split: test
path: TLOTAtreyaBI_Ship/test-*
- config_name: TLOTAtreyaB_Ship
data_files:
- split: train
path: TLOTAtreyaB_Ship/train-*
- split: validation
path: TLOTAtreyaB_Ship/validation-*
- split: test
path: TLOTAtreyaB_Ship/test-*
- config_name: TLOTAtreyaI_Ship
data_files:
- split: train
path: TLOTAtreyaI_Ship/train-*
- split: validation
path: TLOTAtreyaI_Ship/validation-*
- split: test
path: TLOTAtreyaI_Ship/test-*
- config_name: TLOTAtreyaN_Ship
data_files:
- split: train
path: TLOTAtreyaN_Ship/train-*
- split: validation
path: TLOTAtreyaN_Ship/validation-*
- split: test
path: TLOTAtreyaN_Ship/test-*
- config_name: TLOTChandanaBI_Ship
data_files:
- split: train
path: TLOTChandanaBI_Ship/train-*
- split: validation
path: TLOTChandanaBI_Ship/validation-*
- split: test
path: TLOTChandanaBI_Ship/test-*
- config_name: TLOTChandanaB_Ship
data_files:
- split: train
path: TLOTChandanaB_Ship/train-*
- split: validation
path: TLOTChandanaB_Ship/validation-*
- split: test
path: TLOTChandanaB_Ship/test-*
- config_name: TLOTDevaI_Ship
data_files:
- split: train
path: TLOTDevaI_Ship/train-*
- split: validation
path: TLOTDevaI_Ship/validation-*
- split: test
path: TLOTDevaI_Ship/test-*
- config_name: TLOTDevaN_Ship
data_files:
- split: train
path: TLOTDevaN_Ship/train-*
- split: validation
path: TLOTDevaN_Ship/validation-*
- split: test
path: TLOTDevaN_Ship/test-*
- config_name: TLOTDraupadiBI_Ship
data_files:
- split: train
path: TLOTDraupadiBI_Ship/train-*
- split: validation
path: TLOTDraupadiBI_Ship/validation-*
- split: test
path: TLOTDraupadiBI_Ship/test-*
- config_name: TLOTDraupadiB_ship
data_files:
- split: train
path: TLOTDraupadiB_ship/train-*
- split: validation
path: TLOTDraupadiB_ship/validation-*
- split: test
path: TLOTDraupadiB_ship/test-*
- config_name: TLOTDraupadiI_Ship
data_files:
- split: train
path: TLOTDraupadiI_Ship/train-*
- split: validation
path: TLOTDraupadiI_Ship/validation-*
- split: test
path: TLOTDraupadiI_Ship/test-*
- config_name: TLOTDraupadiN_Ship
data_files:
- split: train
path: TLOTDraupadiN_Ship/train-*
- split: validation
path: TLOTDraupadiN_Ship/validation-*
- split: test
path: TLOTDraupadiN_Ship/test-*
- config_name: TLOTGolkondaBI_Ship
data_files:
- split: train
path: TLOTGolkondaBI_Ship/train-*
- split: validation
path: TLOTGolkondaBI_Ship/validation-*
- split: test
path: TLOTGolkondaBI_Ship/test-*
- config_name: TLOTGolkondaB_Ship
data_files:
- split: train
path: TLOTGolkondaB_Ship/train-*
- split: validation
path: TLOTGolkondaB_Ship/validation-*
- split: test
path: TLOTGolkondaB_Ship/test-*
- config_name: TLOTKrishnaB_Ship
data_files:
- split: train
path: TLOTKrishnaB_Ship/train-*
- split: validation
path: TLOTKrishnaB_Ship/validation-*
- split: test
path: TLOTKrishnaB_Ship/test-*
- config_name: TLOTKrishnaI_Ship
data_files:
- split: train
path: TLOTKrishnaI_Ship/train-*
- split: validation
path: TLOTKrishnaI_Ship/validation-*
- split: test
path: TLOTKrishnaI_Ship/test-*
- config_name: TLOTKrishnaN_Ship
data_files:
- split: train
path: TLOTKrishnaN_Ship/train-*
- split: validation
path: TLOTKrishnaN_Ship/validation-*
- split: test
path: TLOTKrishnaN_Ship/test-*
- config_name: TLOTManuBI_Ship
data_files:
- split: train
path: TLOTManuBI_Ship/train-*
- split: validation
path: TLOTManuBI_Ship/validation-*
- split: test
path: TLOTManuBI_Ship/test-*
- config_name: TLOTManuB_Ship
data_files:
- split: train
path: TLOTManuB_Ship/train-*
- split: validation
path: TLOTManuB_Ship/validation-*
- split: test
path: TLOTManuB_Ship/test-*
- config_name: TLOTManuI_Ship
data_files:
- split: train
path: TLOTManuI_Ship/train-*
- split: validation
path: TLOTManuI_Ship/validation-*
- split: test
path: TLOTManuI_Ship/test-*
- config_name: TLOTManuN_Ship
data_files:
- split: train
path: TLOTManuN_Ship/train-*
- split: validation
path: TLOTManuN_Ship/validation-*
- split: test
path: TLOTManuN_Ship/test-*
- config_name: TLOTMenakaBI_Ship
data_files:
- split: train
path: TLOTMenakaBI_Ship/train-*
- split: validation
path: TLOTMenakaBI_Ship/validation-*
- split: test
path: TLOTMenakaBI_Ship/test-*
- config_name: TLOTMenakaB_Ship
data_files:
- split: train
path: TLOTMenakaB_Ship/train-*
- split: validation
path: TLOTMenakaB_Ship/validation-*
- split: test
path: TLOTMenakaB_Ship/test-*
- config_name: TLOTMenakaI_Ship
data_files:
- split: train
path: TLOTMenakaI_Ship/train-*
- split: validation
path: TLOTMenakaI_Ship/validation-*
- split: test
path: TLOTMenakaI_Ship/test-*
- config_name: TLOTMenakaN_Ship
data_files:
- split: train
path: TLOTMenakaN_Ship/train-*
- split: validation
path: TLOTMenakaN_Ship/validation-*
- split: test
path: TLOTMenakaN_Ship/test-*
- config_name: TLOTPavaniBI_Ship
data_files:
- split: train
path: TLOTPavaniBI_Ship/train-*
- split: validation
path: TLOTPavaniBI_Ship/validation-*
- split: test
path: TLOTPavaniBI_Ship/test-*
- config_name: TLOTPavaniB_Ship
data_files:
- split: train
path: TLOTPavaniB_Ship/train-*
- split: validation
path: TLOTPavaniB_Ship/validation-*
- split: test
path: TLOTPavaniB_Ship/test-*
- config_name: TLOTPriyaB_Ship
data_files:
- split: train
path: TLOTPriyaB_Ship/train-*
- split: validation
path: TLOTPriyaB_Ship/validation-*
- split: test
path: TLOTPriyaB_Ship/test-*
- config_name: TLOTRajanBI_Ship
data_files:
- split: train
path: TLOTRajanBI_Ship/train-*
- split: validation
path: TLOTRajanBI_Ship/validation-*
- split: test
path: TLOTRajanBI_Ship/test-*
- config_name: TLOTRajanB_Ship
data_files:
- split: train
path: TLOTRajanB_Ship/train-*
- split: validation
path: TLOTRajanB_Ship/validation-*
- split: test
path: TLOTRajanB_Ship/test-*
- config_name: TLOTRajaniBI_Ship
data_files:
- split: train
path: TLOTRajaniBI_Ship/train-*
- split: validation
path: TLOTRajaniBI_Ship/validation-*
- split: test
path: TLOTRajaniBI_Ship/test-*
- config_name: TLOTRajaniB_Ship
data_files:
- split: train
path: TLOTRajaniB_Ship/train-*
- split: validation
path: TLOTRajaniB_Ship/validation-*
- split: test
path: TLOTRajaniB_Ship/test-*
- config_name: TLOTSanjanaBI_Ship
data_files:
- split: train
path: TLOTSanjanaBI_Ship/train-*
- split: validation
path: TLOTSanjanaBI_Ship/validation-*
- split: test
path: TLOTSanjanaBI_Ship/test-*
- config_name: TLOTSanjanaB_Ship
data_files:
- split: train
path: TLOTSanjanaB_Ship/train-*
- split: validation
path: TLOTSanjanaB_Ship/validation-*
- split: test
path: TLOTSanjanaB_Ship/test-*
- config_name: TLOTSitaraBI_Ship
data_files:
- split: train
path: TLOTSitaraBI_Ship/train-*
- split: validation
path: TLOTSitaraBI_Ship/validation-*
- split: test
path: TLOTSitaraBI_Ship/test-*
- config_name: TLOTSitaraB_Ship
data_files:
- split: train
path: TLOTSitaraB_Ship/train-*
- split: validation
path: TLOTSitaraB_Ship/validation-*
- split: test
path: TLOTSitaraB_Ship/test-*
- config_name: TLOTSwamiBI_Ship
data_files:
- split: train
path: TLOTSwamiBI_Ship/train-*
- split: validation
path: TLOTSwamiBI_Ship/validation-*
- split: test
path: TLOTSwamiBI_Ship/test-*
- config_name: TLOTSwamiB_Ship
data_files:
- split: train
path: TLOTSwamiB_Ship/train-*
- split: validation
path: TLOTSwamiB_Ship/validation-*
- split: test
path: TLOTSwamiB_Ship/test-*
- config_name: TLOTVennela1B_Ship
data_files:
- split: train
path: TLOTVennela1B_Ship/train-*
- split: validation
path: TLOTVennela1B_Ship/validation-*
- split: test
path: TLOTVennela1B_Ship/test-*
- config_name: TLOTVennelaBI_Ship
data_files:
- split: train
path: TLOTVennelaBI_Ship/train-*
- split: validation
path: TLOTVennelaBI_Ship/validation-*
- split: test
path: TLOTVennelaBI_Ship/test-*
- config_name: TLOTVennelaI_Ship
data_files:
- split: train
path: TLOTVennelaI_Ship/train-*
- split: validation
path: TLOTVennelaI_Ship/validation-*
- split: test
path: TLOTVennelaI_Ship/test-*
- config_name: TenaliRamakrishna-Regular
data_files:
- split: train
path: TenaliRamakrishna-Regular/train-*
- split: validation
path: TenaliRamakrishna-Regular/validation-*
- split: test
path: TenaliRamakrishna-Regular/test-*
- config_name: TimmanaRegular
data_files:
- split: train
path: TimmanaRegular/train-*
- split: validation
path: TimmanaRegular/validation-*
- split: test
path: TimmanaRegular/test-*
- config_name: Vanib
data_files:
- split: train
path: Vanib/train-*
- split: validation
path: Vanib/validation-*
- split: test
path: Vanib/test-*
- config_name: Vemana
data_files:
- split: train
path: Vemana/train-*
- split: validation
path: Vemana/validation-*
- split: test
path: Vemana/test-*
- config_name: akshar
data_files:
- split: train
path: akshar/train-*
- split: validation
path: akshar/validation-*
- split: test
path: akshar/test-*
- config_name: gautami
data_files:
- split: train
path: gautami/train-*
- split: validation
path: gautami/validation-*
- split: test
path: gautami/test-*
- config_name: gautamib
data_files:
- split: train
path: gautamib/train-*
- split: validation
path: gautamib/validation-*
- split: test
path: gautamib/test-*
license: mit
task_categories:
- image-to-text
language:
- te
size_categories:
- 1M<n<10M
---
# Chitralekha
## Dataset Details
### Dataset Version
Some of the fonts do not render certain Telugu letter combinations correctly. Such cases have been removed wherever I could find them. If you notice any other mistakes, please raise an issue and I will do my best to look into them.
### Dataset Description
This extensive dataset, hosted on Huggingface, is a comprehensive resource for Optical Character Recognition (OCR) in the Telugu language, featuring an impressive array of 80+ configurations. Each configuration in this dataset corresponds to a unique font, meticulously curated by Dr. Rakesh Achanta and sourced from his GitHub repository (https://github.com/TeluguOCR/banti_telugu_ocr).
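As a quick orientation, the available font configurations can be enumerated programmatically. The snippet below is a minimal sketch assuming the Hugging Face `datasets` library; the example names in the comment come from this card's own configuration list.

```python
from datasets import get_dataset_config_names

# Enumerate the font configurations of this dataset.
# Each name corresponds to one Telugu font, e.g. 'Dhurjati', 'Gidugu', 'Gurajada'.
configs = get_dataset_config_names("gksriharsha/chitralekha")
print(len(configs))   # 80+ configurations
print(configs[:5])
```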
The dataset is specifically designed to support and enhance the development of OCR models, ranging from simple Convolutional Recurrent Neural Network (CRNN) architectures to more advanced systems like trOCR. The versatility of this dataset lies in its large volume and diversity, making it an ideal choice for researchers and developers aiming to build robust OCR systems for the Telugu script.
Key Features:
- Font Diversity: Over 80 unique fonts, each forming a separate configuration, providing a rich variety in text styles and nuances.
- Large Volume: Each configuration contains approximately 800,000 examples, summing up to a vast pool of data for comprehensive training and evaluation.
- Data Split: The dataset is pre-split into training, validation, and test sets, following a 60/20/20 ratio, to facilitate efficient model training and benchmarking.
- Use Cases: Ideal for developing a wide range of OCR models - from basic CRNNs to sophisticated models like trOCR.
- Accessibility: Hosted on Huggingface, ensuring easy access and integration with various machine learning frameworks and tools (see the loading sketch below).
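A minimal loading sketch follows, assuming the Hugging Face `datasets` library; `Dhurjati` is used purely as an example configuration, and any other font name from the list above works the same way.

```python
from datasets import load_dataset

# Load one font configuration; splits follow the ~60/20/20 train/validation/test ratio.
ds = load_dataset("gksriharsha/chitralekha", "Dhurjati")
print(ds)  # DatasetDict with 'train', 'validation' and 'test' splits

# Each example pairs a rendered text-line image with its ground-truth Telugu string.
sample = ds["train"][0]
image, text = sample["image"], sample["text"]
print(text)
```

For the larger configurations, passing `streaming=True` to `load_dataset` avoids downloading the full shards before iterating.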
This dataset stands as a testament to Dr. Rakesh Achanta's dedication to enhancing Telugu language processing technologies. It is not just a tool for model development but also a gateway to preserving and digitizing the rich literary heritage of the Telugu language.
- **Fonts Curated by:** Dr. Rakesh Achanta
- **Shared by:** Krishna Sriharsha Gundu
- **Data Curated by:** Anusha Motamarri
- **Language(s) (NLP):** Telugu
### Ethical Considerations
Researchers and developers leveraging this dataset are encouraged to adhere to the ethical guidelines of AI research and development. Applications developed using this dataset should prioritize:
- Language preservation and cultural heritage protection
- Improving accessibility of Telugu text for diverse user groups
- Responsible technological advancement in language processing
### Dataset Sources
- **Repository:** [Original Books Dataset](https://github.com/AnushaMotamarri/Telugu-Books-Dataset) |
legacy-datasets/common_voice | legacy-datasets | 2024-08-22T08:27:23Z | 4,611 | 137 | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"language:ab",
"language:ar",
"language:as",
"language:br",
"language:ca",
"language:cnh",
"language:cs",
"language:cv",
"language:cy",
"language:de",
"language:dv",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:fy",
"language:ga",
"language:hi",
"language:hsb",
"language:hu",
"language:ia",
"language:id",
"language:it",
"language:ja",
"language:ka",
"language:kab",
"language:ky",
"language:lg",
"language:lt",
"language:lv",
"language:mn",
"language:mt",
"language:nl",
"language:or",
"language:pa",
"language:pl",
"language:pt",
"language:rm",
"language:ro",
"language:ru",
"language:rw",
"language:sah",
"language:sl",
"language:sv",
"language:ta",
"language:th",
"language:tr",
"language:tt",
"language:uk",
"language:vi",
"language:vot",
"language:zh",
"license:cc0-1.0",
"size_categories:100K<n<1M",
"region:us"
] | [
"automatic-speech-recognition"
] | 2022-03-02T23:29:22Z | null | ---
pretty_name: Common Voice
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ab
- ar
- as
- br
- ca
- cnh
- cs
- cv
- cy
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- hi
- hsb
- hu
- ia
- id
- it
- ja
- ka
- kab
- ky
- lg
- lt
- lv
- mn
- mt
- nl
- or
- pa
- pl
- pt
- rm
- ro
- ru
- rw
- sah
- sl
- sv
- ta
- th
- tr
- tt
- uk
- vi
- vot
- zh
language_bcp47:
- fy-NL
- ga-IE
- pa-IN
- rm-sursilv
- rm-vallader
- sv-SE
- zh-CN
- zh-HK
- zh-TW
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1K<n<10K
- n<1K
source_datasets:
- extended|common_voice
task_categories:
- automatic-speech-recognition
task_ids: []
paperswithcode_id: common-voice
viewer: false
dataset_info:
- config_name: ab
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 1295622
num_examples: 22
- name: test
num_bytes: 411844
num_examples: 9
- name: validation
- name: other
num_bytes: 40023390
num_examples: 752
- name: validated
num_bytes: 1707426
num_examples: 31
- name: invalidated
num_bytes: 361626
num_examples: 8
download_size: 41038412
dataset_size: 43799908
- config_name: ar
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 359335168
num_examples: 14227
- name: test
num_bytes: 237546641
num_examples: 7622
- name: validation
num_bytes: 209606861
num_examples: 7517
- name: other
num_bytes: 515822404
num_examples: 18283
- name: validated
num_bytes: 1182522872
num_examples: 43291
- name: invalidated
num_bytes: 194805036
num_examples: 6333
download_size: 1756264615
dataset_size: 2699638982
- config_name: as
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 11442279
num_examples: 270
- name: test
num_bytes: 5071343
num_examples: 110
- name: validation
num_bytes: 5480156
num_examples: 124
- name: other
- name: validated
num_bytes: 21993698
num_examples: 504
- name: invalidated
num_bytes: 886145
num_examples: 31
download_size: 22226465
dataset_size: 44873621
- config_name: br
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 62238289
num_examples: 2780
- name: test
num_bytes: 54461339
num_examples: 2087
- name: validation
num_bytes: 46995570
num_examples: 1997
- name: other
num_bytes: 269858143
num_examples: 10912
- name: validated
num_bytes: 203503622
num_examples: 8560
- name: invalidated
num_bytes: 20861017
num_examples: 623
download_size: 465276982
dataset_size: 657917980
- config_name: ca
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 12966939466
num_examples: 285584
- name: test
num_bytes: 745761890
num_examples: 15724
- name: validation
num_bytes: 716442038
num_examples: 15724
- name: other
num_bytes: 2693542910
num_examples: 64446
- name: validated
num_bytes: 18115833966
num_examples: 416701
- name: invalidated
num_bytes: 850402888
num_examples: 18846
download_size: 20743110341
dataset_size: 36088923158
- config_name: cnh
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 18866674
num_examples: 807
- name: test
num_bytes: 24675321
num_examples: 752
- name: validation
num_bytes: 22162315
num_examples: 756
- name: other
num_bytes: 84878963
num_examples: 2934
- name: validated
num_bytes: 69330148
num_examples: 2432
- name: invalidated
num_bytes: 13642724
num_examples: 433
download_size: 161331331
dataset_size: 233556145
- config_name: cs
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 215205282
num_examples: 5655
- name: test
num_bytes: 148499476
num_examples: 4144
- name: validation
num_bytes: 148312130
num_examples: 4118
- name: other
num_bytes: 282225475
num_examples: 7475
- name: validated
num_bytes: 1019817024
num_examples: 30431
- name: invalidated
num_bytes: 24717823
num_examples: 685
download_size: 1271909933
dataset_size: 1838777210
- config_name: cv
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 31649510
num_examples: 931
- name: test
num_bytes: 32513061
num_examples: 788
- name: validation
num_bytes: 28429779
num_examples: 818
- name: other
num_bytes: 288294623
num_examples: 6927
- name: validated
num_bytes: 126717875
num_examples: 3496
- name: invalidated
num_bytes: 57923138
num_examples: 1282
download_size: 439329081
dataset_size: 565527986
- config_name: cy
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 271642649
num_examples: 6839
- name: test
num_bytes: 206865596
num_examples: 4820
- name: validation
num_bytes: 201813388
num_examples: 4776
- name: other
num_bytes: 688469886
num_examples: 17919
- name: validated
num_bytes: 2763112391
num_examples: 72984
- name: invalidated
num_bytes: 146874576
num_examples: 3648
download_size: 3434474658
dataset_size: 4278778486
- config_name: de
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 11463160619
num_examples: 246525
- name: test
num_bytes: 744617681
num_examples: 15588
- name: validation
num_bytes: 729559862
num_examples: 15588
- name: other
num_bytes: 464513461
num_examples: 10095
- name: validated
num_bytes: 22402489041
num_examples: 565186
- name: invalidated
num_bytes: 1440604803
num_examples: 32789
download_size: 23283812097
dataset_size: 37244945467
- config_name: dv
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 118576140
num_examples: 2680
- name: test
num_bytes: 94281409
num_examples: 2202
- name: validation
num_bytes: 94117088
num_examples: 2077
- name: other
- name: validated
num_bytes: 528571107
num_examples: 11866
- name: invalidated
num_bytes: 37694847
num_examples: 840
download_size: 540488041
dataset_size: 873240591
- config_name: el
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 80759076
num_examples: 2316
- name: test
num_bytes: 53820491
num_examples: 1522
- name: validation
num_bytes: 44818565
num_examples: 1401
- name: other
num_bytes: 186861175
num_examples: 5659
- name: validated
num_bytes: 204446790
num_examples: 5996
- name: invalidated
num_bytes: 6023769
num_examples: 185
download_size: 381570611
dataset_size: 576729866
- config_name: en
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 26088826658
num_examples: 564337
- name: test
num_bytes: 758718688
num_examples: 16164
- name: validation
num_bytes: 795638801
num_examples: 16164
- name: other
num_bytes: 5796244022
num_examples: 169895
- name: validated
num_bytes: 48425872575
num_examples: 1224864
- name: invalidated
num_bytes: 9122973965
num_examples: 189562
download_size: 60613063630
dataset_size: 90988274709
- config_name: eo
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 993655930
num_examples: 19587
- name: test
num_bytes: 420153812
num_examples: 8969
- name: validation
num_bytes: 391427586
num_examples: 8987
- name: other
num_bytes: 142476819
num_examples: 2946
- name: validated
num_bytes: 2603249289
num_examples: 58094
- name: invalidated
num_bytes: 238105462
num_examples: 4736
download_size: 2883560869
dataset_size: 4789068898
- config_name: es
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 6918333205
num_examples: 161813
- name: test
num_bytes: 754049291
num_examples: 15089
- name: validation
num_bytes: 735558084
num_examples: 15089
- name: other
num_bytes: 5528972205
num_examples: 144791
- name: validated
num_bytes: 9623788388
num_examples: 236314
- name: invalidated
num_bytes: 1664876264
num_examples: 40640
download_size: 16188844718
dataset_size: 25225577437
- config_name: et
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 161124199
num_examples: 2966
- name: test
num_bytes: 133183135
num_examples: 2509
- name: validation
num_bytes: 137604813
num_examples: 2507
- name: other
num_bytes: 30339130
num_examples: 569
- name: validated
num_bytes: 573417188
num_examples: 10683
- name: invalidated
num_bytes: 193019544
num_examples: 3557
download_size: 767174465
dataset_size: 1228688009
- config_name: eu
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 317322801
num_examples: 7505
- name: test
num_bytes: 238866501
num_examples: 5172
- name: validation
num_bytes: 228150083
num_examples: 5172
- name: other
num_bytes: 988079897
num_examples: 23570
- name: validated
num_bytes: 2621488299
num_examples: 63009
- name: invalidated
num_bytes: 208553909
num_examples: 5387
download_size: 3664586106
dataset_size: 4602461490
- config_name: fa
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 239255087
num_examples: 7593
- name: test
num_bytes: 217939210
num_examples: 5213
- name: validation
num_bytes: 196558067
num_examples: 5213
- name: other
num_bytes: 737017546
num_examples: 22510
- name: validated
num_bytes: 8120181903
num_examples: 251659
- name: invalidated
num_bytes: 499570226
num_examples: 11698
download_size: 8884585819
dataset_size: 10010522039
- config_name: fi
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 16017393
num_examples: 460
- name: test
num_bytes: 16117529
num_examples: 428
- name: validation
num_bytes: 15471757
num_examples: 415
- name: other
num_bytes: 5836400
num_examples: 149
- name: validated
num_bytes: 47669391
num_examples: 1305
- name: invalidated
num_bytes: 2228215
num_examples: 59
download_size: 49882909
dataset_size: 103340685
- config_name: fr
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 12439892070
num_examples: 298982
- name: test
num_bytes: 733943163
num_examples: 15763
- name: validation
num_bytes: 703801114
num_examples: 15763
- name: other
num_bytes: 117998889
num_examples: 3222
- name: validated
num_bytes: 17921836252
num_examples: 461004
- name: invalidated
num_bytes: 1794149368
num_examples: 40351
download_size: 19130141984
dataset_size: 33711620856
- config_name: fy-NL
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 159116360
num_examples: 3927
- name: test
num_bytes: 126913262
num_examples: 3020
- name: validation
num_bytes: 112288554
num_examples: 2790
- name: other
num_bytes: 893887467
num_examples: 21569
- name: validated
num_bytes: 429651922
num_examples: 10495
- name: invalidated
num_bytes: 38985422
num_examples: 1031
download_size: 1237743070
dataset_size: 1760842987
- config_name: ga-IE
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 15396820
num_examples: 541
- name: test
num_bytes: 16611739
num_examples: 506
- name: validation
num_bytes: 14897739
num_examples: 497
- name: other
num_bytes: 61948768
num_examples: 2130
- name: validated
num_bytes: 93371649
num_examples: 3352
- name: invalidated
num_bytes: 10993268
num_examples: 409
download_size: 156553447
dataset_size: 213219983
- config_name: hi
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 4860737
num_examples: 157
- name: test
num_bytes: 4728043
num_examples: 127
- name: validation
num_bytes: 5569352
num_examples: 135
- name: other
num_bytes: 4176110
num_examples: 139
- name: validated
num_bytes: 15158052
num_examples: 419
- name: invalidated
num_bytes: 2801051
num_examples: 60
download_size: 21424045
dataset_size: 37293345
- config_name: hsb
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 43049910
num_examples: 808
- name: test
num_bytes: 20929094
num_examples: 387
- name: validation
num_bytes: 8769458
num_examples: 172
- name: other
num_bytes: 3173841
num_examples: 62
- name: validated
num_bytes: 72748422
num_examples: 1367
- name: invalidated
num_bytes: 5589972
num_examples: 227
download_size: 79362060
dataset_size: 154260697
- config_name: hu
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 126163153
num_examples: 3348
- name: test
num_bytes: 57056435
num_examples: 1649
- name: validation
num_bytes: 50306925
num_examples: 1434
- name: other
num_bytes: 12051094
num_examples: 295
- name: validated
num_bytes: 234307671
num_examples: 6457
- name: invalidated
num_bytes: 5881521
num_examples: 169
download_size: 242758708
dataset_size: 485766799
- config_name: ia
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 96577153
num_examples: 3477
- name: test
num_bytes: 33204678
num_examples: 899
- name: validation
num_bytes: 67436779
num_examples: 1601
- name: other
num_bytes: 30937041
num_examples: 1095
- name: validated
num_bytes: 197248304
num_examples: 5978
- name: invalidated
num_bytes: 6769573
num_examples: 192
download_size: 226499645
dataset_size: 432173528
- config_name: id
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 63515863
num_examples: 2130
- name: test
num_bytes: 60711104
num_examples: 1844
- name: validation
num_bytes: 56963520
num_examples: 1835
- name: other
num_bytes: 206578628
num_examples: 6782
- name: validated
num_bytes: 272570942
num_examples: 8696
- name: invalidated
num_bytes: 16566129
num_examples: 470
download_size: 475918233
dataset_size: 676906186
- config_name: it
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 2555546829
num_examples: 58015
- name: test
num_bytes: 656285877
num_examples: 12928
- name: validation
num_bytes: 621955330
num_examples: 12928
- name: other
num_bytes: 671213467
num_examples: 14549
- name: validated
num_bytes: 4552252754
num_examples: 102579
- name: invalidated
num_bytes: 564610354
num_examples: 12189
download_size: 5585781573
dataset_size: 9621864611
- config_name: ja
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 27600264
num_examples: 722
- name: test
num_bytes: 26475556
num_examples: 632
- name: validation
num_bytes: 22098940
num_examples: 586
- name: other
num_bytes: 34588931
num_examples: 885
- name: validated
num_bytes: 106916400
num_examples: 3072
- name: invalidated
num_bytes: 17819020
num_examples: 504
download_size: 152879796
dataset_size: 235499111
- config_name: ka
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 47790695
num_examples: 1058
- name: test
num_bytes: 30301524
num_examples: 656
- name: validation
num_bytes: 24951079
num_examples: 527
- name: other
num_bytes: 2144603
num_examples: 44
- name: validated
num_bytes: 104135978
num_examples: 2275
- name: invalidated
num_bytes: 7004160
num_examples: 139
download_size: 104280554
dataset_size: 216328039
- config_name: kab
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 3219289101
num_examples: 120530
- name: test
num_bytes: 446453041
num_examples: 14622
- name: validation
num_bytes: 414159937
num_examples: 14622
- name: other
num_bytes: 2282481767
num_examples: 88021
- name: validated
num_bytes: 15310455176
num_examples: 573718
- name: invalidated
num_bytes: 581587104
num_examples: 18134
download_size: 17171606918
dataset_size: 22254426126
- config_name: ky
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 75460488
num_examples: 1955
- name: test
num_bytes: 57116561
num_examples: 1503
- name: validation
num_bytes: 61393867
num_examples: 1511
- name: other
num_bytes: 258081579
num_examples: 7223
- name: validated
num_bytes: 355742823
num_examples: 9236
- name: invalidated
num_bytes: 41007711
num_examples: 926
download_size: 579440853
dataset_size: 848803029
- config_name: lg
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 46910479
num_examples: 1250
- name: test
num_bytes: 26951803
num_examples: 584
- name: validation
num_bytes: 16709367
num_examples: 384
- name: other
num_bytes: 111180838
num_examples: 3110
- name: validated
num_bytes: 90606863
num_examples: 2220
- name: invalidated
num_bytes: 14069959
num_examples: 290
download_size: 208197149
dataset_size: 306429309
- config_name: lt
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 34605356
num_examples: 931
- name: test
num_bytes: 19940391
num_examples: 466
- name: validation
num_bytes: 10462851
num_examples: 244
- name: other
num_bytes: 71150206
num_examples: 1629
- name: validated
num_bytes: 65138550
num_examples: 1644
- name: invalidated
num_bytes: 4414780
num_examples: 102
download_size: 135299706
dataset_size: 205712134
- config_name: lv
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 67269173
num_examples: 2552
- name: test
num_bytes: 56937435
num_examples: 1882
- name: validation
num_bytes: 55289058
num_examples: 2002
- name: other
num_bytes: 40259801
num_examples: 1560
- name: validated
num_bytes: 179726893
num_examples: 6444
- name: invalidated
num_bytes: 4383319
num_examples: 143
download_size: 208307691
dataset_size: 403865679
- config_name: mn
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 89913910
num_examples: 2183
- name: test
num_bytes: 86737041
num_examples: 1862
- name: validation
num_bytes: 82343275
num_examples: 1837
- name: other
num_bytes: 146365394
num_examples: 3272
- name: validated
num_bytes: 327264827
num_examples: 7487
- name: invalidated
num_bytes: 31764232
num_examples: 667
download_size: 486369317
dataset_size: 764388679
- config_name: mt
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 73850815
num_examples: 2036
- name: test
num_bytes: 66520195
num_examples: 1617
- name: validation
num_bytes: 56412066
num_examples: 1516
- name: other
num_bytes: 220666971
num_examples: 5714
- name: validated
num_bytes: 218212969
num_examples: 5747
- name: invalidated
num_bytes: 12328068
num_examples: 314
download_size: 425114242
dataset_size: 647991084
- config_name: nl
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 321946148
num_examples: 9460
- name: test
num_bytes: 205287443
num_examples: 5708
- name: validation
num_bytes: 186095353
num_examples: 4938
- name: other
num_bytes: 801418
num_examples: 27
- name: validated
num_bytes: 1710636990
num_examples: 52488
- name: invalidated
num_bytes: 115133112
num_examples: 3308
download_size: 1741827548
dataset_size: 2539900464
- config_name: or
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 16067910
num_examples: 388
- name: test
num_bytes: 4270651
num_examples: 98
- name: validation
num_bytes: 5485937
num_examples: 129
- name: other
num_bytes: 177775963
num_examples: 4302
- name: validated
num_bytes: 25824418
num_examples: 615
- name: invalidated
num_bytes: 2701922
num_examples: 62
download_size: 199077358
dataset_size: 232126801
- config_name: pa-IN
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 7572499
num_examples: 211
- name: test
num_bytes: 4375532
num_examples: 116
- name: validation
num_bytes: 1702492
num_examples: 44
- name: other
num_bytes: 56683312
num_examples: 1411
- name: validated
num_bytes: 13650443
num_examples: 371
- name: invalidated
num_bytes: 1690766
num_examples: 43
download_size: 69748265
dataset_size: 85675044
- config_name: pl
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 273394509
num_examples: 7468
- name: test
num_bytes: 205047541
num_examples: 5153
- name: validation
num_bytes: 195917307
num_examples: 5153
- name: other
num_bytes: 442144781
num_examples: 12848
- name: validated
num_bytes: 3150860197
num_examples: 90791
- name: invalidated
num_bytes: 180801918
num_examples: 4601
download_size: 3537012341
dataset_size: 4448166253
- config_name: pt
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 231451724
num_examples: 6514
- name: test
num_bytes: 180108694
num_examples: 4641
- name: validation
num_bytes: 165966139
num_examples: 4592
- name: other
num_bytes: 283497435
num_examples: 8390
- name: validated
num_bytes: 1480529669
num_examples: 41584
- name: invalidated
num_bytes: 67948392
num_examples: 1740
download_size: 1704252567
dataset_size: 2409502053
- config_name: rm-sursilv
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 62396326
num_examples: 1384
- name: test
num_bytes: 51707733
num_examples: 1194
- name: validation
num_bytes: 52114252
num_examples: 1205
- name: other
num_bytes: 93351293
num_examples: 2102
- name: validated
num_bytes: 166218231
num_examples: 3783
- name: invalidated
num_bytes: 30593270
num_examples: 639
download_size: 275950479
dataset_size: 456381105
- config_name: rm-vallader
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 29528457
num_examples: 574
- name: test
num_bytes: 18805466
num_examples: 378
- name: validation
num_bytes: 17012341
num_examples: 357
- name: other
num_bytes: 36890435
num_examples: 727
- name: validated
num_bytes: 65711922
num_examples: 1316
- name: invalidated
num_bytes: 9356204
num_examples: 374
download_size: 108113989
dataset_size: 177304825
- config_name: ro
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 107235430
num_examples: 3399
- name: test
num_bytes: 60106568
num_examples: 1778
- name: validation
num_bytes: 30358457
num_examples: 858
- name: other
num_bytes: 65805210
num_examples: 1945
- name: validated
num_bytes: 197820619
num_examples: 6039
- name: invalidated
num_bytes: 11108104
num_examples: 485
download_size: 261978702
dataset_size: 472434388
- config_name: ru
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 686168722
num_examples: 15481
- name: test
num_bytes: 385349488
num_examples: 8007
- name: validation
num_bytes: 361164462
num_examples: 7963
- name: other
num_bytes: 450644862
num_examples: 10247
- name: validated
num_bytes: 3212213931
num_examples: 74256
- name: invalidated
num_bytes: 145739451
num_examples: 3056
download_size: 3655676916
dataset_size: 5241280916
- config_name: rw
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 21645788973
num_examples: 515197
- name: test
num_bytes: 707959382
num_examples: 15724
- name: validation
num_bytes: 698662384
num_examples: 15032
- name: other
num_bytes: 923146896
num_examples: 22923
- name: validated
num_bytes: 35011249432
num_examples: 832929
- name: invalidated
num_bytes: 7969286423
num_examples: 206790
download_size: 42545189583
dataset_size: 66956093490
- config_name: sah
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 68286985
num_examples: 1442
- name: test
num_bytes: 38534020
num_examples: 757
- name: validation
num_bytes: 17900397
num_examples: 405
- name: other
num_bytes: 62594222
num_examples: 1275
- name: validated
num_bytes: 124800352
num_examples: 2606
- name: invalidated
num_bytes: 3594160
num_examples: 66
download_size: 181245626
dataset_size: 315710136
- config_name: sl
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 66122967
num_examples: 2038
- name: test
num_bytes: 26872195
num_examples: 881
- name: validation
num_bytes: 16353097
num_examples: 556
- name: other
num_bytes: 79268518
num_examples: 2502
- name: validated
num_bytes: 148371273
num_examples: 4669
- name: invalidated
num_bytes: 3048301
num_examples: 92
download_size: 222751292
dataset_size: 340036351
- config_name: sv-SE
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 62727263
num_examples: 2331
- name: test
num_bytes: 59127381
num_examples: 2027
- name: validation
num_bytes: 53846355
num_examples: 2019
- name: other
num_bytes: 109970049
num_examples: 3043
- name: validated
num_bytes: 327049001
num_examples: 12552
- name: invalidated
num_bytes: 13462567
num_examples: 462
download_size: 421434184
dataset_size: 626182616
- config_name: ta
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 69052658
num_examples: 2009
- name: test
num_bytes: 67616865
num_examples: 1781
- name: validation
num_bytes: 63248009
num_examples: 1779
- name: other
num_bytes: 246650792
num_examples: 7428
- name: validated
num_bytes: 438961956
num_examples: 12652
- name: invalidated
num_bytes: 23587453
num_examples: 594
download_size: 679766097
dataset_size: 909117733
- config_name: th
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 100435725
num_examples: 2917
- name: test
num_bytes: 82030679
num_examples: 2188
- name: validation
num_bytes: 63237632
num_examples: 1922
- name: other
num_bytes: 95235301
num_examples: 2671
- name: validated
num_bytes: 245734783
num_examples: 7028
- name: invalidated
num_bytes: 18247080
num_examples: 467
download_size: 341305736
dataset_size: 604921200
- config_name: tr
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 57879052
num_examples: 1831
- name: test
num_bytes: 60268059
num_examples: 1647
- name: validation
num_bytes: 54914798
num_examples: 1647
- name: other
num_bytes: 10954154
num_examples: 325
- name: validated
num_bytes: 585777527
num_examples: 18685
- name: invalidated
num_bytes: 59288266
num_examples: 1726
download_size: 620848700
dataset_size: 829081856
- config_name: tt
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 348132697
num_examples: 11211
- name: test
num_bytes: 135120057
num_examples: 4485
- name: validation
num_bytes: 61690964
num_examples: 2127
- name: other
num_bytes: 62158038
num_examples: 1798
- name: validated
num_bytes: 767791517
num_examples: 25781
- name: invalidated
num_bytes: 10403128
num_examples: 287
download_size: 777153207
dataset_size: 1385296401
- config_name: uk
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 161925063
num_examples: 4035
- name: test
num_bytes: 138422211
num_examples: 3235
- name: validation
num_bytes: 135483169
num_examples: 3236
- name: other
num_bytes: 327979131
num_examples: 8161
- name: validated
num_bytes: 889863965
num_examples: 22337
- name: invalidated
num_bytes: 55745301
num_examples: 1255
download_size: 1218559031
dataset_size: 1709418840
- config_name: vi
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 6244454
num_examples: 221
- name: test
num_bytes: 6656365
num_examples: 198
- name: validation
num_bytes: 6531856
num_examples: 200
- name: other
num_bytes: 31315434
num_examples: 870
- name: validated
num_bytes: 19432595
num_examples: 619
- name: invalidated
num_bytes: 2981661
num_examples: 78
download_size: 51929480
dataset_size: 73162365
- config_name: vot
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 146467
num_examples: 3
- name: test
- name: validation
- name: other
num_bytes: 7963322
num_examples: 411
- name: validated
num_bytes: 146467
num_examples: 3
- name: invalidated
num_bytes: 107949
num_examples: 6
download_size: 7792602
dataset_size: 8364205
- config_name: zh-CN
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 793667379
num_examples: 18541
- name: test
num_bytes: 420202544
num_examples: 8760
- name: validation
num_bytes: 396096323
num_examples: 8743
- name: other
num_bytes: 381264783
num_examples: 8948
- name: validated
num_bytes: 1618113625
num_examples: 36405
- name: invalidated
num_bytes: 266234479
num_examples: 5305
download_size: 2184602350
dataset_size: 3875579133
- config_name: zh-HK
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 221459521
num_examples: 7506
- name: test
num_bytes: 217627041
num_examples: 5172
- name: validation
num_bytes: 196071110
num_examples: 5172
- name: other
num_bytes: 1319233252
num_examples: 38830
- name: validated
num_bytes: 1482087591
num_examples: 41835
- name: invalidated
num_bytes: 124170969
num_examples: 2999
download_size: 2774145806
dataset_size: 3560649484
- config_name: zh-TW
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 97323787
num_examples: 3507
- name: test
num_bytes: 85512325
num_examples: 2895
- name: validation
num_bytes: 80402637
num_examples: 2895
- name: other
num_bytes: 623801957
num_examples: 22477
- name: validated
num_bytes: 1568842090
num_examples: 61232
- name: invalidated
num_bytes: 100241443
num_examples: 3584
download_size: 2182836295
dataset_size: 2556124239
config_names:
- ab
- ar
- as
- br
- ca
- cnh
- cs
- cv
- cy
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy-NL
- ga-IE
- hi
- hsb
- hu
- ia
- id
- it
- ja
- ka
- kab
- ky
- lg
- lt
- lv
- mn
- mt
- nl
- or
- pa-IN
- pl
- pt
- rm-sursilv
- rm-vallader
- ro
- ru
- rw
- sah
- sl
- sv-SE
- ta
- th
- tr
- tt
- uk
- vi
- vot
- zh-CN
- zh-HK
- zh-TW
---
# Dataset Card for common_voice
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
<p><b>Deprecated:</b> Dataset "common_voice" is deprecated and will soon be deleted. Use datasets under <a href="https://huggingface.co/mozilla-foundation">mozilla-foundation</a> organisation instead. For example, you can load <a href="https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0">Common Voice 13</a> dataset via <code>load_dataset("mozilla-foundation/common_voice_13_0", "en")</code></p>
</div>
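As a minimal migration sketch (the `"en"` config and `streaming=True` flag are example choices; the mozilla-foundation releases are gated, so you may need to accept their terms on the Hub and authenticate first):
```python
from datasets import load_dataset

# Load the maintained Common Voice 13.0 release instead of this deprecated dataset.
# Streaming avoids downloading the full audio archive before iterating.
cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="train", streaming=True)

first_sample = next(iter(cv_13))
print(first_sample["sentence"])
```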
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://commonvoice.mozilla.org/en/datasets
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The Common Voice dataset consists of unique MP3 files and corresponding text files. Many of the 9,283 recorded hours in the dataset also include demographic metadata like age, sex, and accent that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 7,335 validated hours in 60 languages, but we're always adding more voices and languages. Take a look at our Languages page to request a language or start contributing.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, called `path`, and its `sentence`. Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{'accent': 'netherlands', 'age': 'fourties', 'client_id': 'bbbcb732e0f422150c30ff3654bbab572e2a617da107bca22ff8b89ab2e4f124d03b6a92c48322862f60bd0179ae07baf0f9b4f9c4e11d581e0cec70f703ba54', 'down_votes': 0, 'gender': 'male', 'locale': 'nl', 'path': 'nl/clips/common_voice_nl_23522441.mp3', 'segment': "''", 'sentence': 'Ik vind dat een dubieuze procedure.', 'up_votes': 2, 'audio': {'path': 'nl/clips/common_voice_nl_23522441.mp3', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 48000}}
```
### Data Fields
- `client_id`: An id for which client (voice) made the recording
- `path`: The path to the audio file
- `audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]` (see the example after this list).
- `sentence`: The sentence the user was prompted to speak
- `up_votes`: How many upvotes the audio file has received from reviewers
- `down_votes`: How many downvotes the audio file has received from reviewers
- `age`: The age of the speaker
- `gender`: The gender of the speaker
- `accent`: Accent of the speaker
- `locale`: The locale of the speaker
- `segment`: Usually an empty field
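The recommended access pattern for the `audio` field can be sketched as follows (illustrative only: this card is deprecated, and depending on your `datasets` version you may need `trust_remote_code=True` or the mozilla-foundation releases instead):
```python
from datasets import load_dataset

ds = load_dataset("common_voice", "nl", split="train")

# Preferred: index the row first so that only this one file is decoded and resampled.
audio = ds[0]["audio"]
print(audio["path"], audio["sampling_rate"], len(audio["array"]))

# Avoid ds["audio"][0]: it decodes every audio file in the split before indexing.
```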
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated by reviewers and received upvotes indicating that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test and train portions contain data that has been reviewed and deemed of high quality, split into dev, test and train sets.
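Any of these portions can be requested directly by name; a small sketch (split names follow the list above, and the same deprecation caveat applies):
```python
from datasets import load_dataset

# Load only the reviewer-approved recordings for a given language config.
validated_nl = load_dataset("common_voice", "nl", split="validated")
print(len(validated_nl))
```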
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
### Contributions
Thanks to [@BirgerMoell](https://github.com/BirgerMoell) for adding this dataset. |
AI4Sec/cti-bench | AI4Sec | 2024-08-17T00:58:57Z | 2,536 | 6 | [
"task_categories:zero-shot-classification",
"task_categories:question-answering",
"task_categories:text-classification",
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2406.07599",
"doi:10.57967/hf/2506",
"region:us",
"cti",
"cyber threat intelligence",
"llm"
] | [
"zero-shot-classification",
"question-answering",
"text-classification"
] | 2024-06-12T10:24:23Z | 2 | ---
license: cc-by-nc-sa-4.0
task_categories:
- zero-shot-classification
- question-answering
- text-classification
language:
- en
tags:
- cti
- cyber threat intelligence
- llm
pretty_name: CTIBench
size_categories:
- 1K<n<10K
configs:
- config_name: cti-mcq
data_files:
- split: test
path: "cti-mcq.tsv"
sep: "\t"
- config_name: cti-rcm
data_files:
- split: test
path: "cti-rcm.tsv"
sep: "\t"
- config_name: cti-vsp
data_files:
- split: test
path: "cti-vsp.tsv"
sep: "\t"
- config_name: cti-taa
data_files:
- split: test
path: "cti-taa.tsv"
sep: "\t"
- config_name: cti-ate
data_files:
- split: test
path: "cti-ate.tsv"
sep: "\t"
- config_name: cti-rcm-2021
data_files:
- split: test
path: "cti-rcm-2021.tsv"
sep: "\t"
---
# Dataset Card for CTIBench
<!-- Provide a quick summary of the dataset. -->
A set of benchmark tasks designed to evaluate large language models (LLMs) on cyber threat intelligence (CTI) tasks.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
CTIBench is a comprehensive suite of benchmark tasks and datasets designed to evaluate LLMs in the field of CTI.
Components:
- CTI-MCQ: A knowledge evaluation dataset with multiple-choice questions to assess the LLMs' understanding of CTI standards, threats, detection strategies, mitigation plans, and best practices. This dataset is built using authoritative sources and standards within the CTI domain, including NIST, MITRE, and GDPR.
- CTI-RCM: A practical task that involves mapping Common Vulnerabilities and Exposures (CVE) descriptions to Common Weakness Enumeration (CWE) categories. This task evaluates the LLMs' ability to understand and classify cyber threats.
- CTI-VSP: Another practical task that requires calculating the Common Vulnerability Scoring System (CVSS) scores. This task assesses the LLMs' ability to evaluate the severity of cyber vulnerabilities.
- CTI-TAA: A task that involves analyzing publicly available threat reports and attributing them to specific threat actors or malware families. This task tests the LLMs' capability to understand historical cyber threat behavior and identify meaningful correlations.
- **Curated by:** Md Tanvirul Alam & Dipkamal Bhusal (RIT)
<!--
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
-->
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
**Repository:** https://github.com/xashru/cti-bench
<!--
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
-->
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
CTIBench is designed to provide a comprehensive evaluation framework for large language models (LLMs) within the domain of cyber threat intelligence (CTI).
The datasets in CTIBench assess LLMs' understanding of CTI standards, threats, detection strategies, mitigation plans, and best practices,
and evaluate their ability to understand and analyze cyber threats and vulnerabilities.
<!--
### Direct Use
This section describes suitable use cases for the dataset.
[More Information Needed]
-->
<!--
### Out-of-Scope Use -->
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
<!--
[More Information Needed]
-->
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
The dataset consists of 5 TSV files, each corresponding to a different task. Each TSV file contains a "Prompt" column used to pose questions to the LLM.
Most files also include a "GT" column that contains the ground truth for the questions, except for "cti-taa.tsv".
The evaluation scripts for the different tasks are available in the associated GitHub repository.
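As a rough usage sketch (column names follow the description above; loading details may differ with your `datasets` version), one task config can be read like this:
```python
from datasets import load_dataset

# Each benchmark task is a separate config backed by a tab-separated file with a single "test" split.
mcq = load_dataset("AI4Sec/cti-bench", "cti-mcq", split="test")

example = mcq[0]
prompt = example["Prompt"]   # question posed to the LLM
answer = example["GT"]       # ground truth (not present in the cti-taa config)
print(prompt[:200], answer)
```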
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
This dataset was curated to evaluate the ability of LLMs to understand and analyze various aspects of open-source CTI.
### Source Data
<!--
This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...).
-->
The dataset includes URLs indicating the sources from which the data was collected.
<!--
#### Data Collection and Processing
This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc.
[More Information Needed]
-->
<!--
#### Who are the source data producers?
This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available.
[More Information Needed]
-->
#### Personal and Sensitive Information
<!--
State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process.
-->
The dataset does not contain any personal or sensitive information.
<!--
## Bias, Risks, and Limitations
This section is meant to convey both technical and sociotechnical limitations.
[More Information Needed]
-->
<!--
### Recommendations
This section is meant to convey recommendations with respect to the bias, risk, and technical limitations.
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
-->
## Citation
The paper can be found at: https://arxiv.org/abs/2406.07599
**BibTeX:**
```bibtex
@misc{alam2024ctibench,
title={CTIBench: A Benchmark for Evaluating LLMs in Cyber Threat Intelligence},
author={Md Tanvirul Alam and Dipkamal Bhusal and Le Nguyen and Nidhi Rastogi},
year={2024},
eprint={2406.07599},
archivePrefix={arXiv},
primaryClass={cs.CR}
}
```
<!--
**APA:**
[More Information Needed]
-->
<!--
## Glossary [optional]
If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card.
[More Information Needed]
-->
<!--
## More Information [optional]
[More Information Needed]
-->
<!--
## Dataset Card Authors [optional]
[More Information Needed]
-->
## Dataset Card Contact
Md Tanvirul Alam (ma8235 @ rit . edu)
|
arcee-ai/The-Tome | arcee-ai | 2024-08-15T16:08:27Z | 267 | 90 | [
"license:mit",
"size_categories:1M<n<10M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-07-22T01:51:38Z | null | ---
license: mit
---
<div align="center">
<img src="https://i.ibb.co/0jqCGH6/LEW5-CGBKRv-CWKNf-KYkf-k-Q.jpg" alt="The Tome" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 100%; height: auto;">
</div>
The Tome is a curated dataset designed for training large language models with a focus on instruction following. It was used in the training of our Arcee-Nova/Spark models, which was later merged with Qwen2-72B-Instruct (or 7B with the Spark model).
## Dataset Composition
- **Total Samples**: 1.75 million
- **Source**: Compiled from 9 publicly available datasets
The Tome is comprised of the following datasets:
```markdown
arcee-ai/infini-instruct-top-500k (BAAI/Infinity-Instruct)
TIGER-Lab/WebInstructSub (top-500k)
jondurbin/airoboros-3.2
gardner/glaive-function-calling-v2-sharegpt
arcee-ai/reasoning-sharegpt (SkunkworksAI/reasoning-0.01)
arcee-ai/self-instruct-sharegpt (bigcode/self-oss-instruct-sc2-exec-filter-50k)
cognitivecomputations/ultrainteract_trajectories_sharegpt
cognitivecomputations/SystemChat-2.0
arcee-ai/qwen2-72b-magpie-en
```
## Curation Process
The dataset underwent a curation process to ensure high-quality content:
1. **Reranker**: Applied for instruction following on Infini-Instruct and WebInstruct.
2. **Educational Value Scoring**: Used the fineweb-edu classifier on Infini-Instruct and WebInstruct.
3. **Composite Scoring**: Scores from the custom reranker and fineweb-edu classifier were averaged.
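A toy sketch of the composite-scoring idea (the field names, score values, and threshold below are illustrative assumptions, not the exact values used for The Tome):
```python
scored = [
    {"text": "example instruction A", "rerank_score": 0.91, "edu_score": 0.74},
    {"text": "example instruction B", "rerank_score": 0.35, "edu_score": 0.42},
]

def composite(example: dict) -> float:
    # Average the instruction-following reranker score and the fineweb-edu
    # educational-value score, both assumed to be normalized to [0, 1].
    return (example["rerank_score"] + example["edu_score"]) / 2

curated = [ex for ex in scored if composite(ex) >= 0.6]  # threshold is a made-up example
```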
## Usage in Model Training
The Tome was instrumental in the development of the Nova model, which was subsequently merged with Qwen2-72B-Instruct:
- **Merge Process**:
- Lower layers primarily from Qwen2-72B-Instruct
- Higher layers primarily from Nova-Premerge |
NbAiLab/NPSC | NbAiLab | 2024-08-14T13:57:39Z | 557 | 8 | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:no",
"language:nb",
"language:nn",
"license:cc0-1.0",
"region:us",
"speech-modeling"
] | [
"automatic-speech-recognition",
"audio-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- 'no'
- nb
- nn
license:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 2G<n<1B
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- audio-classification
pretty_name: NPSC
tags:
- speech-modeling
---
# Dataset Card for NbAiLab/NPSC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Data Fields](#data-fiels)
- [Dataset Creation](#dataset-creation)
- [Statistics](#statistics)
- [Document Types](#document-types)
- [Languages](#languages)
- [Publish Periode](#publish-periode)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.nb.no/sprakbanken/
- **Repository:** https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-58/
- **Paper:** https://www.nb.no/sprakbanken/
- **Point of Contact:** [Per Erik Solberg](mailto:[email protected])
The Norwegian Parliamentary Speech Corpus (NPSC) is a speech corpus made by the Norwegian Language Bank at the National Library of Norway in 2019-2021. The NPSC consists of recordings of speech from Stortinget, the Norwegian parliament, and corresponding orthographic transcriptions to Norwegian Bokmål and Norwegian Nynorsk. All transcriptions are done manually by trained linguists or philologists, and the manual transcriptions are subsequently proofread to ensure consistency and accuracy. Entire days of Parliamentary meetings are transcribed in the dataset.
This repository contains a version of the NPSC in the 🤗 Dataset Format. Note that the official release of the dataset, which can be found in [the repository of the Norwegian Language Bank](https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-58/), contains more information than the version found here, including word-level metadata, metadata about the speakers, and detailed documentation.
## How to Use
```python
# Loads the 16K Bokmål corpus in streaming mode
from datasets import load_dataset
data = load_dataset("NbAiLab/NPSC", "16K_mp3_bokmaal", streaming=True)
```
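A short follow-up sketch (assuming the default splits include `train`) shows how streamed examples can be inspected:
```python
from datasets import load_dataset

data = load_dataset("NbAiLab/NPSC", "16K_mp3_bokmaal", streaming=True)

# Inspect a few streamed examples: transcription, decoded audio, and clip duration.
for example in data["train"].take(3):
    duration_s = (example["end_time"] - example["start_time"]) / 1000  # times are in milliseconds
    audio = example["audio"]
    print(example["sentence_text"], audio["sampling_rate"], f"{duration_s:.2f}s")
```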
## Dataset Summary
The NPSC dataset contains JSON lines with language training data. The data loader will add audio data to this structure. Here is an example json object:
```json
{
"sentence_id": 49853,
"sentence_order": 0,
"speaker_id": 32,
"meeting_date": "20170110",
"speaker_name": "Olemic Thommessen",
"sentence_text": "Stortingets møte er lovlig satt",
"sentence_language_code": "nb-NO",
"text": "Stortingets møte er lovlig satt",
"start_time": 320246,
"end_time": 323590,
"normsentence_text": "Stortingets møte er lovlig satt",
"transsentence_text": "Stortingets møte er lovleg sett",
"translated": 1,
"audio": {"path": "audio/20170110-095504_320246_323590.wav","array": [.......]}
}
```
## Data Fields
|**Key** | **Type** | **Description** |
|:-----------|:------------|:------------|
|**sentence_id:** | Integer | Unique identifier of the sentence |
|**sentence_order** | Integer | A number indicating the order of the sentences in the meeting |
|**speaker_id** | Integer | The ID of the speaker. This can be linked to the original dataset containing thorough demographic and dialectal information about the speaker. |
|**meeting_date** | String | The date for the meeting in the format __yyyymmdd__ |
| **speaker_name** | String | Name of the speaker. All speakers were members of the Norwegian Parliament or members of the Norwegian Government at the meeting date |
| **sentence_text** | String | The sentence text. The transcribed text string of the sentence in non-normalized form. This is the text of the manual transcriptions, without any postprocessing (apart from corrections of known errors). It may contain interrupted words, non-standard words and function words with a pronunciation deviating from the written form. Detailed metadata about the words in the sentence can be found in the word-tokenized version of the corpus in the official release of the dataset. |
| **sentence_language_code** | String | The language code of the sentence. The following alternatives exist in the file: ['nb-NO', 'nn-NO', 'en-US']|
| **text** | String | sentence text. This is a copy of "sentence_text". It is included here to make it more convenient to interleave with other datasets.|
| **start_time** | Integer | The start time of the sentence in milliseconds. This time is relative to the start of audiofile of the entire meeting, which can be accessed in the official release |
| **end_time** | Integer | End time. See comment above. |
| **normsentence_text** | String | Normalized sentence text. In this version of the transcription, numbers and dates are written in digits on standardized formats, and common abbreviations are used. These modifications to the original transcriptions are produced automatically using normalization grammars |
| **transsentence_text** | String | Translated sentence text. Whenever the original transcription is in Bokmål (nb-NO), this field contains a machine-translated version in Nynorsk (nn-NO), and vice versa |
| **translated** | Integer | A flag indicating whether a machine-translated version has been produced or not. Sentences in en-US have not been translated |
| **audio** | Array | The data loader encodes the associated audio file and provides it as a dictionary containing 'path', 'array', and 'sampling_rate' |
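Putting the fields above together, the following is a minimal sketch of how one might iterate over the streamed corpus and compute sentence durations from the millisecond timestamps. It assumes the `train` split and the same `16K_mp3_bokmaal` config as in the example above (depending on your `datasets` version, a script-based loader may additionally require `trust_remote_code=True`):
```python
from itertools import islice
from datasets import load_dataset

# Stream the Bokmål 16 kHz MP3 config; the full archive is not downloaded up front
data = load_dataset("NbAiLab/NPSC", "16K_mp3_bokmaal", split="train", streaming=True)

for example in islice(data, 3):
    # start_time and end_time are in milliseconds, relative to the full meeting recording
    duration_s = (example["end_time"] - example["start_time"]) / 1000
    print(example["sentence_text"], f"({duration_s:.2f} s)")
```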
#### Initial Data Collection
The procedure for the dataset creation is described in detail in our paper.
## Statistics
| Feature | Value |
|:---------|-----------:|
| Duration, pauses included | 140.3 hours|
| Duration, pauses not included | 125.7 hours |
| Word count | 1.2 million |
| Sentence count | 64,531 |
| Language distribution | Nynorsk: 12.8%|
| | Bokmål: 87.2%|
| Gender distribution | Female: 38.3% |
| | Male: 61.7% |
## Considerations for Using the Data
This corpus contains speech data. All recordings are of members of Parliament in a public setting, and can be distributed without any restraints.
### Dataset Creators and Curators
The content of the dataset was created by the Norwegian Language Bank (Språkbanken) at the National Library of Norway. [Javier de la Rosa](mailto:[email protected]), [Freddy Wetjen](mailto:[email protected]), [Per Egil Kummervold](mailto:[email protected]), and [Andre Kaasen](mailto:[email protected]) all contributed in making this into a HuggingFace Dataset. Thanks to the HuggingFace team for assistance.
## License
The sound and the transcriptions are released under the [CC0 license](https://creativecommons.org/publicdomain/zero/1.0/). The curation of the HuggingFace Dataset is released under the [CC BY-SA 3.0 license](https://creativecommons.org/licenses/by-sa/3.0/).
### Citation Information
The following article gives detailed information about the corpus. Please refer to the article and this page if you are using this dataset:
```
@inproceedings{solberg2022norwegian,
title={The Norwegian Parliamentary Speech Corpus},
author={Solberg, Per Erik and Ortiz, Pablo},
booktitle={Proceedings of the 13th Language Resources and Evaluation Conference},
url={http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.106.pdf},
year={2022}
}
```
|
keithito/lj_speech | keithito | 2024-08-14T11:13:15Z | 1,034 | 54 | [
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"task_categories:text-to-audio",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unlicense",
"size_categories:10K<n<100K",
"region:us"
] | [
"automatic-speech-recognition",
"text-to-speech",
"text-to-audio"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- unlicense
multilinguality:
- monolingual
paperswithcode_id: ljspeech
pretty_name: LJ Speech
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- text-to-speech
- text-to-audio
task_ids: []
train-eval-index:
- config: main
task: automatic-speech-recognition
task_id: speech_recognition
splits:
train_split: train
col_mapping:
file: path
text: text
metrics:
- type: wer
name: WER
- type: cer
name: CER
dataset_info:
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 22050
- name: file
dtype: string
- name: text
dtype: string
- name: normalized_text
dtype: string
config_name: main
splits:
- name: train
num_bytes: 4667022
num_examples: 13100
download_size: 2748572632
dataset_size: 4667022
---
# Dataset Card for lj_speech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [The LJ Speech Dataset](https://keithito.com/LJ-Speech-Dataset/)
- **Repository:** [N/A]
- **Paper:** [N/A]
- **Leaderboard:** [Paperswithcode Leaderboard](https://paperswithcode.com/sota/text-to-speech-synthesis-on-ljspeech)
- **Point of Contact:** [Keith Ito](mailto:[email protected])
### Dataset Summary
This is a public domain speech dataset consisting of 13,100 short audio clips of a single speaker reading passages from 7 non-fiction books in English. A transcription is provided for each clip. Clips vary in length from 1 to 10 seconds and have a total length of approximately 24 hours.
The texts were published between 1884 and 1964, and are in the public domain. The audio was recorded in 2016-17 by the LibriVox project and is also in the public domain.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for Automatic Speech Recognition (ASR) or Text-to-Speech (TTS).
- `automatic-speech-recognition`: An ASR model is presented with an audio file and asked to transcribe the audio file to written text.
The most common ASR evaluation metric is the word error rate (WER).
- `text-to-speech`, `text-to-audio`: A TTS model is given a written text in natural language and asked to generate a speech audio file.
A reasonable evaluation metric is the mean opinion score (MOS) of audio quality.
The dataset has an active leaderboard which can be found at https://paperswithcode.com/sota/text-to-speech-synthesis-on-ljspeech
### Languages
The transcriptions and audio are in English.
## Dataset Structure
### Data Instances
A data point comprises the path to the audio file, called `file` and its transcription, called `text`.
A normalized version of the text is also provided.
```
{
'id': 'LJ002-0026',
'file': '/datasets/downloads/extracted/05bfe561f096e4c52667e3639af495226afe4e5d08763f2d76d069e7a453c543/LJSpeech-1.1/wavs/LJ002-0026.wav',
'audio': {'path': '/datasets/downloads/extracted/05bfe561f096e4c52667e3639af495226afe4e5d08763f2d76d069e7a453c543/LJSpeech-1.1/wavs/LJ002-0026.wav',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346,
0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 22050},
'text': 'in the three years between 1813 and 1816,'
'normalized_text': 'in the three years between eighteen thirteen and eighteen sixteen,',
}
```
Each audio file is a single-channel 16-bit PCM WAV with a sample rate of 22050 Hz.
### Data Fields
- id: unique id of the data sample.
- file: a path to the downloaded audio file in .wav format.
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- normalized_text: the transcription with numbers, ordinals, and monetary units expanded into full words.
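As a concrete illustration of the access pattern described above, the sketch below loads the corpus and decodes a single clip by querying the row before the `audio` column. Note that this downloads the full ~2.7 GB archive, and depending on your `datasets` version the script-based loader may require `trust_remote_code=True`:
```python
from datasets import load_dataset

ds = load_dataset("keithito/lj_speech", split="train")

sample = ds[0]            # query the row first ...
audio = sample["audio"]   # ... then decode only this clip
print(sample["id"], sample["normalized_text"])
print(audio["sampling_rate"], audio["array"].shape)
```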
### Data Splits
The dataset is not pre-split. Some statistics:
- Total Clips: 13,100
- Total Words: 225,715
- Total Characters: 1,308,678
- Total Duration: 23:55:17
- Mean Clip Duration: 6.57 sec
- Min Clip Duration: 1.11 sec
- Max Clip Duration: 10.10 sec
- Mean Words per Clip: 17.23
- Distinct Words: 13,821
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
This dataset consists of excerpts from the following works:
- Morris, William, et al. Arts and Crafts Essays. 1893.
- Griffiths, Arthur. The Chronicles of Newgate, Vol. 2. 1884.
- Roosevelt, Franklin D. The Fireside Chats of Franklin Delano Roosevelt. 1933-42.
- Harland, Marion. Marion Harland's Cookery for Beginners. 1893.
- Rolt-Wheeler, Francis. The Science - History of the Universe, Vol. 5: Biology. 1910.
- Banks, Edgar J. The Seven Wonders of the Ancient World. 1916.
- President's Commission on the Assassination of President Kennedy. Report of the President's Commission on the Assassination of President Kennedy. 1964.
Some details about normalization:
- The normalized transcription has the numbers, ordinals, and monetary units expanded into full words (UTF-8)
- 19 of the transcriptions contain non-ASCII characters (for example, LJ016-0257 contains "raison d'être").
- The following abbreviations appear in the text. They may be expanded as follows:
| Abbreviation | Expansion |
|--------------|-----------|
| Mr. | Mister |
| Mrs. | Misess (*) |
| Dr. | Doctor |
| No. | Number |
| St. | Saint |
| Co. | Company |
| Jr. | Junior |
| Maj. | Major |
| Gen. | General |
| Drs. | Doctors |
| Rev. | Reverend |
| Lt. | Lieutenant |
| Hon. | Honorable |
| Sgt. | Sergeant |
| Capt. | Captain |
| Esq. | Esquire |
| Ltd. | Limited |
| Col. | Colonel |
| Ft. | Fort |
(*) there's no standard expansion for "Mrs."
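If the normalization needs to be reproduced or extended, the table above maps directly onto a small lookup. The helper below is purely illustrative (it is not part of the dataset release) and simply substitutes the listed abbreviations as whole tokens:
```python
import re

# Expansion table copied from the list above
ABBREVIATIONS = {
    "Mr.": "Mister", "Mrs.": "Misess", "Dr.": "Doctor", "No.": "Number",
    "St.": "Saint", "Co.": "Company", "Jr.": "Junior", "Maj.": "Major",
    "Gen.": "General", "Drs.": "Doctors", "Rev.": "Reverend", "Lt.": "Lieutenant",
    "Hon.": "Honorable", "Sgt.": "Sergeant", "Capt.": "Captain", "Esq.": "Esquire",
    "Ltd.": "Limited", "Col.": "Colonel", "Ft.": "Fort",
}

def expand_abbreviations(text: str) -> str:
    # Longer abbreviations are matched first to avoid partial replacements
    pattern = re.compile("|".join(re.escape(k) for k in sorted(ABBREVIATIONS, key=len, reverse=True)))
    return pattern.sub(lambda m: ABBREVIATIONS[m.group(0)], text)

print(expand_abbreviations("Gen. Smith met Dr. Jones at Ft. Knox."))
# -> "General Smith met Doctor Jones at Fort Knox."
```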
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
- The audio clips range in length from approximately 1 second to 10 seconds. They were segmented automatically based on silences in the recording. Clip boundaries generally align with sentence or clause boundaries, but not always.
- The text was matched to the audio manually, and a QA pass was done to ensure that the text accurately matched the words spoken in the audio.
#### Who are the annotators?
Recordings by Linda Johnson from LibriVox. Alignment and annotation by Keith Ito.
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
- The original LibriVox recordings were distributed as 128 kbps MP3 files. As a result, they may contain artifacts introduced by the MP3 encoding.
## Additional Information
### Dataset Curators
The dataset was initially created by Keith Ito and Linda Johnson.
### Licensing Information
Public Domain ([LibriVox](https://librivox.org/pages/public-domain/))
### Citation Information
```
@misc{ljspeech17,
author = {Keith Ito and Linda Johnson},
title = {The LJ Speech Dataset},
howpublished = {\url{https://keithito.com/LJ-Speech-Dataset/}},
year = 2017
}
```
### Contributions
Thanks to [@anton-l](https://github.com/anton-l) for adding this dataset. |
openslr/librispeech_asr | openslr | 2024-08-14T10:48:50Z | 12,204 | 144 | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"task_ids:speaker-identification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"region:us"
] | [
"automatic-speech-recognition",
"audio-classification"
] | 2022-03-02T23:29:22Z | null | ---
pretty_name: LibriSpeech
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: librispeech-1
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- audio-classification
task_ids:
- speaker-identification
dataset_info:
- config_name: clean
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.100
num_bytes: 6619683041
num_examples: 28539
- name: train.360
num_bytes: 23898214592
num_examples: 104014
- name: validation
num_bytes: 359572231
num_examples: 2703
- name: test
num_bytes: 367705423
num_examples: 2620
download_size: 30121377654
dataset_size: 31245175287
- config_name: other
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.500
num_bytes: 31810256902
num_examples: 148688
- name: validation
num_bytes: 337283304
num_examples: 2864
- name: test
num_bytes: 352396474
num_examples: 2939
download_size: 31236565377
dataset_size: 32499936680
- config_name: all
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.clean.100
num_bytes: 6627791685
num_examples: 28539
- name: train.clean.360
num_bytes: 23927767570
num_examples: 104014
- name: train.other.500
num_bytes: 31852502880
num_examples: 148688
- name: validation.clean
num_bytes: 359505691
num_examples: 2703
- name: validation.other
num_bytes: 337213112
num_examples: 2864
- name: test.clean
num_bytes: 368449831
num_examples: 2620
- name: test.other
num_bytes: 353231518
num_examples: 2939
download_size: 61357943031
dataset_size: 63826462287
---
# Dataset Card for librispeech_asr
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [LibriSpeech ASR corpus](http://www.openslr.org/12)
- **Repository:** [Needs More Information]
- **Paper:** [LibriSpeech: An ASR Corpus Based On Public Domain Audio Books](https://www.danielpovey.com/files/2015_icassp_librispeech.pdf)
- **Leaderboard:** [The 🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
- **Point of Contact:** [Daniel Povey](mailto:[email protected])
### Dataset Summary
LibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`, `audio-speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at https://huggingface.co/spaces/huggingface/hf-speech-bench. The leaderboard ranks models uploaded to the Hub based on their WER. An external leaderboard at https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-clean ranks the latest models from research and academia.
### Languages
The audio is in English. There are two configurations: `clean` and `other`.
The speakers in the corpus were ranked according to the WER of the transcripts of a model trained on
a different dataset, and were divided roughly in the middle,
with the lower-WER speakers designated as "clean" and the higher WER speakers designated as "other".
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided.
```
{'chapter_id': 141231,
'file': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'audio': {'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346,
0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'id': '1272-141231-0000',
'speaker_id': 1272,
'text': 'A MAN SAID TO THE UNIVERSE SIR I EXIST'}
```
### Data Fields
- file: A path to the downloaded audio file in .flac format.
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- id: unique id of the data sample.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
- chapter_id: id of the audiobook chapter which includes the transcription.
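The same query-the-row-before-the-column advice applies when loading LibriSpeech itself. A minimal sketch, assuming the `clean` config and its 100-hour training split, and using streaming to avoid the ~30 GB download:
```python
from datasets import load_dataset

librispeech = load_dataset("openslr/librispeech_asr", "clean", split="train.100", streaming=True)

first = next(iter(librispeech))
print(first["speaker_id"], first["chapter_id"], first["text"])
print(first["audio"]["sampling_rate"])  # 16000
```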
### Data Splits
The size of the corpus makes it impractical, or at least inconvenient
for some users, to distribute it as a single large archive. Thus the
training portion of the corpus is split into three subsets, with approximate size 100, 360 and 500 hours respectively.
A simple automatic
procedure was used to select the audio in the first two sets to be, on
average, of higher recording quality and with accents closer to US
English. An acoustic model was trained on WSJ’s si-84 data subset
and was used to recognize the audio in the corpus, using a bigram
LM estimated on the text of the respective books. We computed the
Word Error Rate (WER) of this automatic transcript relative to our
reference transcripts obtained from the book texts.
The speakers in the corpus were ranked according to the WER of
the WSJ model’s transcripts, and were divided roughly in the middle,
with the lower-WER speakers designated as "clean" and the higher-WER speakers designated as "other".
For "clean", the data is split into train, validation, and test set. The train set is further split into train.100 and train.360
respectively accounting for 100h and 360h of the training data.
For "other", the data is split into train, validation, and test set. The train set contains approximately 500h of recorded speech.
| | Train.500 | Train.360 | Train.100 | Valid | Test |
| ----- | ------ | ----- | ---- | ---- | ---- |
| clean | - | 104014 | 28539 | 2703 | 2620|
| other | 148688 | - | - | 2864 | 2939 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially created by Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur.
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
pages={5206--5210},
year={2015},
organization={IEEE}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
HuggingFaceTB/cosmopedia | HuggingFaceTB | 2024-08-12T22:05:49Z | 38,739 | 608 | [
"language:en",
"license:apache-2.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2309.05463",
"arxiv:2306.11644",
"region:us",
"synthetic"
] | [] | 2024-02-18T20:23:48Z | null | ---
dataset_info:
- config_name: auto_math_text
features:
- name: prompt
dtype: string
- name: text_token_length
dtype: int64
- name: text
dtype: string
- name: seed_data
dtype: string
- name: format
dtype: string
- name: audience
dtype: string
splits:
- name: train
num_bytes: 8777587297.907892
num_examples: 1949895
download_size: 4461401898
dataset_size: 8777587297.907892
- config_name: khanacademy
features:
- name: prompt
dtype: string
- name: text_token_length
dtype: int64
- name: text
dtype: string
- name: seed_data
dtype: string
- name: format
dtype: string
- name: audience
dtype: string
splits:
- name: train
num_bytes: 108591354.09210858
num_examples: 24123
download_size: 49139761
dataset_size: 108591354.09210858
- config_name: openstax
features:
- name: text_token_length
dtype: int64
- name: prompt
dtype: string
- name: text
dtype: string
- name: seed_data
dtype: string
- name: format
dtype: string
- name: audience
dtype: string
splits:
- name: train
num_bytes: 667837450
num_examples: 126332
download_size: 346992522
dataset_size: 667837450
- config_name: stanford
features:
- name: text_token_length
dtype: int64
- name: prompt
dtype: string
- name: text
dtype: string
- name: seed_data
dtype: string
- name: format
dtype: string
- name: audience
dtype: string
splits:
- name: train
num_bytes: 6341291506
num_examples: 1020024
download_size: 3302284560
dataset_size: 6341291506
- config_name: stories
features:
- name: text
dtype: string
- name: prompt
dtype: string
- name: text_token_length
dtype: int64
- name: seed_data
dtype: string
- name: format
dtype: string
- name: audience
dtype: string
splits:
- name: train
num_bytes: 21314739648
num_examples: 4992964
download_size: 11902294709
dataset_size: 21314739648
- config_name: web_samples_v1
features:
- name: text_token_length
dtype: int64
- name: prompt
dtype: string
- name: text
dtype: string
- name: seed_data
dtype: string
- name: format
dtype: string
- name: audience
dtype: string
splits:
- name: train
num_bytes: 69075726295
num_examples: 12426348
download_size: 38978124936
dataset_size: 69075726295
- config_name: web_samples_v2
features:
- name: text_token_length
dtype: int64
- name: prompt
dtype: string
- name: text
dtype: string
- name: seed_data
dtype: string
- name: format
dtype: string
- name: audience
dtype: string
splits:
- name: train
num_bytes: 58711802939
num_examples: 10345867
download_size: 32658254617
dataset_size: 58711802939
- config_name: wikihow
features:
- name: text_token_length
dtype: int64
- name: prompt
dtype: string
- name: text
dtype: string
- name: seed_data
dtype: string
- name: format
dtype: string
- name: audience
dtype: string
splits:
- name: train
num_bytes: 892720528
num_examples: 179191
download_size: 502284600
dataset_size: 892720528
configs:
- config_name: auto_math_text
data_files:
- split: train
path: data/auto_math_text/train-*
- config_name: khanacademy
data_files:
- split: train
path: data/khanacademy/train-*
- config_name: openstax
data_files:
- split: train
path: data/openstax/train-*
- config_name: stanford
data_files:
- split: train
path: data/stanford/train-*
- config_name: stories
data_files:
- split: train
path: data/stories/train-*
- config_name: web_samples_v1
data_files:
- split: train
path: data/web_samples_v1/train-*
- config_name: web_samples_v2
data_files:
- split: train
path: data/web_samples_v2/train-*
- config_name: wikihow
data_files:
- split: train
path: data/wikihow/train-*
license: apache-2.0
language:
- en
tags:
- synthetic
---
# Cosmopedia v0.1
<center>
<img src="https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/8a9ZTW8sC4utjEPIrZegN.png" alt="Cosmopedia v0.1" width="600" height="300">
<p><em>Image generated by DALL-E, the <a href="https://huggingface.co/datasets/HuggingFaceTB/miscellaneous/blob/main/cosmopedia_dalle_prompt_by_mixtral.txt">prompt</a> was generated by Mixtral-8x7B-Instruct-v0.1</em></p>
</center>
**Note: Cosmopedia v0.2 is available at [smollm-corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus)**
```
User: What do you think "Cosmopedia" could mean? Hint: in our case it's not related to cosmology.
Mixtral-8x7B-Instruct-v0.1: A possible meaning for "Cosmopedia" could be an encyclopedia or collection of information about
different cultures, societies, and topics from around the world, emphasizing diversity and global connectedness.
```
**Cosmopedia** is a dataset of synthetic textbooks, blogposts, stories, posts and WikiHow articles generated by [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1). The dataset contains over **30 million files** and **25 billion tokens**, making it the largest open synthetic dataset to date.
It covers a variety of topics; we tried to map world knowledge present in Web datasets like [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) and [RedPajama](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T), and generate synthetic content that covers them. This is the v0.1 of Cosmopedia, with ample room for improvement and topics to be more comprehensively covered. We hope this dataset will help the community's research efforts in the increasingly intriguing domain of synthetic data. You can find a clickable map by Nomic at [https://atlas.nomic.ai/map/cosmopedia](https://atlas.nomic.ai/map/cosmopedia).
This work is inspired by the great work of [Phi1.5](https://huggingface.co/papers/2309.05463). You can find more details about the dataset in our **blog post**: https://huggingface.co/blog/cosmopedia
# TL;DR
This is a synthetic dataset of 30M samples generated by [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1). It contains 8 splits depending on the source of the seed samples we use in the prompts; the model is asked to generate content related to them. The splits range from web samples to educational resources like Stanford, OpenStax and KhanAcademy; we also use some instruction-tuning datasets as seed samples for stories.
Here's how you can load a dataset split:
```python
from datasets import load_dataset
ds = load_dataset("HuggingFaceTB/cosmopedia", "stories", split="train", num_proc=12)
ds[0]
```
If you want a smaller subset of the dataset check [Cosmopedia-100k](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia-100k). We also trained a 1.8B model, [Cosmo-1B](https://huggingface.co/HuggingFaceTB/cosmopedian-1b), on Cosmopedia.
# Dataset splits
The prompts are all based on the concept of using a seed sample (for example an extract from a web page) and asking the model to generate new content (textbook, story, blogpost..) related to that seed sample.
The dataset consists of 8 splits depending on the source of the seed data used in the split. Some seed samples may appear more than once when we ask for a different style (e.g. academic textbook vs. blogpost) or audience (e.g. young children vs. college students). For example, each sample in `stanford` was used with 4 different prompt styles and audiences, check the `format` and `audience` columns for more details.
We observed that tailoring the audience and prompt style accordingly significantly enhances diversity; the proportion of duplicates eliminated via MinHash was under 1%.
The graph below shows the distribution of seed datasets, generations formats and audiences in Cosmopedia:
<center>
<img src="https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/V7MGV2OrCfLO5TxKPUXs4.png" alt="distributions" width="1000" height="500">
</center>
Below are the 8 splits:
- `web_samples_v1`: this and `web_samples_v2` are the largest splits (they make up ~75% of the dataset), where we use samples from an internal web dataset similar to [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb). These samples were selected based on their topic, using a clustering method explained in the section below.
- `web_samples_v2`: similar to `web_samples_v1`, using different samples. We call it v2 because we refined the prompts for this split (e.g. asking for more depth over breadth in the concepts explanations and requesting the model to not generate a title and introductory sentences, which might be redundant across samples).
- `stanford`: we scraped course outlines from [stanford.edu](https://explorecourses.stanford.edu/search?q=all%20courses), and each time we prompt the model with one of the course units.
- `stories`: we generated stories to add some commonsense and day-to-day knowledge aspect to the dataset. For this split we use samples from [UltraChat](https://huggingface.co/datasets/stingning/ultrachat) -only questions about the world [subset](https://huggingface.co/datasets/loubnabnl/ultrachat_questions_about_world)- and [OpenHermes2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5). These are synthetic instruction-tuning datasets that are already curated
and cover a wide range of topics.
- `wikihow`: in this split, we asked the model to generate WikiHow articles from WikiHow titles that we scraped, the list is available [here](https://github.com/huggingface/cosmopedia/blob/main/prompts/wikihow/wikihowcom-20231012-titles.txt). Note that you can find more WikiHow articles in the other splits by looking for it in the `format` column.
- `openstax`: we scraped course outlines with unit introductions from [OpenStax](https://openstax.org/), a resource suggested by [AFAIK](https://afaik.io/) team.
- `khanacademy`: we scraped the outlines for the courses on [KhanAcademy](https://www.khanacademy.org), and asked the model to generate a textbook for each.
- `auto_math_text`: to improve the science knowledge of the model, we use samples from the [AutoMathText](https://huggingface.co/datasets/math-ai/AutoMathText/) dataset as seed samples. The dataset covers more than just math. See this clustering [plot](https://huggingface.co/datasets/HuggingFaceTB/miscellaneous/blob/main/AMT_plots/topics_distpng.png) we made.
### Dataset features
The dataset has the following features:
- prompt: the prompt we used to generate the content with Mixtral-8x7B-Instruct-v0.1.
- text: the synthetic generated content.
- seed_data: the prompts include some text from another dataset/an external source, `seed_data` is the name of that dataset (e.g. web, Stanford courses...)
- text_token_length: the number of tokens in `text`, computed using [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1)'s tokenizer
- format: the style of `text`, this can for example be a textbook, a blogpost, a story.. It can also be inferred from the prompt.
- audience: the target audience defined in the prompt
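Because `format` and `audience` are plain string columns, a split can be narrowed to one style of generation with a standard `filter` call. The sketch below assumes the `stanford` split; the exact label strings for `format` and `audience` are assumptions, so inspect them first with `unique(...)`:
```python
from datasets import load_dataset

ds = load_dataset("HuggingFaceTB/cosmopedia", "stanford", split="train", num_proc=12)

print(ds.unique("format"), ds.unique("audience"))  # check the actual label strings

# Keep only textbook-style generations aimed at college students (label values assumed)
subset = ds.filter(lambda x: x["format"] == "textbook" and x["audience"] == "college_students")
print(len(subset))
```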
# Dataset creation
The "Dataset splits" section already provides an overview of the data creation pipeline. In this section, we will explain the topic clustering method for web samples and our iterative process for refining the prompts, in addition to decontamination.
### Topic clustering
Our goal was to generate a vast quantity of synthetic data covering a wide range of topics (essentially, anything useful found on the web) in a cleaner format like textbooks. A natural strategy was to begin with web samples, using them as seeds for the generation.
This approach, employed by Li et al. in [Phi-1.5](https://huggingface.co/papers/2309.05463), appears to be the most scalable method for synthetic data generation, given the availability of web datasets with trillions of tokens.
The prompted model will use an extract from these seed samples as a reference for generation, so the topic might matter more than the actual content of the file. To filter out less relevant topics and to provide the model with context for generating content, we first clustered millions of files from a web dataset.
Then we prompted Mixtral 8x7B with extracts from 10 random samples in each cluster and asked it to find the topic they have in common and to provide an educational score for that topic. The dataset with clusters and topics is available in this [demo](https://huggingface.co/spaces/HuggingFaceTB/inspect_web_clusters) for inspection, and the code is available in [text-clustering](https://github.com/huggingface/text-clustering).
The educational score seems to work for "very uneducational" topics like adult content and "highly educational" topics like College Mathematics, but isn't very relevant in-between. So we manually inspect the 145 clusters we find, and discard 35 of them. The final list of topics is available [here](https://github.com/huggingface/cosmopedia/blob/dd5cd1f7fcfae255c9cfbe704ba2187965523457/prompts/web_samples/filter_and_classify_clusters.py#L8).
We don't do any further filtering inside the clusters. We include the topic of the sample in the prompt 100% of the time for `web_samples_v1`, but only 50% of the time in `web_samples_v2`, where we tried to refine the prompts, in case the topic isn't accurate or the topic list isn't comprehensive.
Below are the clusters found in Cosmopedia:
<center>
<img src="https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/jMKGaE_UnEfH3j8iZYXVN.png" alt="Cosmopedia clusters" width="1200" height="750">
<p><em>Cosmopedia clusters.</em></p>
</center>
### Diversity
We find that when using the same seed sample multiple times, changing the generation style and/or the audience and their target format results in different generations, covering the same topic from different angles. For example when asking the model for a children's textbook, we needed to remind it that it can't use complex concepts and that the tone should be adapted to children. The same goes when asking for textbooks for college students vs. for researchers: we had to emphasize the level of depth we wanted for each, and how academic the textbooks should be.
By carefully iterating on the prompts using [HuggingChat](https://huggingface.co/chat/) and then generating a few hundred samples, we managed to reduce the redundancy. For example, we noticed that the model always started the stories with "Once upon a time" and the forum posts with "A few years back"; asking it to explicitly avoid these sentences when starting the generation results in more diverse beginnings (don't worry, "Once upon a time" still appears in stories!). The same goes for blogposts and textbooks where the introductory sentences were initially repetitive.
Running MinHash deduplication on the splits detects less than 1% of the files as duplicates.
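For readers who want to run a comparable check on their own generations, the sketch below shows the general shape of MinHash near-duplicate detection with the `datasketch` library. It is not the exact pipeline used for Cosmopedia (that code lives in the GitHub repository linked in the Code section), and the shingle size and threshold are illustrative assumptions:
```python
from datasketch import MinHash, MinHashLSH

def minhash(text: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for shingle in {text[i:i + 5] for i in range(len(text) - 4)}:  # character 5-grams
        m.update(shingle.encode("utf-8"))
    return m

lsh = MinHashLSH(threshold=0.8, num_perm=128)
docs = {"a": "once upon a time there was a curious child",
        "b": "once upon a time there was a curious child",
        "c": "a completely different text about thermodynamics"}
for key, text in docs.items():
    lsh.insert(key, minhash(text))

print(lsh.query(minhash(docs["a"])))  # keys of near-duplicates of document "a"
```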
### Decontamination
Given how we generate synthetic content, there is a possibility that the seed samples or the model's training data could have benchmarks contamination. Therefore, we run a decontamination pipeline to make sure we don't have any samples from the test benchmarks in our dataset.
We use a 10-gram overlap to retrieve potentially contaminated samples, similarly to [Phi-1](https://huggingface.co/papers/2306.11644).
After retrieving the candidates, we run a diff between the dataset sample and the benchmark sample using `difflib.SequenceMatcher` and discard the sample if `len(matched_substrings)/len(benchmark_sample) > 0.5`.
We run decontamination against all the benchmarks we evaluated the Cosmo-1B model on: MMLU, HellaSwag, PIQA, SIQA, Winogrande, OpenBookQA, ARC-easy, ARC-challenge.
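A stripped-down version of that overlap test might look like the following. It is a sketch of the logic described above (the 10-gram candidate retrieval step is omitted), not the exact code from the Cosmopedia repository:
```python
from difflib import SequenceMatcher

def is_contaminated(sample: str, benchmark_sample: str, threshold: float = 0.5) -> bool:
    matcher = SequenceMatcher(None, sample, benchmark_sample, autojunk=False)
    # Total length of all matching substrings between the generation and the benchmark sample
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / len(benchmark_sample) > threshold
```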
We report the number of contaminated samples removed from each dataset split, as well as the number of unique benchmark samples that they correspond to (in brackets):
| Dataset group | ARC Easy | ARC Challenge | BoolQ | HellaSwag | MMLU | OpenBookQA | PIQA | WinoGrande |
|-----------------------------------------------|----------|---------------|----------------|-----------|------|------------|------|------------|
| web_samples_v1 + web_samples_v2 + stanford + openstax | 30 (13) | 19 (3) | 386 (41) | 6 (5) | 1 (1) | 0 (0) | 5 (3) | 0 (0) |
| auto_math_text + khanacademy | 4 (4) | 13 (2) | 34 (7) | 1 (1) | 0 (0) | 0 (0) | 0 (0) | 0 (0) |
| stories | 33 (20) | 20 (12) | 27 (21) | 3 (3) | 1 (1) | 2 (2) | 6 (4) | 3 (2) |
## Code
The code for topic clustering of the web samples, building the prompts, content generation and data deduplication & decontamination can be found in the [Cosmopedia GitHub repository](https://github.com/huggingface/cosmopedia).
## Citation
```
@software{benallal2024cosmopedia,
author = {Ben Allal, Loubna and Lozhkov, Anton and Penedo, Guilherme and Wolf, Thomas and von Werra, Leandro},
title = {Cosmopedia},
month = February,
year = 2024,
url = {https://huggingface.co/datasets/HuggingFaceTB/cosmopedia}
}
``` |
facebook/multilingual_librispeech | facebook | 2024-08-12T16:50:57Z | 6,300 | 131 | [
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"task_categories:text-to-audio",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"source_datasets:original",
"language:de",
"language:nl",
"language:fr",
"language:it",
"language:es",
"language:pt",
"language:pl",
"language:en",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2012.03411",
"region:us"
] | [
"automatic-speech-recognition",
"text-to-speech",
"text-to-audio"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- de
- nl
- fr
- it
- es
- pt
- pl
- en
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- text-to-speech
- text-to-audio
paperswithcode_id: multilingual-librispeech
pretty_name: MultiLingual LibriSpeech
dataset_info:
- config_name: dutch
features:
- name: audio
dtype: audio
- name: original_path
dtype: string
- name: begin_time
dtype: float64
- name: end_time
dtype: float64
- name: transcript
dtype: string
- name: audio_duration
dtype: float64
- name: speaker_id
dtype: string
- name: chapter_id
dtype: string
- name: file
dtype: string
- name: id
dtype: string
splits:
- name: dev
num_bytes: 199959986
num_examples: 3095
- name: test
num_bytes: 199298575
num_examples: 3075
- name: train
num_bytes: 23931679031
num_examples: 374287
- name: 9_hours
num_bytes: 139884664.668
num_examples: 2153
- name: 1_hours
num_bytes: 15462181
num_examples: 234
download_size: 24376256629
dataset_size: 24486284437.668
- config_name: french
features:
- name: audio
dtype: audio
- name: original_path
dtype: string
- name: begin_time
dtype: float64
- name: end_time
dtype: float64
- name: transcript
dtype: string
- name: audio_duration
dtype: float64
- name: speaker_id
dtype: string
- name: chapter_id
dtype: string
- name: file
dtype: string
- name: id
dtype: string
splits:
- name: dev
num_bytes: 157923970.696
num_examples: 2416
- name: test
num_bytes: 158352158.582
num_examples: 2426
- name: train
num_bytes: 16984935842.04
num_examples: 258213
- name: 9_hours
num_bytes: 142796680.609
num_examples: 2167
- name: 1_hours
num_bytes: 15675831
num_examples: 241
download_size: 17381581776
dataset_size: 17459684482.927002
- config_name: german
features:
- name: audio
dtype: audio
- name: original_path
dtype: string
- name: begin_time
dtype: float64
- name: end_time
dtype: float64
- name: transcript
dtype: string
- name: audio_duration
dtype: float64
- name: speaker_id
dtype: string
- name: chapter_id
dtype: string
- name: file
dtype: string
- name: id
dtype: string
splits:
- name: dev
num_bytes: 224293581.302
num_examples: 3469
- name: test
num_bytes: 225756069.096
num_examples: 3394
- name: train
num_bytes: 31050881388
num_examples: 469942
- name: 9_hours
num_bytes: 142777983.118
num_examples: 2194
- name: 1_hours
num_bytes: 15714704
num_examples: 241
download_size: 31526161821
dataset_size: 31659423725.516
- config_name: italian
features:
- name: audio
dtype: audio
- name: original_path
dtype: string
- name: begin_time
dtype: float64
- name: end_time
dtype: float64
- name: transcript
dtype: string
- name: audio_duration
dtype: float64
- name: speaker_id
dtype: string
- name: chapter_id
dtype: string
- name: file
dtype: string
- name: id
dtype: string
splits:
- name: dev
num_bytes: 81607596.048
num_examples: 1248
- name: test
num_bytes: 83216752.046
num_examples: 1262
- name: train
num_bytes: 3896742625
num_examples: 59623
- name: 9_hours
num_bytes: 141671904.428
num_examples: 2173
- name: 1_hours
num_bytes: 15560398
num_examples: 240
download_size: 4200633596
dataset_size: 4218799275.522
- config_name: polish
features:
- name: audio
dtype: audio
- name: original_path
dtype: string
- name: begin_time
dtype: float64
- name: end_time
dtype: float64
- name: transcript
dtype: string
- name: audio_duration
dtype: float64
- name: speaker_id
dtype: string
- name: chapter_id
dtype: string
- name: file
dtype: string
- name: id
dtype: string
splits:
- name: dev
num_bytes: 32746725
num_examples: 512
- name: test
num_bytes: 33735044
num_examples: 520
- name: train
num_bytes: 1638889846
num_examples: 25043
- name: 9_hours
num_bytes: 142005461
num_examples: 2173
- name: 1_hours
num_bytes: 15681216
num_examples: 238
download_size: 1855342312
dataset_size: 1863058292
- config_name: portuguese
features:
- name: audio
dtype: audio
- name: original_path
dtype: string
- name: begin_time
dtype: float64
- name: end_time
dtype: float64
- name: transcript
dtype: string
- name: audio_duration
dtype: float64
- name: speaker_id
dtype: string
- name: chapter_id
dtype: string
- name: file
dtype: string
- name: id
dtype: string
splits:
- name: dev
num_bytes: 57533473
num_examples: 826
- name: test
num_bytes: 59141979
num_examples: 871
- name: train
num_bytes: 2518553713.946
num_examples: 37533
- name: 9_hours
num_bytes: 141641902.42
num_examples: 2116
- name: 1_hours
num_bytes: 15697139
num_examples: 236
download_size: 2780836500
dataset_size: 2792568207.366
- config_name: spanish
features:
- name: audio
dtype: audio
- name: original_path
dtype: string
- name: begin_time
dtype: float64
- name: end_time
dtype: float64
- name: transcript
dtype: string
- name: audio_duration
dtype: float64
- name: speaker_id
dtype: string
- name: chapter_id
dtype: string
- name: file
dtype: string
- name: id
dtype: string
splits:
- name: dev
num_bytes: 157804903.144
num_examples: 2408
- name: test
num_bytes: 158526899.32
num_examples: 2385
- name: train
num_bytes: 14562584188
num_examples: 220701
- name: 9_hours
num_bytes: 142473624.48
num_examples: 2110
- name: 1_hours
num_bytes: 15702048
num_examples: 233
download_size: 14971394533
dataset_size: 15037091662.944
configs:
- config_name: dutch
data_files:
- split: dev
path: dutch/dev-*
- split: test
path: dutch/test-*
- split: train
path: dutch/train-*
- split: 9_hours
path: dutch/9_hours-*
- split: 1_hours
path: dutch/1_hours-*
- config_name: french
data_files:
- split: dev
path: french/dev-*
- split: test
path: french/test-*
- split: train
path: french/train-*
- split: 9_hours
path: french/9_hours-*
- split: 1_hours
path: french/1_hours-*
- config_name: german
data_files:
- split: dev
path: german/dev-*
- split: test
path: german/test-*
- split: train
path: german/train-*
- split: 9_hours
path: german/9_hours-*
- split: 1_hours
path: german/1_hours-*
- config_name: italian
data_files:
- split: dev
path: italian/dev-*
- split: test
path: italian/test-*
- split: train
path: italian/train-*
- split: 9_hours
path: italian/9_hours-*
- split: 1_hours
path: italian/1_hours-*
- config_name: polish
data_files:
- split: dev
path: polish/dev-*
- split: test
path: polish/test-*
- split: train
path: polish/train-*
- split: 9_hours
path: polish/9_hours-*
- split: 1_hours
path: polish/1_hours-*
- config_name: portuguese
data_files:
- split: dev
path: portuguese/dev-*
- split: test
path: portuguese/test-*
- split: train
path: portuguese/train-*
- split: 9_hours
path: portuguese/9_hours-*
- split: 1_hours
path: portuguese/1_hours-*
- config_name: spanish
data_files:
- split: dev
path: spanish/dev-*
- split: test
path: spanish/test-*
- split: train
path: spanish/train-*
- split: 9_hours
path: spanish/9_hours-*
- split: 1_hours
path: spanish/1_hours-*
---
# Dataset Card for MultiLingual LibriSpeech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [MultiLingual LibriSpeech ASR corpus](http://www.openslr.org/94)
- **Repository:** [Needs More Information]
- **Paper:** [MLS: A Large-Scale Multilingual Dataset for Speech Research](https://arxiv.org/abs/2012.03411)
- **Leaderboard:** [🤗 Autoevaluate Leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=facebook%2Fmultilingual_librispeech&only_verified=0&task=automatic-speech-recognition&config=-unspecified-&split=-unspecified-&metric=wer)
### Dataset Summary
This is a streamable version of the Multilingual LibriSpeech (MLS) dataset.
The data archives were restructured from the original ones from [OpenSLR](http://www.openslr.org/94) to make it easier to stream.
MLS dataset is a large multilingual corpus suitable for speech research. The dataset is derived from read audiobooks from LibriVox and consists of
8 languages - English, German, Dutch, Spanish, French, Italian, Portuguese, Polish. It includes about 44.5K hours of English and a total of about 6K hours for other languages.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`, `speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard which can be found at https://paperswithcode.com/dataset/multilingual-librispeech and ranks models based on their WER.
- `text-to-speech`, `text-to-audio`: The dataset can also be used to train a model for Text-To-Speech (TTS).
### Languages
The dataset is derived from read audiobooks from LibriVox and consists of 8 languages - English, German, Dutch, Spanish, French, Italian, Portuguese, Polish
### How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the German config, simply specify the corresponding language config name (i.e., "german" for German):
```python
from datasets import load_dataset
mls = load_dataset("facebook/multilingual_librispeech", "german", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
mls = load_dataset("facebook/multilingual_librispeech", "german", split="train", streaming=True)
print(next(iter(mls)))
```
*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
Local:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
mls = load_dataset("facebook/multilingual_librispeech", "german", split="train")
batch_sampler = BatchSampler(RandomSampler(mls), batch_size=32, drop_last=False)
dataloader = DataLoader(mls, batch_sampler=batch_sampler)
```
Streaming:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
mls = load_dataset("facebook/multilingual_librispeech", "german", split="train", streaming=True)
dataloader = DataLoader(mls, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
### Example scripts
Train your own CTC or Seq2Seq Automatic Speech Recognition models on MultiLingual Librispeech with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided.
```
{'file': '10900_6473_000030.flac',
'audio': {'path': '10900_6473_000030.flac',
'array': array([-1.52587891e-04, 6.10351562e-05, 0.00000000e+00, ...,
4.27246094e-04, 5.49316406e-04, 4.57763672e-04]),
'sampling_rate': 16000},
'text': 'więc czego chcecie odemnie spytałem wysłuchawszy tego zadziwiającego opowiadania broń nas stary człowieku broń zakrzyknęli równocześnie obaj posłowie\n',
'speaker_id': 10900,
'chapter_id': 6473,
'id': '10900_6473_000030'}
```
### Data Fields
- file: A filename .flac format.
- audio: A dictionary containing the audio filename, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- id: unique id of the data sample.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
- chapter_id: id of the audiobook chapter which includes the transcription.
### Data Splits
| Number of samples | Train | Train.9h | Train.1h | Dev | Test |
| ----- | ------ | ----- | ---- | ---- | ---- |
| german | 469942 | 2194 | 241 | 3469 | 3394 |
| dutch | 374287 | 2153 | 234 | 3095 | 3075 |
| french | 258213 | 2167 | 241 | 2416 | 2426 |
| spanish | 220701 | 2110 | 233 | 2408 | 2385 |
| italian | 59623 | 2173 | 240 | 1248 | 1262 |
| portuguese | 37533 | 2116 | 236 | 826 | 871 |
| polish | 25043 | 2173 | 238 | 512 | 520 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Public Domain, Creative Commons Attribution 4.0 International Public License ([CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode))
### Citation Information
```
@article{Pratap2020MLSAL,
title={MLS: A Large-Scale Multilingual Dataset for Speech Research},
author={Vineel Pratap and Qiantong Xu and Anuroop Sriram and Gabriel Synnaeve and Ronan Collobert},
journal={ArXiv},
year={2020},
volume={abs/2012.03411}
}
```
### Data Statistics
| Duration (h) | Train | Dev | Test |
|--------------|-----------|-------|-------|
| English | 44,659.74 | 15.75 | 15.55 |
| German | 1,966.51 | 14.28 | 14.29 |
| Dutch | 1,554.24 | 12.76 | 12.76 |
| French | 1,076.58 | 10.07 | 10.07 |
| Spanish | 917.68 | 9.99 | 10 |
| Italian | 247.38 | 5.18 | 5.27 |
| Portuguese | 160.96 | 3.64 | 3.74 |
| Polish | 103.65 | 2.08 | 2.14 |
| # Speakers | Train | | Dev | | Test | |
|------------|-------|------|-----|----|------|----|
| Gender | M | F | M | F | M | F |
| English | 2742 | 2748 | 21 | 21 | 21 | 21 |
| German | 81 | 95 | 15 | 15 | 15 | 15 |
| Dutch | 9 | 31 | 3 | 3 | 3 | 3 |
| French | 62 | 80 | 9 | 9 | 9 | 9 |
| Spanish | 36 | 50 | 10 | 10 | 10 | 10 |
| Italian | 22 | 43 | 5 | 5 | 5 | 5 |
| Portuguese | 26 | 16 | 5 | 5 | 5 | 5 |
| Polish | 6 | 5 | 2 | 2 | 2 | 2 |
| # Hours / Gender | Dev | | Test | |
|------------------|------|------|------|------|
| Gender | M | F | M | F |
| English | 7.76 | 7.99 | 7.62 | 7.93 |
| German | 7.06 | 7.22 | 7 | 7.29 |
| Dutch | 6.44 | 6.32 | 6.72 | 6.04 |
| French | 5.13 | 4.94 | 5.04 | 5.02 |
| Spanish | 4.91 | 5.08 | 4.78 | 5.23 |
| Italian | 2.5 | 2.68 | 2.38 | 2.9 |
| Portuguese | 1.84 | 1.81 | 1.83 | 1.9 |
| Polish | 1.12 | 0.95 | 1.09 | 1.05 |
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) and [@polinaeterna](https://github.com/polinaeterna) for adding this dataset. |
turkish-nlp-suite/InstrucTurca | turkish-nlp-suite | 2024-08-12T10:13:04Z | 254 | 25 | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:question-answering",
"language:tr",
"license:cc-by-sa-4.0",
"size_categories:1M<n<10M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation",
"text2text-generation",
"question-answering",
"conversational"
] | 2024-07-28T10:53:18Z | 2 | ---
name: InstrucTurca
tagline: Diverse Instructional Content Dataset for Turkish
description: >
InstrucTurca is a rich Turkish instruction tuning dataset from
various fields. It includes tasks, code, poems, math, essays, medical texts,
and more. This diversity makes it ideal for various NLP tasks such as
summarization, question answering, generation, translation, and
classification. However the main usage is intended to instruction train
Turkish LLMs.
dataset_version: 1.0.0
authors:
- name: Duygu Altinok
email: [email protected]
license: cc-by-sa-4.0
task_categories:
- text-generation
- text2text-generation
- question-answering
- conversational
language:
- tr
pretty_name: InstructTurca
size_categories:
- 1M<n<10M
---

InstrucTurca v1.0.0 is a diverse synthetic instruction tuning dataset crafted for instruction-tuning Turkish LLMs. The data was compiled from various English datasets and sources, such as code instructions, poems, summarized texts, medical texts, and more.
## Dataset content
* BI55/MedText
* checkai/instruction-poems
* garage-bAInd/Open-Platypus
* Locutusque/ColumnedChatCombined
* nampdn-ai/tiny-codes
* Open-Orca/OpenOrca
* pubmed_qa
* TIGER-Lab/MathInstruct
* TokenBender/code_instructions_120k_alpaca_style
Each dataset instance is a pair of text representing user input and bot output.
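A minimal loading sketch with the `datasets` library is shown below (the split name `train` and the column names `input`/`output` are assumptions; check the dataset viewer for the exact schema):
```python
from datasets import load_dataset

# Load the instruction-tuning pairs; a single "train" split is assumed here.
ds = load_dataset("turkish-nlp-suite/InstrucTurca", split="train")

# Each record is a user-input / bot-output text pair (column names assumed).
example = ds[0]
print(example["input"])
print(example["output"])
```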
## Dataset creation
All the data was translated by Snowflake Arctic Instruct. We used a very basic prompt and asked Arctic to translate both the input and output text. After that, we made a pass over the dataset to eliminate translations that looked like hallucinations. More info about the creation process can be found at our [Medium article](https://medium.com/google-developer-experts/instructurca-an-open-source-instruction-tuning-dataset-for-turkish-18c37b0e99b9).
## Licence
The dataset is suitable for commercial use, thanks to [Snowflake Arctic's Apache 2.0 licence](https://github.com/Snowflake-Labs/snowflake-arctic/blob/main/LICENSE).
## Acknowledgements
Huge thanks go to the original dataset creators, and an even bigger thanks to the Snowflake team for creating a truly open-source LLM! Without any of them, this dataset wouldn't be possible.
## Citation
If you find this work helpful, feel free to cite us.
```
@misc{instructurca,
author={Duygu Altinok},
title={InstrucTurca: A Diverse Instructional Content Dataset for Turkish},
year={2024}
}
``` |
deepset/covid_qa_deepset | deepset | 2024-08-08T06:11:28Z | 294 | 8 | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"question-answering"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- closed-domain-qa
- extractive-qa
pretty_name: COVID-QA
dataset_info:
config_name: covid_qa_deepset
features:
- name: document_id
dtype: int32
- name: context
dtype: string
- name: question
dtype: string
- name: is_impossible
dtype: bool
- name: id
dtype: int32
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: train
num_bytes: 65151242
num_examples: 2019
download_size: 2274275
dataset_size: 65151242
configs:
- config_name: covid_qa_deepset
data_files:
- split: train
path: covid_qa_deepset/train-*
default: true
---
# Dataset Card for COVID-QA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/deepset-ai/COVID-QA
- **Paper:** https://openreview.net/forum?id=JENSKEEzsoU
- **Point of Contact:** [deepset AI](https://github.com/deepset-ai)
### Dataset Summary
COVID-QA is a Question Answering dataset consisting of 2,019 question/answer pairs annotated by volunteer biomedical experts on scientific articles related to COVID-19.
A total of 147 scientific articles from the CORD-19 dataset were annotated by 15 experts.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
**What do the instances that comprise the dataset represent?**
Each represents a question, a context (document passage from the CORD19 dataset) and an answer.
**How many instances are there in total?**
2019 instances
**What data does each instance consist of?**
Each instance is a question, a set of answers, and an id associated with each answer.
[More Information Needed]
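A minimal sketch of loading the dataset and inspecting one instance with the `datasets` library (the `covid_qa_deepset` configuration is the default, and only a `train` split is provided):
```python
from datasets import load_dataset

# 2,019 expert-annotated question/answer pairs over CORD-19 articles.
ds = load_dataset("deepset/covid_qa_deepset", split="train")

sample = ds[0]
print(sample["question"])
# Answers are given as text spans plus their character offsets in the context.
print(sample["answers"]["text"], sample["answers"]["answer_start"])
print(sample["context"][:200])
```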
### Data Fields
The data was annotated in SQuAD style fashion, where each row contains:
* **question**: Query question
* **context**: Context text to obtain the answer from
* **document_id** The document ID of the context text
* **answer**: Dictionary containing the answer string and the start index
### Data Splits
**data/COVID-QA.json**: 2,019 question/answer pairs annotated by volunteer biomedical experts on scientific articles related to COVID-19.
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The initial data comes from 147 scientific articles from the CORD-19 dataset. Questions and answers were then
annotated afterwards.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
While annotators were volunteers, they were required to have at least a Master’s degree in biomedical sciences.
The annotation team was led by a medical doctor (G.A.R.) who vetted the volunteers’ credentials and
manually verified each question/answer pair produced. We used an existing, web-based annotation tool that had been
created by deepset and is available at their Neural Search framework [haystack](https://github.com/deepset-ai/haystack).
#### Who are the annotators?
The annotators are 15 volunteer biomedical experts on scientific articles related to COVID-19.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
The dataset aims to help build question answering models serving clinical and scientific researchers, public health authorities, and frontline workers.
These QA systems can help them find answers and patterns in research papers by locating relevant answers to common questions from scientific articles.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
The listed authors on the homepage are maintaining/supporting the dataset.
### Dataset Curators
[More Information Needed]
### Licensing Information
The COVID-QA dataset is licensed under the [Apache License 2.0](https://github.com/deepset-ai/COVID-QA/blob/master/LICENSE)
### Citation Information
```
@inproceedings{moller2020covid,
title={COVID-QA: A Question Answering Dataset for COVID-19},
author={M{\"o}ller, Timo and Reina, Anthony and Jayakumar, Raghavan and Pietsch, Malte},
booktitle={Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020},
year={2020}
}
```
### Contributions
Thanks to [@olinguyen](https://github.com/olinguyen) for adding this dataset. |
sonos-nlu-benchmark/snips_built_in_intents | sonos-nlu-benchmark | 2024-08-08T06:03:47Z | 515 | 11 | [
"task_categories:text-classification",
"task_ids:intent-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1805.10190",
"region:us"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- intent-classification
paperswithcode_id: snips
pretty_name: SNIPS Natural Language Understanding benchmark
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': ComparePlaces
'1': RequestRide
'2': GetWeather
'3': SearchPlace
'4': GetPlaceDetails
'5': ShareCurrentLocation
'6': GetTrafficInformation
'7': BookRestaurant
'8': GetDirections
'9': ShareETA
splits:
- name: train
num_bytes: 19427
num_examples: 328
download_size: 11158
dataset_size: 19427
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
train-eval-index:
- config: default
task: text-classification
task_id: multi_class_classification
train_split: train
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for Snips Built In Intents
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/sonos/nlu-benchmark/tree/master/2016-12-built-in-intents
- **Repository:** https://github.com/sonos/nlu-benchmark/tree/master/2016-12-built-in-intents
- **Paper:** https://arxiv.org/abs/1805.10190
- **Point of Contact:** The Snips team joined Sonos in November 2019. These open datasets remain available and their access is now managed by the Sonos Voice Experience Team. Please email [email protected] with any questions.
### Dataset Summary
Snips' built-in intents dataset was initially used to compare different voice assistants and was released as a public dataset hosted at
https://github.com/sonos/nlu-benchmark in the folder 2016-12-built-in-intents. The dataset contains 328 utterances over 10 intent classes.
A related Medium post is https://medium.com/snips-ai/benchmarking-natural-language-understanding-systems-d35be6ce568d.
### Supported Tasks and Leaderboards
There are no related shared tasks that we are aware of.
### Languages
English
## Dataset Structure
### Data Instances
The dataset contains 328 utterances over 10 intent classes. Each sample looks like:
`{'label': 8, 'text': 'Transit directions to Barcelona Pizza.'}`
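A minimal sketch of loading the data and mapping the integer label back to its intent name with the `datasets` library:
```python
from datasets import load_dataset

ds = load_dataset("sonos-nlu-benchmark/snips_built_in_intents", split="train")

sample = ds[0]
# The label column is a ClassLabel, so ids can be converted back to names,
# e.g. 8 -> "GetDirections".
intent = ds.features["label"].int2str(sample["label"])
print(sample["text"], "->", intent)
```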
### Data Fields
- `text`: The text utterance expressing some user intent.
- `label`: The intent label of the piece of text utterance.
### Data Splits
The source data is not split.
## Dataset Creation
### Curation Rationale
The dataset was originally created to compare the performance of a number of voice assistants. However, the labelled utterances are useful
for developing and benchmarking text chatbots as well.
### Source Data
#### Initial Data Collection and Normalization
It is not clear how the data was collected. From the Medium post: `The benchmark relies on a set of 328 queries built by the business team
at Snips, and kept secret from data scientists and engineers throughout the development of the solution.`
#### Who are the source language producers?
Originally prepared by snips.ai. The Snips team joined Sonos in November 2019. These open datasets remain available and their
access is now managed by the Sonos Voice Experience Team. Please email [email protected] with any questions.
### Annotations
#### Annotation process
It is not clear how the data was collected. From the Medium post: `The benchmark relies on a set of 328 queries built by the business team
at Snips, and kept secret from data scientists and engineers throughout the development of the solution.`
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Originally prepared by snips.ai. The Snips team joined Sonos in November 2019. These open datasets remain available and their
access is now managed by the Sonos Voice Experience Team. Please email [email protected] with any questions.
### Licensing Information
The source data is licensed under Creative Commons Zero v1.0 Universal.
### Citation Information
Any publication based on these datasets must include a full citation to the following paper in which the results were published by the Snips Team:
Coucke A. et al., "Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces." CoRR 2018,
https://arxiv.org/abs/1805.10190
### Contributions
Thanks to [@bduvenhage](https://github.com/bduvenhage) for adding this dataset. |
KorQuAD/squad_kor_v1 | KorQuAD | 2024-08-08T06:03:11Z | 1,166 | 28 | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:ko",
"license:cc-by-nd-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1909.07005",
"region:us"
] | [
"question-answering"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- ko
license:
- cc-by-nd-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: korquad
pretty_name: The Korean Question Answering Dataset
dataset_info:
config_name: squad_kor_v1
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: train
num_bytes: 83380141
num_examples: 60407
- name: validation
num_bytes: 8261701
num_examples: 5774
download_size: 12713412
dataset_size: 91641842
configs:
- config_name: squad_kor_v1
data_files:
- split: train
path: squad_kor_v1/train-*
- split: validation
path: squad_kor_v1/validation-*
default: true
---
# Dataset Card for KorQuAD v1.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://korquad.github.io/KorQuad%201.0/
- **Repository:** https://github.com/korquad/korquad.github.io/tree/master/dataset
- **Paper:** https://arxiv.org/abs/1909.07005
### Dataset Summary
KorQuAD 1.0 is a large-scale question-and-answer dataset constructed for Korean machine reading comprehension. The authors investigated the dataset to understand the distribution of answers and the types of reasoning required to answer the questions. The dataset follows the data-generation process of SQuAD v1.0 to meet the same standard.
### Supported Tasks and Leaderboards
`question-answering`
### Languages
Korean
## Dataset Structure
Follows the standard SQuAD format.
### Data Instances
An example from the data set looks as follows:
```
{'answers': {'answer_start': [54], 'text': ['교향곡']},
'context': '1839년 바그너는 괴테의 파우스트을 처음 읽고 그 내용에 마음이 끌려 이를 소재로 해서 하나의 교향곡을 쓰려는 뜻을 갖는다. 이 시기 바그너는 1838년에 빛 독촉으로 산전수전을 다 걲은 상황이라 좌절과 실망에 가득했으며 메피스토펠레스를 만나는 파우스트의 심경에 공감했다고 한다. 또한 파리에서 아브네크의 지휘로 파리 음악원 관현악단이 연주하는 베토벤의 교향곡 9번을 듣고 깊은 감명을 받았는데, 이것이 이듬해 1월에 파우스트의 서곡으로 쓰여진 이 작품에 조금이라도 영향을 끼쳤으리라는 것은 의심할 여지가 없다. 여기의 라단조 조성의 경우에도 그의 전기에 적혀 있는 것처럼 단순한 정신적 피로나 실의가 반영된 것이 아니라 베토벤의 합창교향곡 조성의 영향을 받은 것을 볼 수 있다. 그렇게 교향곡 작곡을 1839년부터 40년에 걸쳐 파리에서 착수했으나 1악장을 쓴 뒤에 중단했다. 또한 작품의 완성과 동시에 그는 이 서곡(1악장)을 파리 음악원의 연주회에서 연주할 파트보까지 준비하였으나, 실제로는 이루어지지는 않았다. 결국 초연은 4년 반이 지난 후에 드레스덴에서 연주되었고 재연도 이루어졌지만, 이후에 그대로 방치되고 말았다. 그 사이에 그는 리엔치와 방황하는 네덜란드인을 완성하고 탄호이저에도 착수하는 등 분주한 시간을 보냈는데, 그런 바쁜 생활이 이 곡을 잊게 한 것이 아닌가 하는 의견도 있다.',
'id': '6566495-0-0',
'question': '바그너는 괴테의 파우스트를 읽고 무엇을 쓰고자 했는가?',
'title': '파우스트_서곡'}
```
### Data Fields
```
{'id': Value(dtype='string', id=None),
'title': Value(dtype='string', id=None),
'context': Value(dtype='string', id=None),
'question': Value(dtype='string', id=None),
'answers': Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None)}
```
### Data Splits
- Train: 60407
- Validation: 5774
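A minimal loading sketch with the `datasets` library (split sizes as listed above):
```python
from datasets import load_dataset

ds = load_dataset("KorQuAD/squad_kor_v1")

print(ds["train"].num_rows)       # 60407
print(ds["validation"].num_rows)  # 5774

sample = ds["train"][0]
print(sample["question"])
print(sample["answers"]["text"][0])
```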
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
Wikipedia
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[CC BY-ND 2.0 KR](https://creativecommons.org/licenses/by-nd/2.0/kr/deed.en)
### Citation Information
```
@article{lim2019korquad1,
  title={KorQuAD 1.0: Korean QA dataset for machine reading comprehension},
  author={Lim, Seungyoung and Kim, Myungji and Lee, Jooyoul},
  journal={arXiv preprint arXiv:1909.07005},
  year={2019}
}
```
### Contributions
Thanks to [@cceyda](https://github.com/cceyda) for adding this dataset. |
s3prl/superb | s3prl | 2024-08-08T06:02:44Z | 1,280 | 30 | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"task_ids:keyword-spotting",
"task_ids:speaker-identification",
"task_ids:audio-intent-classification",
"task_ids:audio-emotion-recognition",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"source_datasets:original",
"source_datasets:extended|librispeech_asr",
"source_datasets:extended|other-librimix",
"source_datasets:extended|other-speech_commands",
"language:en",
"license:unknown",
"arxiv:2105.01051",
"region:us",
"query-by-example-spoken-term-detection",
"audio-slot-filling",
"speaker-diarization",
"automatic-speaker-verification"
] | [
"automatic-speech-recognition",
"audio-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
- extended|librispeech_asr
- extended|other-librimix
- extended|other-speech_commands
task_categories:
- automatic-speech-recognition
- audio-classification
task_ids:
- keyword-spotting
- speaker-identification
- audio-intent-classification
- audio-emotion-recognition
pretty_name: SUPERB
tags:
- query-by-example-spoken-term-detection
- audio-slot-filling
- speaker-diarization
- automatic-speaker-verification
dataset_info:
- config_name: asr
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train
num_bytes: 11852430
num_examples: 28539
- name: validation
num_bytes: 897213
num_examples: 2703
- name: test
num_bytes: 871234
num_examples: 2620
download_size: 7071899769
dataset_size: 13620877
- config_name: sd
features:
- name: record_id
dtype: string
- name: file
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: speakers
list:
- name: speaker_id
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
splits:
- name: train
num_bytes: 4622013
num_examples: 13901
- name: dev
num_bytes: 860472
num_examples: 3014
- name: test
num_bytes: 847803
num_examples: 3002
download_size: 7190370211
dataset_size: 6330288
- config_name: ks
features:
- name: file
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'yes'
'1': 'no'
'2': up
'3': down
'4': left
'5': right
'6': 'on'
'7': 'off'
'8': stop
'9': go
'10': _silence_
'11': _unknown_
splits:
- name: train
num_bytes: 8467781
num_examples: 51094
- name: validation
num_bytes: 1126476
num_examples: 6798
- name: test
num_bytes: 510619
num_examples: 3081
download_size: 1560367713
dataset_size: 10104876
- config_name: ic
features:
- name: file
dtype: string
- name: speaker_id
dtype: string
- name: text
dtype: string
- name: action
dtype:
class_label:
names:
'0': activate
'1': bring
'2': change language
'3': deactivate
'4': decrease
'5': increase
- name: object
dtype:
class_label:
names:
'0': Chinese
'1': English
'2': German
'3': Korean
'4': heat
'5': juice
'6': lamp
'7': lights
'8': music
'9': newspaper
'10': none
'11': shoes
'12': socks
'13': volume
- name: location
dtype:
class_label:
names:
'0': bedroom
'1': kitchen
'2': none
'3': washroom
splits:
- name: train
num_bytes: 7071466
num_examples: 23132
- name: validation
num_bytes: 953622
num_examples: 3118
- name: test
num_bytes: 1158347
num_examples: 3793
download_size: 1544093324
dataset_size: 9183435
- config_name: si
features:
- name: file
dtype: string
- name: label
dtype:
class_label:
names:
'0': id10001
'1': id10002
'2': id10003
'3': id10004
'4': id10005
'5': id10006
'6': id10007
'7': id10008
'8': id10009
'9': id10010
'10': id10011
'11': id10012
'12': id10013
'13': id10014
'14': id10015
'15': id10016
'16': id10017
'17': id10018
'18': id10019
'19': id10020
'20': id10021
'21': id10022
'22': id10023
'23': id10024
'24': id10025
'25': id10026
'26': id10027
'27': id10028
'28': id10029
'29': id10030
'30': id10031
'31': id10032
'32': id10033
'33': id10034
'34': id10035
'35': id10036
'36': id10037
'37': id10038
'38': id10039
'39': id10040
'40': id10041
'41': id10042
'42': id10043
'43': id10044
'44': id10045
'45': id10046
'46': id10047
'47': id10048
'48': id10049
'49': id10050
'50': id10051
'51': id10052
'52': id10053
'53': id10054
'54': id10055
'55': id10056
'56': id10057
'57': id10058
'58': id10059
'59': id10060
'60': id10061
'61': id10062
'62': id10063
'63': id10064
'64': id10065
'65': id10066
'66': id10067
'67': id10068
'68': id10069
'69': id10070
'70': id10071
'71': id10072
'72': id10073
'73': id10074
'74': id10075
'75': id10076
'76': id10077
'77': id10078
'78': id10079
'79': id10080
'80': id10081
'81': id10082
'82': id10083
'83': id10084
'84': id10085
'85': id10086
'86': id10087
'87': id10088
'88': id10089
'89': id10090
'90': id10091
'91': id10092
'92': id10093
'93': id10094
'94': id10095
'95': id10096
'96': id10097
'97': id10098
'98': id10099
'99': id10100
'100': id10101
'101': id10102
'102': id10103
'103': id10104
'104': id10105
'105': id10106
'106': id10107
'107': id10108
'108': id10109
'109': id10110
'110': id10111
'111': id10112
'112': id10113
'113': id10114
'114': id10115
'115': id10116
'116': id10117
'117': id10118
'118': id10119
'119': id10120
'120': id10121
'121': id10122
'122': id10123
'123': id10124
'124': id10125
'125': id10126
'126': id10127
'127': id10128
'128': id10129
'129': id10130
'130': id10131
'131': id10132
'132': id10133
'133': id10134
'134': id10135
'135': id10136
'136': id10137
'137': id10138
'138': id10139
'139': id10140
'140': id10141
'141': id10142
'142': id10143
'143': id10144
'144': id10145
'145': id10146
'146': id10147
'147': id10148
'148': id10149
'149': id10150
'150': id10151
'151': id10152
'152': id10153
'153': id10154
'154': id10155
'155': id10156
'156': id10157
'157': id10158
'158': id10159
'159': id10160
'160': id10161
'161': id10162
'162': id10163
'163': id10164
'164': id10165
'165': id10166
'166': id10167
'167': id10168
'168': id10169
'169': id10170
'170': id10171
'171': id10172
'172': id10173
'173': id10174
'174': id10175
'175': id10176
'176': id10177
'177': id10178
'178': id10179
'179': id10180
'180': id10181
'181': id10182
'182': id10183
'183': id10184
'184': id10185
'185': id10186
'186': id10187
'187': id10188
'188': id10189
'189': id10190
'190': id10191
'191': id10192
'192': id10193
'193': id10194
'194': id10195
'195': id10196
'196': id10197
'197': id10198
'198': id10199
'199': id10200
'200': id10201
'201': id10202
'202': id10203
'203': id10204
'204': id10205
'205': id10206
'206': id10207
'207': id10208
'208': id10209
'209': id10210
'210': id10211
'211': id10212
'212': id10213
'213': id10214
'214': id10215
'215': id10216
'216': id10217
'217': id10218
'218': id10219
'219': id10220
'220': id10221
'221': id10222
'222': id10223
'223': id10224
'224': id10225
'225': id10226
'226': id10227
'227': id10228
'228': id10229
'229': id10230
'230': id10231
'231': id10232
'232': id10233
'233': id10234
'234': id10235
'235': id10236
'236': id10237
'237': id10238
'238': id10239
'239': id10240
'240': id10241
'241': id10242
'242': id10243
'243': id10244
'244': id10245
'245': id10246
'246': id10247
'247': id10248
'248': id10249
'249': id10250
'250': id10251
'251': id10252
'252': id10253
'253': id10254
'254': id10255
'255': id10256
'256': id10257
'257': id10258
'258': id10259
'259': id10260
'260': id10261
'261': id10262
'262': id10263
'263': id10264
'264': id10265
'265': id10266
'266': id10267
'267': id10268
'268': id10269
'269': id10270
'270': id10271
'271': id10272
'272': id10273
'273': id10274
'274': id10275
'275': id10276
'276': id10277
'277': id10278
'278': id10279
'279': id10280
'280': id10281
'281': id10282
'282': id10283
'283': id10284
'284': id10285
'285': id10286
'286': id10287
'287': id10288
'288': id10289
'289': id10290
'290': id10291
'291': id10292
'292': id10293
'293': id10294
'294': id10295
'295': id10296
'296': id10297
'297': id10298
'298': id10299
'299': id10300
'300': id10301
'301': id10302
'302': id10303
'303': id10304
'304': id10305
'305': id10306
'306': id10307
'307': id10308
'308': id10309
'309': id10310
'310': id10311
'311': id10312
'312': id10313
'313': id10314
'314': id10315
'315': id10316
'316': id10317
'317': id10318
'318': id10319
'319': id10320
'320': id10321
'321': id10322
'322': id10323
'323': id10324
'324': id10325
'325': id10326
'326': id10327
'327': id10328
'328': id10329
'329': id10330
'330': id10331
'331': id10332
'332': id10333
'333': id10334
'334': id10335
'335': id10336
'336': id10337
'337': id10338
'338': id10339
'339': id10340
'340': id10341
'341': id10342
'342': id10343
'343': id10344
'344': id10345
'345': id10346
'346': id10347
'347': id10348
'348': id10349
'349': id10350
'350': id10351
'351': id10352
'352': id10353
'353': id10354
'354': id10355
'355': id10356
'356': id10357
'357': id10358
'358': id10359
'359': id10360
'360': id10361
'361': id10362
'362': id10363
'363': id10364
'364': id10365
'365': id10366
'366': id10367
'367': id10368
'368': id10369
'369': id10370
'370': id10371
'371': id10372
'372': id10373
'373': id10374
'374': id10375
'375': id10376
'376': id10377
'377': id10378
'378': id10379
'379': id10380
'380': id10381
'381': id10382
'382': id10383
'383': id10384
'384': id10385
'385': id10386
'386': id10387
'387': id10388
'388': id10389
'389': id10390
'390': id10391
'391': id10392
'392': id10393
'393': id10394
'394': id10395
'395': id10396
'396': id10397
'397': id10398
'398': id10399
'399': id10400
'400': id10401
'401': id10402
'402': id10403
'403': id10404
'404': id10405
'405': id10406
'406': id10407
'407': id10408
'408': id10409
'409': id10410
'410': id10411
'411': id10412
'412': id10413
'413': id10414
'414': id10415
'415': id10416
'416': id10417
'417': id10418
'418': id10419
'419': id10420
'420': id10421
'421': id10422
'422': id10423
'423': id10424
'424': id10425
'425': id10426
'426': id10427
'427': id10428
'428': id10429
'429': id10430
'430': id10431
'431': id10432
'432': id10433
'433': id10434
'434': id10435
'435': id10436
'436': id10437
'437': id10438
'438': id10439
'439': id10440
'440': id10441
'441': id10442
'442': id10443
'443': id10444
'444': id10445
'445': id10446
'446': id10447
'447': id10448
'448': id10449
'449': id10450
'450': id10451
'451': id10452
'452': id10453
'453': id10454
'454': id10455
'455': id10456
'456': id10457
'457': id10458
'458': id10459
'459': id10460
'460': id10461
'461': id10462
'462': id10463
'463': id10464
'464': id10465
'465': id10466
'466': id10467
'467': id10468
'468': id10469
'469': id10470
'470': id10471
'471': id10472
'472': id10473
'473': id10474
'474': id10475
'475': id10476
'476': id10477
'477': id10478
'478': id10479
'479': id10480
'480': id10481
'481': id10482
'482': id10483
'483': id10484
'484': id10485
'485': id10486
'486': id10487
'487': id10488
'488': id10489
'489': id10490
'490': id10491
'491': id10492
'492': id10493
'493': id10494
'494': id10495
'495': id10496
'496': id10497
'497': id10498
'498': id10499
'499': id10500
'500': id10501
'501': id10502
'502': id10503
'503': id10504
'504': id10505
'505': id10506
'506': id10507
'507': id10508
'508': id10509
'509': id10510
'510': id10511
'511': id10512
'512': id10513
'513': id10514
'514': id10515
'515': id10516
'516': id10517
'517': id10518
'518': id10519
'519': id10520
'520': id10521
'521': id10522
'522': id10523
'523': id10524
'524': id10525
'525': id10526
'526': id10527
'527': id10528
'528': id10529
'529': id10530
'530': id10531
'531': id10532
'532': id10533
'533': id10534
'534': id10535
'535': id10536
'536': id10537
'537': id10538
'538': id10539
'539': id10540
'540': id10541
'541': id10542
'542': id10543
'543': id10544
'544': id10545
'545': id10546
'546': id10547
'547': id10548
'548': id10549
'549': id10550
'550': id10551
'551': id10552
'552': id10553
'553': id10554
'554': id10555
'555': id10556
'556': id10557
'557': id10558
'558': id10559
'559': id10560
'560': id10561
'561': id10562
'562': id10563
'563': id10564
'564': id10565
'565': id10566
'566': id10567
'567': id10568
'568': id10569
'569': id10570
'570': id10571
'571': id10572
'572': id10573
'573': id10574
'574': id10575
'575': id10576
'576': id10577
'577': id10578
'578': id10579
'579': id10580
'580': id10581
'581': id10582
'582': id10583
'583': id10584
'584': id10585
'585': id10586
'586': id10587
'587': id10588
'588': id10589
'589': id10590
'590': id10591
'591': id10592
'592': id10593
'593': id10594
'594': id10595
'595': id10596
'596': id10597
'597': id10598
'598': id10599
'599': id10600
'600': id10601
'601': id10602
'602': id10603
'603': id10604
'604': id10605
'605': id10606
'606': id10607
'607': id10608
'608': id10609
'609': id10610
'610': id10611
'611': id10612
'612': id10613
'613': id10614
'614': id10615
'615': id10616
'616': id10617
'617': id10618
'618': id10619
'619': id10620
'620': id10621
'621': id10622
'622': id10623
'623': id10624
'624': id10625
'625': id10626
'626': id10627
'627': id10628
'628': id10629
'629': id10630
'630': id10631
'631': id10632
'632': id10633
'633': id10634
'634': id10635
'635': id10636
'636': id10637
'637': id10638
'638': id10639
'639': id10640
'640': id10641
'641': id10642
'642': id10643
'643': id10644
'644': id10645
'645': id10646
'646': id10647
'647': id10648
'648': id10649
'649': id10650
'650': id10651
'651': id10652
'652': id10653
'653': id10654
'654': id10655
'655': id10656
'656': id10657
'657': id10658
'658': id10659
'659': id10660
'660': id10661
'661': id10662
'662': id10663
'663': id10664
'664': id10665
'665': id10666
'666': id10667
'667': id10668
'668': id10669
'669': id10670
'670': id10671
'671': id10672
'672': id10673
'673': id10674
'674': id10675
'675': id10676
'676': id10677
'677': id10678
'678': id10679
'679': id10680
'680': id10681
'681': id10682
'682': id10683
'683': id10684
'684': id10685
'685': id10686
'686': id10687
'687': id10688
'688': id10689
'689': id10690
'690': id10691
'691': id10692
'692': id10693
'693': id10694
'694': id10695
'695': id10696
'696': id10697
'697': id10698
'698': id10699
'699': id10700
'700': id10701
'701': id10702
'702': id10703
'703': id10704
'704': id10705
'705': id10706
'706': id10707
'707': id10708
'708': id10709
'709': id10710
'710': id10711
'711': id10712
'712': id10713
'713': id10714
'714': id10715
'715': id10716
'716': id10717
'717': id10718
'718': id10719
'719': id10720
'720': id10721
'721': id10722
'722': id10723
'723': id10724
'724': id10725
'725': id10726
'726': id10727
'727': id10728
'728': id10729
'729': id10730
'730': id10731
'731': id10732
'732': id10733
'733': id10734
'734': id10735
'735': id10736
'736': id10737
'737': id10738
'738': id10739
'739': id10740
'740': id10741
'741': id10742
'742': id10743
'743': id10744
'744': id10745
'745': id10746
'746': id10747
'747': id10748
'748': id10749
'749': id10750
'750': id10751
'751': id10752
'752': id10753
'753': id10754
'754': id10755
'755': id10756
'756': id10757
'757': id10758
'758': id10759
'759': id10760
'760': id10761
'761': id10762
'762': id10763
'763': id10764
'764': id10765
'765': id10766
'766': id10767
'767': id10768
'768': id10769
'769': id10770
'770': id10771
'771': id10772
'772': id10773
'773': id10774
'774': id10775
'775': id10776
'776': id10777
'777': id10778
'778': id10779
'779': id10780
'780': id10781
'781': id10782
'782': id10783
'783': id10784
'784': id10785
'785': id10786
'786': id10787
'787': id10788
'788': id10789
'789': id10790
'790': id10791
'791': id10792
'792': id10793
'793': id10794
'794': id10795
'795': id10796
'796': id10797
'797': id10798
'798': id10799
'799': id10800
'800': id10801
'801': id10802
'802': id10803
'803': id10804
'804': id10805
'805': id10806
'806': id10807
'807': id10808
'808': id10809
'809': id10810
'810': id10811
'811': id10812
'812': id10813
'813': id10814
'814': id10815
'815': id10816
'816': id10817
'817': id10818
'818': id10819
'819': id10820
'820': id10821
'821': id10822
'822': id10823
'823': id10824
'824': id10825
'825': id10826
'826': id10827
'827': id10828
'828': id10829
'829': id10830
'830': id10831
'831': id10832
'832': id10833
'833': id10834
'834': id10835
'835': id10836
'836': id10837
'837': id10838
'838': id10839
'839': id10840
'840': id10841
'841': id10842
'842': id10843
'843': id10844
'844': id10845
'845': id10846
'846': id10847
'847': id10848
'848': id10849
'849': id10850
'850': id10851
'851': id10852
'852': id10853
'853': id10854
'854': id10855
'855': id10856
'856': id10857
'857': id10858
'858': id10859
'859': id10860
'860': id10861
'861': id10862
'862': id10863
'863': id10864
'864': id10865
'865': id10866
'866': id10867
'867': id10868
'868': id10869
'869': id10870
'870': id10871
'871': id10872
'872': id10873
'873': id10874
'874': id10875
'875': id10876
'876': id10877
'877': id10878
'878': id10879
'879': id10880
'880': id10881
'881': id10882
'882': id10883
'883': id10884
'884': id10885
'885': id10886
'886': id10887
'887': id10888
'888': id10889
'889': id10890
'890': id10891
'891': id10892
'892': id10893
'893': id10894
'894': id10895
'895': id10896
'896': id10897
'897': id10898
'898': id10899
'899': id10900
'900': id10901
'901': id10902
'902': id10903
'903': id10904
'904': id10905
'905': id10906
'906': id10907
'907': id10908
'908': id10909
'909': id10910
'910': id10911
'911': id10912
'912': id10913
'913': id10914
'914': id10915
'915': id10916
'916': id10917
'917': id10918
'918': id10919
'919': id10920
'920': id10921
'921': id10922
'922': id10923
'923': id10924
'924': id10925
'925': id10926
'926': id10927
'927': id10928
'928': id10929
'929': id10930
'930': id10931
'931': id10932
'932': id10933
'933': id10934
'934': id10935
'935': id10936
'936': id10937
'937': id10938
'938': id10939
'939': id10940
'940': id10941
'941': id10942
'942': id10943
'943': id10944
'944': id10945
'945': id10946
'946': id10947
'947': id10948
'948': id10949
'949': id10950
'950': id10951
'951': id10952
'952': id10953
'953': id10954
'954': id10955
'955': id10956
'956': id10957
'957': id10958
'958': id10959
'959': id10960
'960': id10961
'961': id10962
'962': id10963
'963': id10964
'964': id10965
'965': id10966
'966': id10967
'967': id10968
'968': id10969
'969': id10970
'970': id10971
'971': id10972
'972': id10973
'973': id10974
'974': id10975
'975': id10976
'976': id10977
'977': id10978
'978': id10979
'979': id10980
'980': id10981
'981': id10982
'982': id10983
'983': id10984
'984': id10985
'985': id10986
'986': id10987
'987': id10988
'988': id10989
'989': id10990
'990': id10991
'991': id10992
'992': id10993
'993': id10994
'994': id10995
'995': id10996
'996': id10997
'997': id10998
'998': id10999
'999': id11000
'1000': id11001
'1001': id11002
'1002': id11003
'1003': id11004
'1004': id11005
'1005': id11006
'1006': id11007
'1007': id11008
'1008': id11009
'1009': id11010
'1010': id11011
'1011': id11012
'1012': id11013
'1013': id11014
'1014': id11015
'1015': id11016
'1016': id11017
'1017': id11018
'1018': id11019
'1019': id11020
'1020': id11021
'1021': id11022
'1022': id11023
'1023': id11024
'1024': id11025
'1025': id11026
'1026': id11027
'1027': id11028
'1028': id11029
'1029': id11030
'1030': id11031
'1031': id11032
'1032': id11033
'1033': id11034
'1034': id11035
'1035': id11036
'1036': id11037
'1037': id11038
'1038': id11039
'1039': id11040
'1040': id11041
'1041': id11042
'1042': id11043
'1043': id11044
'1044': id11045
'1045': id11046
'1046': id11047
'1047': id11048
'1048': id11049
'1049': id11050
'1050': id11051
'1051': id11052
'1052': id11053
'1053': id11054
'1054': id11055
'1055': id11056
'1056': id11057
'1057': id11058
'1058': id11059
'1059': id11060
'1060': id11061
'1061': id11062
'1062': id11063
'1063': id11064
'1064': id11065
'1065': id11066
'1066': id11067
'1067': id11068
'1068': id11069
'1069': id11070
'1070': id11071
'1071': id11072
'1072': id11073
'1073': id11074
'1074': id11075
'1075': id11076
'1076': id11077
'1077': id11078
'1078': id11079
'1079': id11080
'1080': id11081
'1081': id11082
'1082': id11083
'1083': id11084
'1084': id11085
'1085': id11086
'1086': id11087
'1087': id11088
'1088': id11089
'1089': id11090
'1090': id11091
'1091': id11092
'1092': id11093
'1093': id11094
'1094': id11095
'1095': id11096
'1096': id11097
'1097': id11098
'1098': id11099
'1099': id11100
'1100': id11101
'1101': id11102
'1102': id11103
'1103': id11104
'1104': id11105
'1105': id11106
'1106': id11107
'1107': id11108
'1108': id11109
'1109': id11110
'1110': id11111
'1111': id11112
'1112': id11113
'1113': id11114
'1114': id11115
'1115': id11116
'1116': id11117
'1117': id11118
'1118': id11119
'1119': id11120
'1120': id11121
'1121': id11122
'1122': id11123
'1123': id11124
'1124': id11125
'1125': id11126
'1126': id11127
'1127': id11128
'1128': id11129
'1129': id11130
'1130': id11131
'1131': id11132
'1132': id11133
'1133': id11134
'1134': id11135
'1135': id11136
'1136': id11137
'1137': id11138
'1138': id11139
'1139': id11140
'1140': id11141
'1141': id11142
'1142': id11143
'1143': id11144
'1144': id11145
'1145': id11146
'1146': id11147
'1147': id11148
'1148': id11149
'1149': id11150
'1150': id11151
'1151': id11152
'1152': id11153
'1153': id11154
'1154': id11155
'1155': id11156
'1156': id11157
'1157': id11158
'1158': id11159
'1159': id11160
'1160': id11161
'1161': id11162
'1162': id11163
'1163': id11164
'1164': id11165
'1165': id11166
'1166': id11167
'1167': id11168
'1168': id11169
'1169': id11170
'1170': id11171
'1171': id11172
'1172': id11173
'1173': id11174
'1174': id11175
'1175': id11176
'1176': id11177
'1177': id11178
'1178': id11179
'1179': id11180
'1180': id11181
'1181': id11182
'1182': id11183
'1183': id11184
'1184': id11185
'1185': id11186
'1186': id11187
'1187': id11188
'1188': id11189
'1189': id11190
'1190': id11191
'1191': id11192
'1192': id11193
'1193': id11194
'1194': id11195
'1195': id11196
'1196': id11197
'1197': id11198
'1198': id11199
'1199': id11200
'1200': id11201
'1201': id11202
'1202': id11203
'1203': id11204
'1204': id11205
'1205': id11206
'1206': id11207
'1207': id11208
'1208': id11209
'1209': id11210
'1210': id11211
'1211': id11212
'1212': id11213
'1213': id11214
'1214': id11215
'1215': id11216
'1216': id11217
'1217': id11218
'1218': id11219
'1219': id11220
'1220': id11221
'1221': id11222
'1222': id11223
'1223': id11224
'1224': id11225
'1225': id11226
'1226': id11227
'1227': id11228
'1228': id11229
'1229': id11230
'1230': id11231
'1231': id11232
'1232': id11233
'1233': id11234
'1234': id11235
'1235': id11236
'1236': id11237
'1237': id11238
'1238': id11239
'1239': id11240
'1240': id11241
'1241': id11242
'1242': id11243
'1243': id11244
'1244': id11245
'1245': id11246
'1246': id11247
'1247': id11248
'1248': id11249
'1249': id11250
'1250': id11251
splits:
- name: train
num_bytes: 12729268
num_examples: 138361
- name: validation
num_bytes: 635172
num_examples: 6904
- name: test
num_bytes: 759096
num_examples: 8251
download_size: 0
dataset_size: 14123536
---
# Dataset Card for SUPERB
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://superbbenchmark.org](http://superbbenchmark.org)
- **Repository:** [https://github.com/s3prl/s3prl](https://github.com/s3prl/s3prl)
- **Paper:** [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051)
- **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [Lewis Tunstall](mailto:[email protected]) and [Albert Villanova](mailto:[email protected])
### Dataset Summary
SUPERB is a leaderboard to benchmark the performance of a shared model across a wide range of speech processing tasks with minimal architecture changes and labeled data.
### Supported Tasks and Leaderboards
The SUPERB leaderboard can be found here https://superbbenchmark.org/leaderboard and consists of the following tasks:
#### pr
Phoneme Recognition (PR) transcribes an utterance into the smallest content units. This task includes alignment modeling to avoid potentially inaccurate forced alignment. [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) train-clean-100/dev-clean/test-clean subsets are adopted in SUPERB for training/validation/testing. Phoneme transcriptions are obtained from the LibriSpeech official g2p-model-5 and the conversion script in Kaldi librispeech s5 recipe. The evaluation metric is phone error rate (PER).
#### asr
Automatic Speech Recognition (ASR) transcribes utterances into words. While PR analyzes the improvement in modeling phonetics, ASR reflects the significance of the improvement in a real-world scenario. [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) train-clean-100/dev-clean/test-clean subsets are used for training/validation/testing. The evaluation metric is word error rate (WER).
#### ks
Keyword Spotting (KS) detects preregistered keywords by classifying utterances into a predefined set of words. The task is usually performed on-device for the fast response time. Thus, accuracy, model size, and inference time are all crucial. SUPERB uses the widely used [Speech Commands dataset v1.0](https://www.tensorflow.org/datasets/catalog/speech_commands) for the task. The dataset consists of ten classes of keywords, a class for silence, and an unknown class to include the false positive. The evaluation metric is accuracy (ACC)
##### Example of usage:
Use these auxiliary functions to:
- load the audio file into an audio data array
- sample from long `_silence_` audio clips
For other examples of handling long `_silence_` clips see the [S3PRL](https://github.com/s3prl/s3prl/blob/099ce807a6ffa6bf2482ceecfcaf83dea23da355/s3prl/downstream/speech_commands/dataset.py#L80)
or [TFDS](https://github.com/tensorflow/datasets/blob/6b8cfdb7c3c0a04e731caaa8660ce948d0a67b1e/tensorflow_datasets/audio/speech_commands.py#L143) implementations.
```python
def map_to_array(example):
import soundfile as sf
speech_array, sample_rate = sf.read(example["file"])
example["speech"] = speech_array
example["sample_rate"] = sample_rate
return example
def sample_noise(example):
# Use this function to extract random 1 sec slices of each _silence_ utterance,
# e.g. inside `torch.utils.data.Dataset.__getitem__()`
from random import randint
if example["label"] == "_silence_":
random_offset = randint(0, len(example["speech"]) - example["sample_rate"] - 1)
example["speech"] = example["speech"][random_offset : random_offset + example["sample_rate"]]
return example
```
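A minimal usage sketch combining the helpers above with the `datasets` library (depending on your `datasets` version, loading this script-based dataset may require `trust_remote_code=True`):
```python
from datasets import load_dataset

ks = load_dataset("s3prl/superb", "ks", split="train")
ks = ks.map(map_to_array)

# sample_noise() compares against the string "_silence_", so convert the
# ClassLabel id back to its name before calling it.
example = ks[0]
example["label"] = ks.features["label"].int2str(example["label"])
example = sample_noise(example)
print(len(example["speech"]), example["sample_rate"])
```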
#### qbe
Query by Example Spoken Term Detection (QbE) detects a spoken term (query) in an audio database (documents) by binary discriminating a given pair of query and document into a match or not. The English subset in [QUESST 2014 challenge](https://github.com/s3prl/s3prl/tree/master/downstream#qbe-query-by-example-spoken-term-detection) is adopted since we focus on investigating English as the first step. The evaluation metric is maximum term weighted value (MTWV) which balances misses and false alarms.
#### ic
Intent Classification (IC) classifies utterances into predefined classes to determine the intent of speakers. SUPERB uses the [Fluent Speech Commands dataset](https://github.com/s3prl/s3prl/tree/master/downstream#ic-intent-classification---fluent-speech-commands), where each utterance is tagged with three intent labels: action, object, and location. The evaluation metric is accuracy (ACC).
#### sf
Slot Filling (SF) predicts a sequence of semantic slot-types from an utterance, like a slot-type FromLocation for a spoken word Taipei, which is known as a slot-value. Both slot-types and slot-values are essential for an SLU system to function. The evaluation metrics thus include slot-type F1 score and slot-value CER. [Audio SNIPS](https://github.com/s3prl/s3prl/tree/master/downstream#sf-end-to-end-slot-filling) is adopted, which synthesized multi-speaker utterances for SNIPS. Following the standard split in SNIPS, US-accent speakers are further selected for training, and others are for validation/testing.
#### si
Speaker Identification (SI) classifies each utterance for its speaker identity as a multi-class classification, where speakers are in the same predefined set for both training and testing. The widely used [VoxCeleb1 dataset](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html) is adopted, and the evaluation metric is accuracy (ACC).
#### asv
Automatic Speaker Verification (ASV) verifies whether the speakers of a pair of utterances match as a binary classification, and speakers in the testing set may not appear in the training set. Thus, ASV is more challenging than SID. VoxCeleb1 is used without VoxCeleb2 training data and noise augmentation. The evaluation metric is equal error rate (EER).
#### sd
Speaker Diarization (SD) predicts *who is speaking when* for each timestamp, and multiple speakers can speak simultaneously. The model has to encode rich speaker characteristics for each frame and should be able to represent mixtures of signals. [LibriMix](https://github.com/s3prl/s3prl/tree/master/downstream#sd-speaker-diarization) is adopted where LibriSpeech train-clean-100/dev-clean/test-clean are used to generate mixtures for training/validation/testing. We focus on the two-speaker scenario as the first step. The time-coded speaker labels were generated using alignments from Kaldi LibriSpeech ASR model. The evaluation metric is diarization error rate (DER).
##### Example of usage
Use these auxiliary functions to:
- load the audio file into an audio data array
- generate the label array
```python
def load_audio_file(example, frame_shift=160):
import soundfile as sf
example["array"], example["sample_rate"] = sf.read(
example["file"], start=example["start"] * frame_shift, stop=example["end"] * frame_shift
)
return example
def generate_label(example, frame_shift=160, num_speakers=2, rate=16000):
import numpy as np
start = example["start"]
end = example["end"]
frame_num = end - start
speakers = sorted({speaker["speaker_id"] for speaker in example["speakers"]})
label = np.zeros((frame_num, num_speakers), dtype=np.int32)
for speaker in example["speakers"]:
speaker_index = speakers.index(speaker["speaker_id"])
start_frame = np.rint(speaker["start"] * rate / frame_shift).astype(int)
end_frame = np.rint(speaker["end"] * rate / frame_shift).astype(int)
rel_start = rel_end = None
if start <= start_frame < end:
rel_start = start_frame - start
if start < end_frame <= end:
rel_end = end_frame - start
if rel_start is not None or rel_end is not None:
label[rel_start:rel_end, speaker_index] = 1
example["label"] = label
return example
```
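A minimal usage sketch for the helpers above (the same loading caveat applies as for the other configurations):
```python
from datasets import load_dataset

sd = load_dataset("s3prl/superb", "sd", split="train")

# Decode one mixture and build its frame-level speaker-activity matrix.
example = load_audio_file(sd[0])
example = generate_label(example)
print(example["array"].shape)  # roughly (frames * 160,) samples at 16 kHz
print(example["label"].shape)  # (frames, 2) binary speaker activity
```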
#### er
Emotion Recognition (ER) predicts an emotion class for each utterance. The most widely used ER dataset [IEMOCAP](https://github.com/s3prl/s3prl/tree/master/downstream#er-emotion-recognition) is adopted, and we follow the conventional evaluation protocol: we drop the unbalanced emotion classes to leave the final four classes with a similar amount of data points and cross-validate on five folds of the standard splits. The evaluation metric is accuracy (ACC).
### Languages
The language data in SUPERB is in English (BCP-47 `en`)
## Dataset Structure
### Data Instances
#### pr
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### asr
An example from each split looks like:
```python
{'chapter_id': 1240,
'file': 'path/to/file.flac',
'audio': {'path': 'path/to/file.flac',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'id': '103-1240-0000',
'speaker_id': 103,
'text': 'CHAPTER ONE MISSUS RACHEL LYNDE IS SURPRISED MISSUS RACHEL LYNDE '
'LIVED JUST WHERE THE AVONLEA MAIN ROAD DIPPED DOWN INTO A LITTLE '
'HOLLOW FRINGED WITH ALDERS AND LADIES EARDROPS AND TRAVERSED BY A '
'BROOK'}
```
#### ks
An example from each split looks like:
```python
{
'file': '/path/yes/af7a8296_nohash_1.wav',
'audio': {'path': '/path/yes/af7a8296_nohash_1.wav',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'label': 0 # 'yes'
}
```
#### qbe
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### ic
```python
{
'file': "/path/wavs/speakers/2BqVo8kVB2Skwgyb/063aa8f0-4479-11e9-a9a5-5dbec3b8816a.wav",
'audio': {'path': '/path/wavs/speakers/2BqVo8kVB2Skwgyb/063aa8f0-4479-11e9-a9a5-5dbec3b8816a.wav',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'speaker_id': '2BqVo8kVB2Skwgyb',
'text': 'Turn the bedroom lights off',
'action': 3, # 'deactivate'
'object': 7, # 'lights'
'location': 0 # 'bedroom'
}
```
#### sf
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### si
```python
{
'file': '/path/wav/id10003/na8-QEFmj44/00003.wav',
'audio': {'path': '/path/wav/id10003/na8-QEFmj44/00003.wav',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'label': 2 # 'id10003'
}
```
#### asv
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sd
An example from each split looks like:
```python
{
'record_id': '1578-6379-0038_6415-111615-0009',
'file': 'path/to/file.wav',
'audio': {'path': 'path/to/file.wav',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'start': 0,
'end': 1590,
'speakers': [
{'speaker_id': '1578', 'start': 28, 'end': 657},
{'speaker_id': '6415', 'start': 28, 'end': 1576}
]
}
```
#### er
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
#### Note about the `audio` field
When accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
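A short illustration of the preferred access pattern (a generic `dataset` object with an `audio` column is assumed):
```python
# Preferred: index the row first, so only this one file is decoded and resampled.
sample = dataset[0]["audio"]
print(sample["sampling_rate"], sample["array"].shape)

# Avoid: dataset["audio"][0] would decode the entire audio column first.
```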
#### pr
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### asr
- `file` (`string`): Path to the WAV audio file.
- `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- `text` (`string`): The transcription of the audio file.
- `speaker_id` (`integer`): A unique ID of the speaker. The same speaker id can be found for multiple data samples.
- `chapter_id` (`integer`): ID of the audiobook chapter which includes the transcription.
- `id` (`string`): A unique ID of the data sample.
#### ks
- `file` (`string`): Path to the WAV audio file.
- `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- `label` (`ClassLabel`): Label of the spoken command. Possible values:
- `0: "yes", 1: "no", 2: "up", 3: "down", 4: "left", 5: "right", 6: "on", 7: "off", 8: "stop", 9: "go", 10: "_silence_", 11: "_unknown_"`
#### qbe
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### ic
- `file` (`string`): Path to the WAV audio file.
- `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- `speaker_id` (`string`): ID of the speaker.
- `text` (`string`): Transcription of the spoken command.
- `action` (`ClassLabel`): Label of the command's action. Possible values:
- `0: "activate", 1: "bring", 2: "change language", 3: "deactivate", 4: "decrease", 5: "increase"`
- `object` (`ClassLabel`): Label of the command's object. Possible values:
- `0: "Chinese", 1: "English", 2: "German", 3: "Korean", 4: "heat", 5: "juice", 6: "lamp", 7: "lights", 8: "music", 9: "newspaper", 10: "none", 11: "shoes", 12: "socks", 13: "volume"`
- `location` (`ClassLabel`): Label of the command's location. Possible values:
- `0: "bedroom", 1: "kitchen", 2: "none", 3: "washroom"`
#### sf
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### si
- `file` (`string`): Path to the WAV audio file.
- `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- `label` (`ClassLabel`): Label (ID) of the speaker. Possible values:
- `0: "id10001", 1: "id10002", 2: "id10003", ..., 1250: "id11251"`
#### asv
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sd
The data fields in all splits are:
- `record_id` (`string`): ID of the record.
- `file` (`string`): Path to the WAV audio file.
- `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- `start` (`integer`): Start frame of the audio.
- `end` (`integer`): End frame of the audio.
- `speakers` (`list` of `dict`): List of speakers in the audio. Each item contains the fields:
- `speaker_id` (`string`): ID of the speaker.
- `start` (`integer`): Frame when the speaker starts speaking.
- `end` (`integer`): Frame when the speaker stops speaking.
#### er
- `file` (`string`): Path to the WAV audio file.
- `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- `label` (`ClassLabel`): Label of the speech emotion. Possible values:
- `0: "neu", 1: "hap", 2: "ang", 3: "sad"`
### Data Splits
#### pr
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### asr
| | train | validation | test |
|-----|------:|-----------:|-----:|
| asr | 28539 | 2703 | 2620 |
#### ks
| | train | validation | test |
|----|------:|-----------:|-----:|
| ks | 51094 | 6798 | 3081 |
#### qbe
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### ic
| | train | validation | test |
|----|------:|-----------:|-----:|
| ic | 23132 | 3118 | 3793 |
#### sf
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### si
| | train | validation | test |
|----|-------:|-----------:|-----:|
| si | 138361 | 6904 | 8251 |
#### asv
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sd
The data is split into "train", "dev" and "test" sets, each containing the following number of examples:
| | train | dev | test |
|----|------:|-----:|-----:|
| sd | 13901 | 3014 | 3002 |
#### er
The data is split into 5 sets intended for 5-fold cross-validation:
| | session1 | session2 | session3 | session4 | session5 |
|----|---------:|---------:|---------:|---------:|---------:|
| er | 1085 | 1023 | 1151 | 1031 | 1241 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
#### pr and asr
The license for Librispeech is the Creative Commons Attribution 4.0 International license ([CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/)).
#### ks
The license for Speech Commands is [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/legalcode)
#### qbe
The license for QUESST 2014 is not known.
#### ic
The license for Fluent Speech Commands dataset is the [Fluent Speech Commands Public License](https://fluent.ai/wp-content/uploads/2021/04/Fluent_Speech_Commands_Public_License.pdf)
#### sf
The license for Audio SNIPS dataset is not known.
#### si and asv
The license for VoxCeleb1 dataset is the Creative Commons Attribution 4.0 International license ([CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/)).
#### sd
LibriMix is based on the LibriSpeech (see above) and Wham! noises datasets. The Wham! noises dataset is distributed under the Attribution-NonCommercial 4.0 International ([CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/)) license.
#### er
The IEMOCAP license is distributed under [its own license](https://sail.usc.edu/iemocap/Data_Release_Form_IEMOCAP.pdf).
### Citation Information
```
@article{DBLP:journals/corr/abs-2105-01051,
author = {Shu{-}Wen Yang and
Po{-}Han Chi and
Yung{-}Sung Chuang and
Cheng{-}I Jeff Lai and
Kushal Lakhotia and
Yist Y. Lin and
Andy T. Liu and
Jiatong Shi and
Xuankai Chang and
Guan{-}Ting Lin and
Tzu{-}Hsien Huang and
Wei{-}Cheng Tseng and
Ko{-}tik Lee and
Da{-}Rong Liu and
Zili Huang and
Shuyan Dong and
Shang{-}Wen Li and
Shinji Watanabe and
Abdelrahman Mohamed and
Hung{-}yi Lee},
title = {{SUPERB:} Speech processing Universal PERformance Benchmark},
journal = {CoRR},
volume = {abs/2105.01051},
year = {2021},
url = {https://arxiv.org/abs/2105.01051},
archivePrefix = {arXiv},
eprint = {2105.01051},
timestamp = {Thu, 01 Jul 2021 13:30:22 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-01051.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
Note that each SUPERB dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova) and [@anton-l](https://github.com/anton-l) for adding this dataset. |
fthbrmnby/turkish_product_reviews | fthbrmnby | 2024-08-08T05:57:51Z | 99 | 10 | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:tr",
"license:unknown",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- found
language_creators:
- found
language:
- tr
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: Turkish Product Reviews
dataset_info:
features:
- name: sentence
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 43369614
num_examples: 235165
download_size: 24354762
dataset_size: 43369614
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for Turkish Product Reviews
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [turkish-text-data](https://github.com/fthbrmnby/turkish-text-data)
- **Point of Contact:** [Fatih Barmanbay](https://github.com/fthbrmnby)
### Dataset Summary
This Turkish Product Reviews Dataset contains 235,165 product reviews collected online: 220,284 positive and 14,881 negative.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is based on Turkish.
## Dataset Structure
### Data Instances
**Example 1:**
**sentence:** beklentimin altında bir ürün kaliteli değil
**sentiment:** 0 (negative)
**Example 2:**
**sentence:** fiyat ve performans olarak gayet iyi
**sentiment:** 1 (positive)
### Data Fields
- **sentence** (string): Contains a Turkish product review
- **sentiment** (int): 0 (negative) or 1 (positive)
### Data Splits
The dataset ships as a single `train` split; no separate validation or test split is provided.
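Since only a `train` split is shipped, a held-out test set can be carved out locally, e.g. with `train_test_split`. This is a minimal sketch; the 90/10 ratio and seed are arbitrary choices:
```python
from datasets import load_dataset

ds = load_dataset("fthbrmnby/turkish_product_reviews", split="train")

# Create a local 90/10 train/test split with a fixed seed for reproducibility.
splits = ds.train_test_split(test_size=0.1, seed=42)
train_ds, test_ds = splits["train"], splits["test"]

print(len(train_ds), len(test_ds))
print(train_ds[0])  # {'sentence': '...', 'sentiment': 0 or 1}
```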
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was created by [Fatih Barmanbay](https://github.com/fthbrmnby).
### Licensing Information
The data is under the [CC-BY-SA-4.0 License](https://github.com/fthbrmnby/turkish-text-data/blob/master/LICENCE)
### Citation Information
No citation available for this dataset.
### Contributions
Thanks to [@basakbuluz](https://github.com/basakbuluz) for adding this dataset. |
ccdv/arxiv-classification | ccdv | 2024-08-08T05:50:57Z | 652 | 23 | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:topic-classification",
"language:en",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"long context"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
language: en
size_categories: 10K<n<100K
task_categories:
- text-classification
task_ids:
- multi-class-classification
- topic-classification
tags:
- long context
dataset_info:
- config_name: default
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': math.AC
'1': cs.CV
'2': cs.AI
'3': cs.SY
'4': math.GR
'5': cs.CE
'6': cs.PL
'7': cs.IT
'8': cs.DS
'9': cs.NE
'10': math.ST
splits:
- name: train
num_bytes: 1650691690
num_examples: 28388
- name: validation
num_bytes: 144469423
num_examples: 2500
- name: test
num_bytes: 141183324
num_examples: 2500
download_size: 1011043995
dataset_size: 1936344437
- config_name: no_ref
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': math.AC
'1': cs.CV
'2': cs.AI
'3': cs.SY
'4': math.GR
'5': cs.CE
'6': cs.PL
'7': cs.IT
'8': cs.DS
'9': cs.NE
'10': math.ST
splits:
- name: train
num_bytes: 1650587077
num_examples: 28388
- name: validation
num_bytes: 144460101
num_examples: 2500
- name: test
num_bytes: 141174323
num_examples: 2500
download_size: 1010911333
dataset_size: 1936221501
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
- config_name: no_ref
data_files:
- split: train
path: no_ref/train-*
- split: validation
path: no_ref/validation-*
- split: test
path: no_ref/test-*
---
**Arxiv Classification: a classification of Arxiv Papers (11 classes).**
This dataset is intended for long-context classification (all documents have > 4k tokens). \
Copied from "Long Document Classification From Local Word Glimpses via Recurrent Attention Learning"
```
@ARTICLE{8675939,
author={He, Jun and Wang, Liqun and Liu, Liu and Feng, Jiao and Wu, Hao},
journal={IEEE Access},
title={Long Document Classification From Local Word Glimpses via Recurrent Attention Learning},
year={2019},
volume={7},
number={},
pages={40707-40718},
doi={10.1109/ACCESS.2019.2907992}
}
```
* See: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8675939
* See: https://github.com/LiqunW/Long-document-dataset
It contains 11 slightly unbalanced classes and 33k arXiv papers divided into 3 splits: train (28k), val (2.5k), and test (2.5k).
2 configs:
* default
* no_ref, removes references to the class inside the document (eg: [cs.LG] -> [])
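A minimal loading sketch for the `no_ref` config with the `datasets` library (the dataset id and config name are taken from this card):
```python
from datasets import load_dataset

# Load the variant with in-document class references removed.
ds = load_dataset("ccdv/arxiv-classification", "no_ref")

label_names = ds["train"].features["label"].names
example = ds["train"][0]
print(label_names[example["label"]])   # e.g. "cs.CV"
print(example["text"][:200])           # long document (> 4k tokens)
```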
Compatible with [run_glue.py](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) script:
```
export MODEL_NAME=roberta-base
export MAX_SEQ_LENGTH=512
python run_glue.py \
--model_name_or_path $MODEL_NAME \
--dataset_name ccdv/arxiv-classification \
--do_train \
--do_eval \
--max_seq_length $MAX_SEQ_LENGTH \
--per_device_train_batch_size 8 \
--gradient_accumulation_steps 4 \
--learning_rate 2e-5 \
--num_train_epochs 1 \
--max_eval_samples 500 \
--output_dir tmp/arxiv
``` |
ccdv/arxiv-summarization | ccdv | 2024-08-08T05:49:50Z | 2,683 | 90 | [
"task_categories:summarization",
"task_categories:text-generation",
"multilinguality:monolingual",
"language:en",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"conditional-text-generation"
] | [
"summarization",
"text-generation"
] | 2022-03-02T23:29:22Z | null | ---
language:
- en
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
task_categories:
- summarization
- text-generation
task_ids: []
tags:
- conditional-text-generation
dataset_info:
- config_name: document
features:
- name: article
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 7118099012
num_examples: 203037
- name: validation
num_bytes: 216177092
num_examples: 6436
- name: test
num_bytes: 216560120
num_examples: 6440
download_size: 3614462676
dataset_size: 7550836224
- config_name: section
features:
- name: article
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 7201498856
num_examples: 203037
- name: validation
num_bytes: 218793173
num_examples: 6436
- name: test
num_bytes: 219196348
num_examples: 6440
download_size: 3649452807
dataset_size: 7639488377
configs:
- config_name: document
data_files:
- split: train
path: document/train-*
- split: validation
path: document/validation-*
- split: test
path: document/test-*
- config_name: section
data_files:
- split: train
path: section/train-*
- split: validation
path: section/validation-*
- split: test
path: section/test-*
default: true
train-eval-index:
- config: document
task: summarization
task_id: summarization
splits:
eval_split: test
col_mapping:
article: text
abstract: target
---
# Arxiv dataset for summarization
Dataset for summarization of long documents.\
Adapted from this [repo](https://github.com/armancohan/long-summarization).\
Note that the original data are pre-tokenized, so this dataset returns `" ".join(text)` and adds `"\n"` between paragraphs. \
This dataset is compatible with the [`run_summarization.py`](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization) script from Transformers if you add this line to the `summarization_name_mapping` variable:
```python
"ccdv/arxiv-summarization": ("article", "abstract")
```
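For a quick look at the data itself, here is a minimal sketch; the dataset id and config names are taken from this card:
```python
from datasets import load_dataset

# Two configs are available per the YAML above: "document" and "section".
ds = load_dataset("ccdv/arxiv-summarization", "document", split="validation")

example = ds[0]
print(len(example["article"].split()))   # whitespace token count of the body
print(example["abstract"][:200])         # target summary
```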
### Data Fields
- `id`: paper id
- `article`: a string containing the body of the paper
- `abstract`: a string containing the abstract of the paper
### Data Splits
This dataset has 3 splits: _train_, _validation_, and _test_. \
Token counts are whitespace-based.
| Dataset Split | Number of Instances | Avg. tokens (article / abstract) |
| ------------- | --------------------|:----------------------|
| Train | 203,037 | 6038 / 299 |
| Validation | 6,436 | 5894 / 172 |
| Test | 6,440 | 5905 / 174 |
# Cite original article
```
@inproceedings{cohan-etal-2018-discourse,
title = "A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents",
author = "Cohan, Arman and
Dernoncourt, Franck and
Kim, Doo Soon and
Bui, Trung and
Kim, Seokhwan and
Chang, Walter and
Goharian, Nazli",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2097",
doi = "10.18653/v1/N18-2097",
pages = "615--621",
abstract = "Neural abstractive summarization models have led to promising results in summarizing relatively short documents. We propose the first model for abstractive summarization of single, longer-form documents (e.g., research papers). Our approach consists of a new hierarchical encoder that models the discourse structure of a document, and an attentive discourse-aware decoder to generate the summary. Empirical results on two large-scale datasets of scientific papers show that our model significantly outperforms state-of-the-art models.",
}
```
|
microsoft/cats_vs_dogs | microsoft | 2024-08-08T05:35:11Z | 1,810 | 44 | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
paperswithcode_id: cats-vs-dogs
pretty_name: Cats Vs. Dogs
dataset_info:
features:
- name: image
dtype: image
- name: labels
dtype:
class_label:
names:
'0': cat
'1': dog
splits:
- name: train
num_bytes: 667071605.79
num_examples: 23410
download_size: 721642420
dataset_size: 667071605.79
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for Cats Vs. Dogs
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Cats vs Dogs Dataset](https://www.microsoft.com/en-us/download/details.aspx?id=54765)
- **Repository:**
- **Paper:** [Asirra: A CAPTCHA that Exploits Interest-Aligned Manual Image Categorization](https://www.microsoft.com/en-us/research/wp-content/uploads/2007/10/CCS2007.pdf)
- **Leaderboard:** [Dogs vs. Cats](https://www.kaggle.com/competitions/dogs-vs-cats)
- **Point of Contact:**
### Dataset Summary
A large set of images of cats and dogs. There are 1738 corrupted images that are dropped. This dataset is part of a now-closed Kaggle competition and represents a subset of the so-called Asirra dataset.
From the competition page:
> The Asirra data set
>
> Web services are often protected with a challenge that's supposed to be easy for people to solve, but difficult for computers. Such a challenge is often called a [CAPTCHA](http://www.captcha.net/) (Completely Automated Public Turing test to tell Computers and Humans Apart) or HIP (Human Interactive Proof). HIPs are used for many purposes, such as to reduce email and blog spam and prevent brute-force attacks on web site passwords.
>
> Asirra (Animal Species Image Recognition for Restricting Access) is a HIP that works by asking users to identify photographs of cats and dogs. This task is difficult for computers, but studies have shown that people can accomplish it quickly and accurately. Many even think it's fun! Here is an example of the Asirra interface:
>
> Asirra is unique because of its partnership with [Petfinder.com](https://www.petfinder.com/), the world's largest site devoted to finding homes for homeless pets. They've provided Microsoft Research with over three million images of cats and dogs, manually classified by people at thousands of animal shelters across the United States. Kaggle is fortunate to offer a subset of this data for fun and research.
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image as either containing a cat or a dog. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-cats-vs-dogs).
### Languages
English.
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375 at 0x29CEAD71780>,
'labels': 0
}
```
### Data Fields
The data instances have the following fields:
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `labels`: an `int` classification label.
Class Label Mappings:
```
{
"cat": 0,
"dog": 1,
}
```
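Following the access-pattern note above, a minimal loading sketch (the dataset id is taken from this card):
```python
from datasets import load_dataset

ds = load_dataset("microsoft/cats_vs_dogs", split="train")

# Index the sample before touching the "image" column so only one file is decoded.
sample = ds[0]
print(sample["image"].size)                              # PIL image size, e.g. (500, 375)
print(ds.features["labels"].int2str(sample["labels"]))   # "cat" or "dog"
```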
### Data Splits
| | train |
|---------------|------:|
| # of examples | 23410 |
## Dataset Creation
### Curation Rationale
This subset was built to test whether computer vision algorithms can beat the Asirra CAPTCHA:
From the competition page:
> Image recognition attacks
>
> While random guessing is the easiest form of attack, various forms of image recognition can allow an attacker to make guesses that are better than random. There is enormous diversity in the photo database (a wide variety of backgrounds, angles, poses, lighting, etc.), making accurate automatic classification difficult. In an informal poll conducted many years ago, computer vision experts posited that a classifier with better than 60% accuracy would be difficult without a major advance in the state of the art. For reference, a 60% classifier improves the guessing probability of a 12-image HIP from 1/4096 to 1/459.
### Source Data
#### Initial Data Collection and Normalization
This dataset is a subset of the Asirra dataset.
From the competition page:
> Asirra is unique because of its partnership with Petfinder.com, the world's largest site devoted to finding homes for homeless pets. They've provided Microsoft Research with over three million images of cats and dogs, manually classified by people at thousands of animal shelters across the United States.
#### Who are the source language producers?
The users of [Petfinder.com](https://www.petfinder.com/).
### Annotations
#### Annotation process
The images were annotated by selecting a pet category on [Petfinder.com](https://www.petfinder.com/).
#### Who are the annotators?
The users of [Petfinder.com](https://www.petfinder.com/).
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
From the paper:
> Unlike many image-based CAPTCHAs which are abstract or subjective, Asirra’s challenges are concrete, inoffensive (cute, by some accounts), require no specialized or culturally biased knowledge, and have definite ground truth. This makes Asirra less frustrating for humans. Some beta-testers found it fun. The four-year-old child of one asked several times to “play the cat and dog game again.”
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```bibtex
@Inproceedings (Conference){asirra-a-captcha-that-exploits-interest-aligned-manual-image-categorization,
author = {Elson, Jeremy and Douceur, John (JD) and Howell, Jon and Saul, Jared},
title = {Asirra: A CAPTCHA that Exploits Interest-Aligned Manual Image Categorization},
booktitle = {Proceedings of 14th ACM Conference on Computer and Communications Security (CCS)},
year = {2007},
month = {October},
publisher = {Association for Computing Machinery, Inc.},
url = {https://www.microsoft.com/en-us/research/publication/asirra-a-captcha-that-exploits-interest-aligned-manual-image-categorization/},
edition = {Proceedings of 14th ACM Conference on Computer and Communications Security (CCS)},
}
```
### Contributions
Thanks to [@nateraw](https://github.com/nateraw) for adding this dataset. |
applied-ai-018/pretraining_v1-omega_books | applied-ai-018 | 2024-08-05T19:01:31Z | 37,194 | 1 | [
"size_categories:100M<n<1B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-07-31T08:53:54Z | null | ---
dataset_info:
config_name: CC-MAIN-2013-20
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
splits:
- name: train
num_bytes: 235476901236
num_examples: 51901183
download_size: 138494178972
dataset_size: 235476901236
configs:
- config_name: CC-MAIN-2013-20
data_files:
- split: train
path: CC-MAIN-2013-20/train-*
---
|
fixie-ai/librispeech_asr | fixie-ai | 2024-08-05T18:38:33Z | 12,668 | 3 | [
"language:en",
"size_categories:100K<n<1M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-07-19T02:34:30Z | null | ---
language:
- en
dataset_info:
- config_name: clean
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
- name: continuation
dtype: string
splits:
- name: test
num_bytes: 623948478.48
num_examples: 2620
- name: validation
num_bytes: 622190064.956
num_examples: 2703
- name: train.360
num_bytes: 41953890926.124
num_examples: 104014
- name: train.100
num_bytes: 11606313661.774
num_examples: 28539
download_size: 53886816833
dataset_size: 54806343131.334
- config_name: other
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
- name: continuation
dtype: string
splits:
- name: train.500
num_bytes: 57330687390.808
num_examples: 148688
- name: validation
num_bytes: 591511495.496
num_examples: 2864
- name: test
num_bytes: 616939198.113
num_examples: 2939
download_size: 57019309170
dataset_size: 58539138084.417
configs:
- config_name: clean
data_files:
- split: test
path: clean/test-*
- split: validation
path: clean/validation-*
- split: train.360
path: clean/train.360-*
- split: train.100
path: clean/train.100-*
- config_name: other
data_files:
- split: train.500
path: other/train.500-*
- split: validation
path: other/validation-*
- split: test
path: other/test-*
---
|
Vezora/Open-Critic-GPT | Vezora | 2024-07-28T21:00:25Z | 119 | 95 | [
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-07-27T21:50:27Z | null | ---
{vezora license}
---
<img src="https://huggingface.co/Vezora/Agent-7b-v1/resolve/main/Designer.png" width="200" height="200" />
# Open-Critic-GPT Dataset
## Overview
**Creator** [Nicolas Mejia-Petit](https://twitter.com/mejia_petit)
[My Kofi](https://ko-fi.com/nicolasmejiapetit)
The Open-Critic-GPT dataset is a synthetic dataset created to train models in both identifying and fixing bugs in code. The dataset is generated using a unique synthetic data pipeline which involves:
1. Prompting a local model with an existing code example.
2. Introducing bugs into the code, while also having the model, from a first-person perspective, find the bugs and explain them.
3. Manipulating the data by shifting around where the broken code and working code are, and removing the # bug// and # error// comments from the code.
This process allows the creation of two distinct datasets within Open-Critic-GPT:
- **Code-Preference-Pairs Dataset**: (SFT) Contains pairs of duplicate code examples, with the only difference being that the rejected example has the bugged code 'surgically transplanted' in, while the accepted example is left unchanged.
- **Open-Critic-GPT Dataset**: (DPO) Trains the model to find bugs and produce working code from broken code.
- Both datasets span a total of 127 different languages/structures (some examples may have been lost in conversion: generation started with 122k examples and ended with 55k due to a lack of structured output; a finetuned model may produce better-structured outputs).
- Both datasets contain ~55K examples each (both derived from the same parent examples).
## Dataset Structure
The dataset is organized as follows:
- **Code Examples**: Each code example consists of a given snippet of bugged code; the model is asked to find the bugs and fix them:
- **Bugged Code**: The version of the code with introduced bugs and no comments, to avoid the model learning from comments that say "Bug" or "Error".
- **Explanation**: Explanations are provided for each bugged code example, detailing the nature of the bug, what the bug does to the code, and tips to avoid it.
- **Fixed Code**: Lastly, the model writes the fully working code, with bugs fixed and comments added to the code.
## Usage
- Just give me credit :)
- Oh, and current employees of 'Open AI', and/or the company as a whole, are NOT permitted to use this dataset or any derivative work that may come from it for training. It is mentioned in the custom Apache license.
- Otherwise to everyone else, it falls under Apache 2.0 :).
### Training Models
When training models with the Open-Critic-GPT dataset, it is essential to use a data collator to ensure that the loss is not calculated on the bugged code. The data collator manages the dataset during training to provide the model with the correct inputs and outputs for loss calculation.
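One common way to implement this is to mask the prompt (bugged code) tokens with `-100` so the cross-entropy loss only covers the critique and fixed code. The sketch below is illustrative only: the tokenizer, field names, and formatting are assumptions, not the dataset's prescribed recipe.
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder model/tokenizer

def build_example(bugged_code: str, critique_and_fix: str, max_len: int = 1024):
    """Tokenize prompt + response and mask the prompt so loss is only on the response."""
    prompt_ids = tokenizer(bugged_code, add_special_tokens=False)["input_ids"]
    response_ids = tokenizer(critique_and_fix, add_special_tokens=False)["input_ids"]
    input_ids = (prompt_ids + response_ids)[:max_len]
    labels = ([-100] * len(prompt_ids) + response_ids)[:max_len]  # -100 is ignored by the loss
    return {"input_ids": torch.tensor(input_ids), "labels": torch.tensor(labels)}
```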
### Crediting dataset creators:
- This dataset was created using 'm-a-p/CodeFeedback-Filtered-Instruction', which contains data from several different sources.
- Here are the original authors of the original sources. Thank you to the following authors: Nick Roshdieh for Evol Instruct, Ajinkya Bawase for Python shareGPT 23k, Intelligent Software Engineering for Magicoder, and Multimodal Art Projection for the compiled and filtered m-a-p/CodeFeedback-Filtered-Instruction.
### Begging for money section.
- I created this dataset off a single 3090. Imagine what I could do with two.
- I can't continue to work on these open-source projects without receiving a sponsorship or some form of compensation; all the money I make from this will go directly back into helping the open-source community.
- If you can, it would mean the world to me; any donation helps me release this work for free. Thank you :)
- [Kofi](https://ko-fi.com/nicolasmejiapetit) |
google/docci | google | 2024-07-24T16:00:33Z | 176 | 72 | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"task_ids:image-captioning",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"language_creators:other",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"arxiv:2404.19753",
"region:us"
] | [
"text-to-image",
"image-to-text"
] | 2023-10-31T14:46:10Z | 2 | ---
annotations_creators:
- expert-generated
- crowdsourced
language:
- en
language_creators:
- other
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: DOCCI
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- text-to-image
- image-to-text
task_ids:
- image-captioning
---
# Dataset Card for DOCCI
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://google.github.io/docci
- **Paper:** [arXiv](https://arxiv.org/pdf/2404.19753)
- **Data Explorer:** [Check images and descriptions](https://google.github.io/docci/viz.html?c=&p=1)
- **Point of Contact:** [email protected]
- **Report an Error:** [Google Forms](https://forms.gle/v8sUoXWHvuqrWyfe9)
### Dataset Summary
DOCCI (Descriptions of Connected and Contrasting Images) is a collection of images paired with detailed descriptions. The descriptions explain the key elements of the images, as well as secondary information such as background, lighting, and settings. The images are specifically taken to help assess the precise visual properties of images. DOCCI also includes many related images that vary in having key differences from the others. All descriptions are manually annotated to ensure they adequately distinguish each image from its counterparts.
### Supported Tasks
Text-to-Image and Image-to-Text generation
### Languages
English
## Dataset Structure
### Data Instances
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1536x2048>,
'example_id': 'qual_dev_00000',
'description': 'An indoor angled down medium close-up front view of a real sized stuffed dog with white and black colored fur wearing a blue hard hat with a light on it. A couple inches to the right of the dog is a real sized black and white penguin that is also wearing a blue hard hat with a light on it. The dog is sitting, and is facing slightly towards the right while looking to its right with its mouth slightly open, showing its pink tongue. The dog and penguin are placed on a gray and white carpet, and placed against a white drawer that has a large gray cushion on top of it. Behind the gray cushion is a transparent window showing green trees on the outside.'
}
```
### Data Fields
Name | Explanation
--- | ---
`image` | PIL.JpegImagePlugin.JpegImageFile
`example_id` | The unique ID of an example follows this format: `<SPLIT_NAME>_<EXAMPLE_NUMBER>`.
`description` | Text description of the associated image.
### Data Splits
Dataset | Train | Test | Qual Dev | Qual Test
---| ---: | ---: | ---: | ---:
DOCCI | 9,647 | 5,000 | 100 | 100
DOCCI-AAR | 4,932 | 5,000 | -- | --
## Dataset Creation
### Curation Rationale
DOCCI is designed as an evaluation dataset for both text-to-image (T2I) and image-to-text (I2T) generation. Please see our paper for more details.
### Source Data
#### Initial Data Collection
All images were taken by one of the authors and their family.
### Annotations
#### Annotation process
All text descriptions were written by human annotators.
We do not rely on any automated process in our data annotation pipeline.
Please see Appendix A of [our paper](https://arxiv.org/pdf/2404.19753) for details about image curation.
### Personal and Sensitive Information
We manually reviewed all images for personally identifiable information (PII), removing some images and blurring detected faces, phone numbers, and URLs to protect privacy.
For text descriptions, we instructed annotators to exclude any PII, such as people's names, phone numbers, and URLs.
After the annotation phase, we employed automatic tools to scan for PII, ensuring the descriptions remained free of such information.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
### Licensing Information
CC BY 4.0
### Citation Information
```
@inproceedings{OnoeDocci2024,
author = {Yasumasa Onoe and Sunayana Rane and Zachary Berger and Yonatan Bitton and Jaemin Cho and Roopal Garg and
Alexander Ku and Zarana Parekh and Jordi Pont-Tuset and Garrett Tanzer and Su Wang and Jason Baldridge},
title = {{DOCCI: Descriptions of Connected and Contrasting Images}},
booktitle = {ECCV},
year = {2024}
}
``` |
mlfoundations/dclm-baseline-1.0 | mlfoundations | 2024-07-22T15:27:52Z | 1,151,865 | 218 | [
"license:cc-by-4.0",
"arxiv:2406.11794",
"region:us"
] | [] | 2024-06-17T18:57:13Z | null | ---
license: cc-by-4.0
dataset_info:
features:
- name: bff_contained_ngram_count_before_dedupe
dtype: int64
- name: language_id_whole_page_fasttext
struct:
- name: en
dtype: float64
- name: metadata
struct:
- name: Content-Length
dtype: string
- name: Content-Type
dtype: string
- name: WARC-Block-Digest
dtype: string
- name: WARC-Concurrent-To
dtype: string
- name: WARC-Date
dtype: timestamp[s]
- name: WARC-IP-Address
dtype: string
- name: WARC-Identified-Payload-Type
dtype: string
- name: WARC-Payload-Digest
dtype: string
- name: WARC-Record-ID
dtype: string
- name: WARC-Target-URI
dtype: string
- name: WARC-Type
dtype: string
- name: WARC-Warcinfo-ID
dtype: string
- name: WARC-Truncated
dtype: string
- name: previous_word_count
dtype: int64
- name: text
dtype: string
- name: url
dtype: string
- name: warcinfo
dtype: string
- name: fasttext_openhermes_reddit_eli5_vs_rw_v2_bigram_200k_train_prob
dtype: float64
---
## DCLM-baseline
DCLM-baseline is a 4T token / 3B document pretraining dataset that achieves strong performance on language model benchmarks.
Below are comparisions of model trained on DCLM-baseline with other models in the 7B regime.
| Model | Params | Tokens | Open dataset? | CORE | MMLU | EXTENDED |
|---------------|--------|--------|---------------|----------|----------|----------|
| **Open weights, closed datasets** | | | | | | |
| Llama2 | 7B | 2T | ✗ | 49.2 | 45.8 | 34.1 |
| DeepSeek | 7B | 2T | ✗ | 50.7 | 48.5 | 35.3 |
| Mistral-0.3 | 7B | ? | ✗ | 57.0 | 62.7 | 45.1 |
| QWEN-2 | 7B | ? | ✗ | 57.5 | **71.9** | 50.5 |
| Llama3 | 8B | 15T | ✗ | 57.6 | 66.2 | 46.3 |
| Gemma | 8B | 6T | ✗ | 57.8 | 64.3 | 44.6 |
| Phi-3 | 7B | ? | ✗ | **61.0** | 69.9 | **57.9** |
| **Open weights, open datasets** | | | | | | |
| Falcon | 7B | 1T | ✓ | 44.1 | 27.4 | 25.1 |
| Amber | 7B | 1.2T | ✓ | 39.8 | 27.9 | 22.3 |
| Crystal | 7B | 1.2T | ✓ | 48.0 | 48.2 | 33.2 |
| OLMo-1.7 | 7B | 2.1T | ✓ | 47.0 | 54.0 | 34.2 |
| MAP-Neo | 7B | 4.5T | ✓ | **50.2** | **57.1** | **40.4** |
| **Models we trained** | | | | | | |
| FineWeb edu | 7B | 0.14T | ✓ | 38.7 | 26.3 | 22.1 |
| FineWeb edu | 7B | 0.28T | ✓ | 41.9 | 37.3 | 24.5 |
| **DCLM-BASELINE** | 7B | 0.14T | ✓ | 44.1 | 38.3 | 25.0 |
| **DCLM-BASELINE** | 7B | 0.28T | ✓ | 48.9 | 50.8 | 31.8 |
| **DCLM-BASELINE** | 7B | 2.6T | ✓ | **57.1** | **63.7** | **45.4** |
## Dataset Details
### Dataset Description
- **Curated by:** The DCLM Team
- **Language(s) (NLP):** English
- **License:** CC-by-4.0
### Dataset Sources
- **Repository:** https://datacomp.ai/dclm
- **Paper:**: https://arxiv.org/abs/2406.11794
- **Construction Code**: https://github.com/mlfoundations/dclm
## Uses
### Direct Use
DCLM-Baseline is intended to be used as a research baseline for the DCLM benchmark. It demonstrates the importance of data curation in training performant language models.
### Out-of-Scope Use
DCLM-Baseline is not intended for training production-ready models or for specific domains such as code and math. It may not perform as well as domain-specific datasets for these tasks. Due to these limitations, the dataset is intended for research use only.
DCLM-Baseline is a subset of the DCLM-Pool, which is a corpus of 240 trillion tokens derived from Common Crawl. The dataset is in plain text format.
## Dataset Creation
### Curation Rationale
DCLM-Baseline was created to demonstrate the effectiveness of the DCLM testbed in developing high-quality training sets for language models. It serves as a proof of concept for the data curation strategies enabled by DCLM and is designed to be a research baseline for the benchmark.
### Source Data
#### Data Collection and Processing
DCLM-Baseline was created by applying a series of cleaning, filtering, and deduplication steps to the raw Common Crawl data (DCLM-Pool). The key steps include:
1. Heuristic cleaning and filtering (reproduction of RefinedWeb)
2. Deduplication using a Bloom filter
3. Model-based filtering using a fastText classifier trained on instruction-formatted data (OpenHermes 2.5 and r/ExplainLikeImFive)
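For intuition, step 3 amounts to scoring each document with a fastText classifier and keeping only documents above a threshold. The sketch below is illustrative only: the real classifier, label names, and threshold are defined in the DCLM construction code linked above, and the model path here is hypothetical.
```python
import fasttext

# Hypothetical local path; the actual DCLM classifier is produced by the construction code.
model = fasttext.load_model("quality_classifier.bin")

def keep_document(text: str, threshold: float = 0.5) -> bool:
    # fastText expects a single line of text; the label name and threshold are assumptions here.
    labels, probs = model.predict(text.replace("\n", " "))
    return labels[0] == "__label__positive" and probs[0] >= threshold

docs = ["An example web page about machine learning ...", "buy cheap pills now ..."]
filtered = [d for d in docs if keep_document(d)]
print(len(filtered))
```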
#### Who are the source data producers?
The source data is from Common Crawl, which is a repository of web crawl data.
### Personal and Sensitive Information
[More Information Needed]
## Bias, Risks, and Limitations
The dataset may contain biases present in the Common Crawl data. The dataset's performance on code and math tasks is limited compared to its performance on language understanding tasks. DCLM-Baseline is designed for research purposes only.
### Recommendations
Users should be aware of the potential biases and limitations of the dataset, especially when using it for specific domains like code and math. The dataset should only be used for research purposes in the context of the DCLM benchmark.
## Citation
```bibtex
@misc{li2024datacomplm,
title={DataComp-LM: In search of the next generation of training sets for language models},
author={Jeffrey Li and Alex Fang and Georgios Smyrnis and Maor Ivgi and Matt Jordan and Samir Gadre and Hritik Bansal and Etash Guha and Sedrick Keh and Kushal Arora and Saurabh Garg and Rui Xin and Niklas Muennighoff and Reinhard Heckel and Jean Mercat and Mayee Chen and Suchin Gururangan and Mitchell Wortsman and Alon Albalak and Yonatan Bitton and Marianna Nezhurina and Amro Abbas and Cheng-Yu Hsieh and Dhruba Ghosh and Josh Gardner and Maciej Kilian and Hanlin Zhang and Rulin Shao and Sarah Pratt and Sunny Sanyal and Gabriel Ilharco and Giannis Daras and Kalyani Marathe and Aaron Gokaslan and Jieyu Zhang and Khyathi Chandu and Thao Nguyen and Igor Vasiljevic and Sham Kakade and Shuran Song and Sujay Sanghavi and Fartash Faghri and Sewoong Oh and Luke Zettlemoyer and Kyle Lo and Alaaeldin El-Nouby and Hadi Pouransari and Alexander Toshev and Stephanie Wang and Dirk Groeneveld and Luca Soldaini and Pang Wei Koh and Jenia Jitsev and Thomas Kollar and Alexandros G. Dimakis and Yair Carmon and Achal Dave and Ludwig Schmidt and Vaishaal Shankar},
year={2024},
eprint={2406.11794},
archivePrefix={arXiv},
primaryClass={id='cs.LG' full_name='Machine Learning' is_active=True alt_name=None in_archive='cs' is_general=False description='Papers on all aspects of machine learning research (supervised, unsupervised, reinforcement learning, bandit problems, and so on) including also robustness, explanation, fairness, and methodology. cs.LG is also an appropriate primary category for applications of machine learning methods.'}
```
|
rethinklab/Bench2Drive-Full | rethinklab | 2024-07-22T06:46:56Z | 15,646 | 2 | [
"license:apache-2.0",
"region:us"
] | [] | 2024-05-13T05:56:17Z | null | ---
license: apache-2.0
---
|
mlfoundations/dclm-baseline-1.0-parquet | mlfoundations | 2024-07-19T17:35:58Z | 24,886 | 27 | [
"language:en",
"license:cc-by-4.0",
"size_categories:1B<n<10B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2406.11794",
"region:us"
] | [] | 2024-06-30T20:31:14Z | null | ---
language:
- en
license: cc-by-4.0
---
## DCLM-baseline
***Note: this is an identical copy of https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0, where all the files have been mapped to a parquet format.***
DCLM-baseline is a 4T token / 3B document pretraining dataset that achieves strong performance on language model benchmarks.
Below are comparisions of model trained on DCLM-baseline with other models in the 7B regime.
| Model | Params | Tokens | Open dataset? | CORE | MMLU | EXTENDED |
|---------------|--------|--------|---------------|----------|----------|----------|
| **Open weights, closed datasets** | | | | | | |
| Llama2 | 7B | 2T | ✗ | 49.2 | 45.8 | 34.1 |
| DeepSeek | 7B | 2T | ✗ | 50.7 | 48.5 | 35.3 |
| Mistral-0.3 | 7B | ? | ✗ | 57.0 | 62.7 | 45.1 |
| QWEN-2 | 7B | ? | ✗ | 57.5 | **71.9** | 50.5 |
| Llama3 | 8B | 15T | ✗ | 57.6 | 66.2 | 46.3 |
| Gemma | 8B | 6T | ✗ | 57.8 | 64.3 | 44.6 |
| Phi-3 | 7B | ? | ✗ | **61.0** | 69.9 | **57.9** |
| **Open weights, open datasets** | | | | | | |
| Falcon | 7B | 1T | ✓ | 44.1 | 27.4 | 25.1 |
| Amber | 7B | 1.2T | ✓ | 39.8 | 27.9 | 22.3 |
| Crystal | 7B | 1.2T | ✓ | 48.0 | 48.2 | 33.2 |
| OLMo-1.7 | 7B | 2.1T | ✓ | 47.0 | 54.0 | 34.2 |
| MAP-Neo | 7B | 4.5T | ✓ | **50.2** | **57.1** | **40.4** |
| **Models we trained** | | | | | | |
| FineWeb edu | 7B | 0.14T | ✓ | 38.7 | 26.3 | 22.1 |
| FineWeb edu | 7B | 0.28T | ✓ | 41.9 | 37.3 | 24.5 |
| **DCLM-BASELINE** | 7B | 0.14T | ✓ | 44.1 | 38.3 | 25.0 |
| **DCLM-BASELINE** | 7B | 0.28T | ✓ | 48.9 | 50.8 | 31.8 |
| **DCLM-BASELINE** | 7B | 2.6T | ✓ | **57.1** | **63.7** | **45.4** |
## Dataset Details
### Dataset Description
- **Curated by:** The DCLM Team
- **Language(s) (NLP):** English
- **License:** CC-by-4.0
### Dataset Sources
- **Repository:** https://datacomp.ai/dclm
- **Paper:**: https://arxiv.org/abs/2406.11794
- **Construction Code**: https://github.com/mlfoundations/dclm
## Uses
### Direct Use
DCLM-Baseline is intended to be used as a research baseline for the DCLM benchmark. It demonstrates the importance of data curation in training performant language models.
### Out-of-Scope Use
DCLM-Baseline is not intended for training production-ready models or for specific domains such as code and math. It may not perform as well as domain-specific datasets for these tasks. Due to these limitations, the dataset is intended for research use only.
DCLM-Baseline is a subset of the DCLM-Pool, which is a corpus of 240 trillion tokens derived from Common Crawl. The dataset is in plain text format.
## Dataset Creation
### Curation Rationale
DCLM-Baseline was created to demonstrate the effectiveness of the DCLM testbed in developing high-quality training sets for language models. It serves as a proof of concept for the data curation strategies enabled by DCLM and is designed to be a research baseline for the benchmark.
### Source Data
#### Data Collection and Processing
DCLM-Baseline was created by applying a series of cleaning, filtering, and deduplication steps to the raw Common Crawl data (DCLM-Pool). The key steps include:
1. Heuristic cleaning and filtering (reproduction of RefinedWeb)
2. Deduplication using a Bloom filter
3. Model-based filtering using a fastText classifier trained on instruction-formatted data (OpenHermes 2.5 and r/ExplainLikeImFive)
#### Who are the source data producers?
The source data is from Common Crawl, which is a repository of web crawl data.
### Personal and Sensitive Information
[More Information Needed]
## Bias, Risks, and Limitations
The dataset may contain biases present in the Common Crawl data. The dataset's performance on code and math tasks is limited compared to its performance on language understanding tasks. DCLM-Baseline is designed for research purposes only.
### Recommendations
Users should be aware of the potential biases and limitations of the dataset, especially when using it for specific domains like code and math. The dataset should only be used for research purposes in the context of the DCLM benchmark.
## Citation
```bibtex
@misc{li2024datacomplm,
title={DataComp-LM: In search of the next generation of training sets for language models},
author={Jeffrey Li and Alex Fang and Georgios Smyrnis and Maor Ivgi and Matt Jordan and Samir Gadre and Hritik Bansal and Etash Guha and Sedrick Keh and Kushal Arora and Saurabh Garg and Rui Xin and Niklas Muennighoff and Reinhard Heckel and Jean Mercat and Mayee Chen and Suchin Gururangan and Mitchell Wortsman and Alon Albalak and Yonatan Bitton and Marianna Nezhurina and Amro Abbas and Cheng-Yu Hsieh and Dhruba Ghosh and Josh Gardner and Maciej Kilian and Hanlin Zhang and Rulin Shao and Sarah Pratt and Sunny Sanyal and Gabriel Ilharco and Giannis Daras and Kalyani Marathe and Aaron Gokaslan and Jieyu Zhang and Khyathi Chandu and Thao Nguyen and Igor Vasiljevic and Sham Kakade and Shuran Song and Sujay Sanghavi and Fartash Faghri and Sewoong Oh and Luke Zettlemoyer and Kyle Lo and Alaaeldin El-Nouby and Hadi Pouransari and Alexander Toshev and Stephanie Wang and Dirk Groeneveld and Luca Soldaini and Pang Wei Koh and Jenia Jitsev and Thomas Kollar and Alexandros G. Dimakis and Yair Carmon and Achal Dave and Ludwig Schmidt and Vaishaal Shankar},
year={2024},
eprint={2406.11794},
archivePrefix={arXiv},
primaryClass={id='cs.LG' full_name='Machine Learning' is_active=True alt_name=None in_archive='cs' is_general=False description='Papers on all aspects of machine learning research (supervised, unsupervised, reinforcement learning, bandit problems, and so on) including also robustness, explanation, fairness, and methodology. cs.LG is also an appropriate primary category for applications of machine learning methods.'}
```
|
TheBritishLibrary/EThOS-PhD-metadata | TheBritishLibrary | 2024-07-19T16:28:25Z | 21 | 2 | [
"task_categories:text-classification",
"task_categories:fill-mask",
"task_ids:multi-label-classification",
"task_ids:masked-language-modeling",
"multilinguality:monolingual",
"language:en",
"region:us"
] | [
"text-classification",
"fill-mask"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators: []
language:
- en
language_creators: []
license: []
multilinguality:
- monolingual
pretty_name: EThOS PhD metadata
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-classification
- fill-mask
task_ids:
- multi-label-classification
- masked-language-modeling
---
# Dataset Card for EThOS PhD metadata
## Table of Contents
- [Dataset Card for EThOS PhD metadata](#dataset-card-for-ethos-phd-metadata)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Supervised tasks](#supervised-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**: https://bl.iro.bl.uk/concern/datasets/10cc13f9-797d-41f2-a7e2-d29f4306133e?locale=en
- **Repository:** https://doi.org/10.23636/rcm4-zk44
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The data in this collection comprises the bibliographic metadata for all UK doctoral theses listed in EThOS, the UK's national thesis service. We estimate the data covers around 98% of all PhDs ever awarded by UK Higher Education institutions, dating back to 1787. Thesis metadata from every PhD-awarding university in the UK is included. You can investigate and re-use this unique collection of UK universities' PhD thesis data to analyse trends in postgraduate research, make connections between researchers, apply large data analysis, improve citation of theses and many more applications.
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
#### Supervised tasks
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
[More Information Needed]
### Data Instances
An example data instance:
```python
{'Abstract': ' ',
'Author': 'Loizou, Panos A.',
'Author ISNI': 'https://isni.org/isni/0000000136122593',
'DOI': ' ',
'Date': datetime.datetime(1989, 1, 1, 0, 0),
'EThOS URL': 'https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.232781',
'Funder(s)': ' ',
'IR URL': ' ',
'Institution': 'University of Manchester',
'Institution ISNI': 'https://isni.org/isni/0000000121662407',
'ORCID': ' ',
'Qualification': 'Thesis (Ph.D.)',
'Subject Discipline': 0,
'Supervisor(s)': ' ',
'Title': 'Computation and measurement of turbulent flow through idealized turbine blade passages'}
```
### Data Fields
[More Information Needed]
### Data Splits
This dataset contains a single split `train`.
## Dataset Creation
[More Information Needed]
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
[More Information Needed]
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The data is licensed under the [CC BY 4.0 Attribution](https://creativecommons.org/licenses/by/4.0/) license.
### Citation Information
|
nunorc/squad_v1_pt | nunorc | 2024-07-17T06:57:39Z | 127 | 12 | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:pt",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1606.05250",
"region:us"
] | [
"question-answering"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- pt
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
- open-domain-qa
pretty_name: SquadV1Pt
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: train
num_bytes: 85322985
num_examples: 87599
- name: validation
num_bytes: 11265418
num_examples: 10570
download_size: 17430106
dataset_size: 96588403
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "squad_v1_pt"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/nunorc/squad-v1.1-pt](https://github.com/nunorc/squad-v1.1-pt)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 39.53 MB
- **Size of the generated dataset:** 96.72 MB
- **Total amount of disk used:** 136.25 MB
### Dataset Summary
Portuguese translation of the SQuAD dataset. The translation was performed automatically using the Google Cloud API.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 39.53 MB
- **Size of the generated dataset:** 96.72 MB
- **Total amount of disk used:** 136.25 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [0],
"text": ["Saint Bernadette Soubirous"]
},
"context": "\"Arquitetonicamente, a escola tem um caráter católico. No topo da cúpula de ouro do edifício principal é uma estátua de ouro da ...",
"id": "5733be284776f41900661182",
"question": "A quem a Virgem Maria supostamente apareceu em 1858 em Lourdes, na França?",
"title": "University_of_Notre_Dame"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
### Data Splits
| name | train | validation |
| ------- | ----: | ---------: |
| default | 87599 | 10570 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
argilla/dpo-mix-7k | argilla | 2024-07-16T13:31:40Z | 880 | 162 | [
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"library:argilla",
"region:us",
"distilabel",
"synthetic",
"dpo",
"argilla"
] | [] | 2024-01-30T05:49:25Z | null | ---
language:
- en
license: mit
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: dataset
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen_rating
dtype: float64
- name: rejected_rating
dtype: float64
splits:
- name: train
num_bytes: 41362946
num_examples: 6750
- name: test
num_bytes: 4586808
num_examples: 750
download_size: 24232011
dataset_size: 45949754
tags:
- distilabel
- synthetic
- dpo
- argilla
---
# Argilla DPO Mix 7K Dataset
> A small cocktail combining DPO datasets built by Argilla with [distilabel](https://github.com/argilla-io/distilabel). The goal is to provide a small, high-quality DPO dataset by keeping only highly rated chosen responses.
<div>
<img src="https://cdn-uploads.huggingface.co/production/uploads/60420dccc15e823a685f2b03/Csd2-zPji7iwIxyz6UFe1.webp">
</div>
<p align="center">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
## Datasets mixed
As already mentioned, this dataset mixes the following datasets:
* [`argilla/distilabel-capybara-dpo-7k-binarized`](https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized): random sample of highly scored chosen responses (>=4).
* [`argilla/distilabel-intel-orca-dpo-pairs`](https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs): random sample of highly scored chosen responses (>=8).
* [`argilla/ultrafeedback-binarized-preferences-cleaned`](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences-cleaned): random sample of highly scored chosen responses (>=4).
The samples have been randomly selected from the original datasets with a proportion of 0.33 each, as can be seen via the `dataset` column of the dataset.
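To check those proportions yourself, a minimal sketch with 🤗 `datasets`:
```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("argilla/dpo-mix-7k", split="train")
counts = Counter(ds["dataset"])
for source, n in counts.items():
    print(f"{source}: {n} ({n / len(ds):.2%})")
```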
## Next steps
* Adding more samples
* Use data selection techniques to improve the diversity, usefulness, and complexity of the dataset. |
argilla/distilabel-capybara-dpo-7k-binarized | argilla | 2024-07-16T13:30:29Z | 1,509 | 180 | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"library:argilla",
"region:us",
"Physics",
"Biology",
"Math",
"Chemistry",
"Culture",
"Logic",
"Roleplay",
"rlaif",
"rlhf",
"dpo",
"distilabel",
"synthetic",
"argilla"
] | [
"question-answering",
"text-generation"
] | 2024-01-26T08:36:14Z | null | ---
license: apache-2.0
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- Physics
- Biology
- Math
- Chemistry
- Culture
- Logic
- Roleplay
- rlaif
- rlhf
- dpo
- distilabel
- synthetic
- argilla
pretty_name: CapybaraDPO-7k
size_categories:
- 1K<n<10K
dataset_info:
features:
- name: source
dtype: string
- name: conversation
list:
- name: input
dtype: string
- name: output
dtype: string
- name: original_response
dtype: string
- name: generation_prompt
sequence: string
- name: raw_generation_responses
sequence: string
- name: new_generations
sequence: string
- name: prompt
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rating_chosen
dtype: int64
- name: rating_rejected
dtype: int64
- name: chosen_model
dtype: string
- name: rejected_model
dtype: string
splits:
- name: train
num_bytes: 348791651
num_examples: 7563
download_size: 155776049
dataset_size: 348791651
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Capybara-DPO 7K binarized
> A DPO dataset built with [distilabel](https://github.com/argilla-io/distilabel) atop the awesome [LDJnr/Capybara](https://huggingface.co/datasets/LDJnr/Capybara)
> This is a preview version to collect feedback from the community. v2 will include the full base dataset and responses from more powerful models.
<div>
<img src="https://cdn-uploads.huggingface.co/production/uploads/60420dccc15e823a685f2b03/Vmr0FtTvnny6Snm-UDM_n.png">
</div>
<p align="center">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
## Why?
Multi-turn dialogue data is key to fine-tuning capable chat models. Multi-turn preference data has been used by the most relevant RLHF works (Anthropic, Meta Llama2, etc.). Unfortunately, there are very few open multi-turn datasets for DPO/RLHF.
This dataset is the first of a series of datasets to fill this gap for the Open Source AI community.
Why Capybara? Because it's 🔥
## Dataset structure
Here's a video showing the dataset structure using Argilla UI. For preference tuning, chosen and rejected mean the best/worst response to the last turn.
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/60420dccc15e823a685f2b03/KoYK-Or0JNNVS9PNLF8jJ.mp4"></video>
## How to use this dataset
This dataset is a multi-turn preference dataset to improve chat capabilities of open-source LLMs. Chosen and rejected pairs are formatted following OpenAI's conversation format with potentially several turns between a user and an assistant.
To use this dataset for DPO, use only the last assistant message as `chosen`/`rejected` and the rest as the `prompt`.
Let's see an example, step by step.
First, let's keep only highly scored chosen responses (the scale is 1-5) and filter out very long conversations:
```python
from datasets import load_dataset

capy = load_dataset("argilla/distilabel-capybara-dpo-7k-binarized", split="train")

# keep only conversations whose chosen response scored 4 or 5
capy = capy.filter(
  lambda r: r["rating_chosen"] >= 4
)

# drop conversations with 18 or more messages
capy = capy.map(lambda r: {"messages": len(r["chosen"])}).filter(lambda r: r["messages"] < 18)
```
Then let's prepare this in the chatml prompt and `trl` format:
```python
# NOTE: assumes `tokenizer` (a tokenizer with a chat template) and `system`
# (a system prompt string, possibly empty) are already defined.
def chatml_format(example):
# get everything except the last message as input
prompt = tokenizer.apply_chat_template(example["chosen"][:-1], tokenize=False, add_generation_prompt=True)
# get the last assistant responses
chosen = example["chosen"][-1]["content"] + "</s>"
rejected = example["rejected"][-1]["content"] + "</s>"
return {
"prompt": system + prompt,
"chosen": chosen,
"rejected": rejected,
}
# Save columns
original_columns = capy.column_names
# Format dataset
capy = capy.map(
chatml_format,
remove_columns=original_columns
)
```
The dataset is now ready to be used for DPO fine-tuning!
In our benchmarks with 7B models, we've seen this is a challenging dataset to learn from; the best results can be achieved by mixing it with other datasets like this [dpo mix 7k](https://huggingface.co/datasets/argilla/dpo-mix-7k). We'd love to hear from the community how this works with larger models and other hyperparams.
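For reference, a rough sketch of that mixing step, assuming `capy` has already been formatted with `chatml_format` as shown above and that the same formatting is applied to the mix before concatenation:
```python
from datasets import load_dataset, concatenate_datasets

mix = load_dataset("argilla/dpo-mix-7k", split="train")
# dpo-mix-7k also stores `chosen`/`rejected` as lists of chat messages,
# so the same formatting function can be reused here.
mix = mix.map(chatml_format, remove_columns=mix.column_names)

combined = concatenate_datasets([capy, mix]).shuffle(seed=42)
```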
## How we've built this dataset
### Generate responses from 3 different OSS models
In the spirit of UltraFeedback, in this step we generate three responses to the last user message using OSS 7B models and distilabel's `LLMPool` and the vLLM engine. We use Notus7B, NeuralBeagle and OpenHermes-2.5.
Additionally, the original Capybara dataset already has a generated assistant response (the last assistant response), which we keep for the next step.
```python
from distilabel.llm import LLM, LLMPool, ProcessLLM
from distilabel.tasks import TextGenerationTask, Task
from distilabel.tasks.prompt import Prompt
from distilabel.dataset import DatasetCheckpoint
from distilabel.pipeline import Pipeline
from datasets import load_dataset
from dataclasses import dataclass
from pathlib import Path
dataset = load_dataset("LDJnr/Capybara", split="train")
here = Path(__file__).parent.resolve()
def extract_conversation(r):
all_but_last = r["conversation"][:-1]
all_but_last.append({"input": r["conversation"][-1]["input"]})
last = r["conversation"][-1]["output"]
return {"input": all_but_last, "original_response": last}
dataset = dataset.map(extract_conversation)
@dataclass
class NotusChatTextGeneration(TextGenerationTask):
    # custom class to generate prompts in the chatml format
    # skipped for brevity
    ...

@dataclass
class ChatMLTextGeneration(TextGenerationTask):
    # custom class to generate prompts in the chatml format
    # skipped for brevity
    ...
save_frequency = len(dataset) // 1000
checkpointing = DatasetCheckpoint(path=here / "checkpoint_generation", save_frequency=save_frequency)
def load_notus(task: Task) -> LLM:
import os
from distilabel.llm import vLLM
from vllm import LLM
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
return vLLM(
vllm=LLM(
model="argilla/notus-7b-v1",
trust_remote_code=True
),
task=task,
max_new_tokens=1024,
temperature=1,
)
def load_beagle(task: Task) -> LLM:
import os
from distilabel.llm import vLLM
from vllm import LLM
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
return vLLM(
vllm=LLM(
model="mlabonne/NeuralBeagle14-7B",
trust_remote_code=True
),
task=task,
max_new_tokens=1024,
temperature=1,
)
def load_hermes(task: Task) -> LLM:
import os
from distilabel.llm import vLLM
from vllm import LLM
os.environ["CUDA_VISIBLE_DEVICES"] = "2"
return vLLM(
vllm=LLM(
model="teknium/OpenHermes-2.5-Mistral-7B",
trust_remote_code=True
),
task=task,
max_new_tokens=1024,
temperature=1,
)
llm_pool = LLMPool(
[
ProcessLLM(task=NotusChatTextGeneration(), load_llm_fn=load_notus),
ProcessLLM(task=ChatMLTextGeneration(), load_llm_fn=load_beagle),
ProcessLLM(task=ChatMLTextGeneration(), load_llm_fn=load_hermes),
]
)
pipe_generation_pool = Pipeline(generator=llm_pool)
dataset = pipe_generation_pool.generate(
dataset=dataset,
num_generations=len(llm_pool.llms),
batch_size=32,
display_progress_bar=True,
checkpoint_strategy=checkpointing,
)
```
### Generate a preference dataset from 4 responses
At this point, we have 4 responses to each multi-turn dialogue. We will now use distilabel's `UltraFeedback.for_overall_quality()` preference model to judge the quality of responses. We use gpt-4-turbo but could have used other models.
```python
from distilabel.tasks import UltraFeedbackTask
from distilabel.llm import OpenAILLM
from distilabel.pipeline import Pipeline
from datasets import load_dataset
def format_conversation(r):
mapping_role = {"input": "<|user|>\n", "output":"<|assistant|>\n"}
all_but_last = r["conversation"][:-1]
all_but_last.append({"input": r["conversation"][-1]["input"]})
input = ""
for e in all_but_last:
for k,v in e.items():
input += f"{mapping_role[k]}{v}</s>\n"
return {"input": input}
# this formats the conversation input
# one could choose other format
prepared_dataset = dataset.map(format_conversation)
# the LLM Judge will evaluate each response to the
# last user message taking into account the conversation history
labeler = OpenAILLM(
task=UltraFeedbackTask.for_overall_quality(),
model="gpt-4-1106-preview",
num_threads=8,
max_new_tokens=512,
)
distilabeler = Pipeline(
labeller=labeler
)
# this computes ratings and natural language critiques for each pair
distiset = distilabeler.generate(dataset=prepared_dataset, num_generations=4, display_progress_bar=True)
```
This preference step is also useful to evaluate the performance of the four models (the 3 newly generated responses plus the original response in Capybara):

## Benchmark results
We've tested this new dataset by preference tuning [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B). The resulting model is [CapybaraHermes](https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B).
CapybaraHermes has been preference tuned with LoRA and TRL for 3 epochs using argilla's [dpo mix 7k](https://huggingface.co/datasets/argilla/dpo-mix-7k).
To test the impact on multi-turn performance we have used MTBench. We also include the Nous Benchmark results and Mistral-7B-Instruct-v0.2 for reference as it's a strong 7B model on MTBench:
| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | MTBench First Turn | MTBench Second Turn | Nous avg. | MTBench avg. |
|-----------------------------------|---------|---------|------------|----------|------------|-------------|-----------|--------------|
| CapybaraHermes-2.5-Mistral-7B | **43.8** | **73.35** | 57.07 | **42.44** | 8.24375 | **7.5625** | 54.16 | **7.903125** |
| teknium/OpenHermes-2.5-Mistral-7B | 42.75 | 72.99 | 52.99 | 40.94 | **8.25** | 7.2875 | 52.42 | 7.76875 |
| Mistral-7B-Instruct-v0.2 | 38.5 | 71.64 | **66.82** | 42.29 | 7.8375 | 7.1 | **54.81** | 7.46875 |
The most interesting aspect in the context of the capybara-dpo dataset is the increased performance in MTBench Second Turn scores.
For the merge lovers, we also preference tuned Beagle14-7B with a mix of capybara-dpo and distilabel orca pairs using the same recipe as NeuralBeagle (see [YALL - Yet Another LLM Leaderboard](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard) for reference):
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|------------------------------------------------------------------------------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[DistilabelBeagle14-7B](https://huggingface.co/dvilasuero/DistilabelBeagle14-7B)| 45.29| 76.92| 71.66| 48.78| 60.66| |
Locutusque/function-calling-chatml | Locutusque | 2024-07-16T11:50:45Z | 101 | 163 | [
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-02-02T21:51:35Z | null | ---
dataset_info:
features:
- name: system_message
dtype: string
- name: function_description
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 311913135
num_examples: 112960
download_size: 107035875
dataset_size: 311913135
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
---
# Dataset Card for "function-calling-chatml"
Converted glaiveai/Glaive-function-calling-v2 to chatml format.
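A minimal loading sketch with 🤗 `datasets`:
```python
from datasets import load_dataset

ds = load_dataset("Locutusque/function-calling-chatml", split="train")

# Each row carries a system_message, a function_description, and a list of
# conversation turns with `from`/`value` keys, as in the example entry below.
for turn in ds[0]["conversations"]:
    print(f'{turn["from"]}: {turn["value"][:80]}')
```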
## Example entry
```
[ { "from": "system", "value": "You are a helpful assistant with access to the following functions. Use them if required -{\n \"name\": \"create_contact\",\n \"description\": \"Create a new contact\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"name\": {\n \"type\": \"string\",\n \"description\": \"The name of the contact\"\n },\n \"email\": {\n \"type\": \"string\",\n \"description\": \"The email address of the contact\"\n }\n },\n \"required\": [\n \"name\",\n \"email\"\n ]\n }\n}" }, { "from": "human", "value": "I need to create a new contact for my friend John Doe. His email is [email protected]." }, { "from": "function-call", "value": "{\"name\": \"create_contact\", \"arguments\": '{\"name\": \"John Doe\", \"email\": \"[email protected]\"}'}" }, { "from": "function-response", "value": "{\"status\": \"success\", \"message\": \"Contact for John Doe with email [email protected] has been created successfully.\"}" }, { "from": "gpt", "value": "I have successfully created a new contact for your friend John Doe with the email [email protected]." } ]
``` |
CAiRE/ASCEND | CAiRE | 2024-07-16T08:56:04Z | 376 | 35 | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:original",
"language:en",
"language:zh",
"license:cc-by-sa-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2112.06223",
"region:us",
"speech-recognition",
"code-switching"
] | [
"automatic-speech-recognition"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
- zh
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids: []
pretty_name: 'ASCEND: A Spontaneous Chinese-English Dataset for Code-switching in
Multi-turn Conversation'
tags:
- speech-recognition
- code-switching
dataset_info:
config_name: main
features:
- name: id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: duration
dtype: float32
- name: language
dtype: string
- name: original_speaker_id
dtype: int64
- name: session_id
dtype: int64
- name: topic
dtype: string
splits:
- name: train
num_bytes: 1014573740.14
num_examples: 9869
- name: test
num_bytes: 106171230.135
num_examples: 1315
- name: validation
num_bytes: 106772517.43
num_examples: 1130
download_size: 1223536062
dataset_size: 1227517487.7050002
configs:
- config_name: main
data_files:
- split: train
path: main/train-*
- split: test
path: main/test-*
- split: validation
path: main/validation-*
default: true
---
# Dataset Card for ASCEND
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Usage](#usage)
- [Dataset Structure](#dataset-structure)
- [Data Splits](#data-instances)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** https://arxiv.org/abs/2112.06223
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
ASCEND (A Spontaneous Chinese-English Dataset) introduces a high-quality corpus of spontaneous, multi-turn, conversational Chinese-English code-switching speech collected in Hong Kong. ASCEND consists of 10.62 hours of spontaneous speech with a total of ~12.3K utterances. The corpus is split into 3 sets: training, validation, and test, with a ratio of 8:1:1 while maintaining a balanced gender proportion in each set.
### Supported Tasks and Leaderboards
Code-switching
### Languages
Chinese and English
## Usage
To obtain the full dataset (complete with train, validation, and test set), simply run this:
```
import datasets
dataset = datasets.load_dataset("CAiRE/ASCEND")
```
## Dataset Structure
A typical data point comprises the path to the audio file, the loaded audio array, and its transcription. Additional fields include datapoint id, duration, language, speaker id, session id, and topic.
```
{
'id': '00644',
'path': '.cache/huggingface/datasets/downloads/extracted/f0b33b5266cd9452ee310eef3577cf7adb7f29aa54dbff74b9a8ee406a55d614/waves/ses2_spk3_L13101_189.900_5.490.wav',
'audio': {
'path': '.cache/huggingface/datasets/downloads/extracted/f0b33b5266cd9452ee310eef3577cf7adb7f29aa54dbff74b9a8ee406a55d614/waves/ses2_spk3_L13101_189.900_5.490.wav',
'array': array([-6.1035156e-05, -1.8310547e-04, 3.0517578e-05, ...,
0.0000000e+00, -3.0517578e-05, 0.0000000e+00
], dtype = float32),
'sampling_rate': 16000
},
'transcription': '因为你不可能邀你的female friends去说走我们去play basketball',
'duration': 5.489999771118164,
'language': 'mixed',
'original_speaker_id': 3,
'session_id': 2,
'topic': 'sports'
}
```
### Data Splits
Number of utterances: 9,869 train, 1,130 validation, and 1,315 test.
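A small sketch building on the Usage snippet above, computing the total speech duration per split:
```
# `dataset` is the DatasetDict loaded in the Usage section above.
for split in ["train", "validation", "test"]:
    hours = sum(dataset[split]["duration"]) / 3600
    print(f"{split}: {len(dataset[split])} utterances, {hours:.2f} hours")
```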
## Additional Information
For comprehensive explanations, please check [our paper](https://arxiv.org/pdf/2112.06223.pdf).
### Licensing Information
Creative Common Attribution Share-Alike 4.0 International (CC-BY-SA 4.0)
### Citation Information
If you use our dataset, please cite us:
```
@inproceedings{lovenia2022ascend,
title={ASCEND: A Spontaneous Chinese-English Dataset for Code-switching in Multi-turn Conversation},
author={Lovenia, Holy and Cahyawijaya, Samuel and Winata, Genta Indra and Xu, Peng and Yan, Xu and Liu, Zihan and Frieske, Rita and Yu, Tiezheng and Dai, Wenliang and Barezi, Elham J and others},
booktitle={Proceedings of the 13th Language Resources and Evaluation Conference (LREC)},
  year={2022}
}
``` |
b3x0m/Chinese-H-Novels | b3x0m | 2024-07-12T02:32:57Z | 1,036 | 211 | [
"task_categories:text-classification",
"task_categories:summarization",
"task_categories:token-classification",
"task_categories:text2text-generation",
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:sentence-similarity",
"language:zh",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"art"
] | [
"text-classification",
"summarization",
"token-classification",
"text2text-generation",
"question-answering",
"text-generation",
"fill-mask",
"sentence-similarity"
] | 2023-11-27T17:19:10Z | null | ---
language:
- zh
size_categories:
- 1B<n<10B
task_categories:
- text-classification
- summarization
- token-classification
- text2text-generation
- question-answering
- text-generation
- fill-mask
- sentence-similarity
pretty_name: H-novel-corpus
tags:
- art
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 95784400372
num_examples: 934354429
download_size: 60873072258
dataset_size: 95784400372
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
Update 12/07/2024: converted to Parquet for easier downloading.
Chinese 18+ novels corpus. Use at your own risk; you and only you are responsible for every choice you make.
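Since the corpus ships as roughly 60 GB of Parquet, a streaming sketch avoids a full download:
```python
from datasets import load_dataset

ds = load_dataset("b3x0m/Chinese-H-Novels", split="train", streaming=True)
for i, row in enumerate(ds):
    print(row["text"][:50])
    if i == 2:
        break
```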
(͡ ° ͜ʖ ͡ °)
tags: socks, garter belt, foot fetish, ntr, netori.....
Thanks [Moleys/Numeron](https://huggingface.co/Numeronvr) for the dataset donation. |
nyu-visionx/Cambrian-10M | nyu-visionx | 2024-07-08T04:34:51Z | 9,177 | 113 | [
"task_categories:visual-question-answering",
"task_categories:question-answering",
"language:en",
"license:apache-2.0",
"size_categories:1M<n<10M",
"arxiv:2406.16860",
"region:us"
] | [
"visual-question-answering",
"question-answering"
] | 2024-05-30T03:27:31Z | null | ---
task_categories:
- visual-question-answering
- question-answering
language:
- en
size_categories:
- 1M<n<10M
license: apache-2.0
---
# Cambrian-10M Dataset
**Please see paper & website for more information:**
- https://cambrian-mllm.github.io/
- https://arxiv.org/abs/2406.16860
## Overview
Cambrian-10M is a comprehensive dataset designed for instruction tuning, particularly in multimodal settings involving visual interaction data. The dataset is crafted to address the scarcity of high-quality multimodal instruction-tuning data and to maintain the language abilities of multimodal large language models (LLMs).
## Data Collection
### Multimodal Data Sources
Unlike language data, multimodal instruction-tuning data is much rarer and harder to collect. To address this, we leverage existing multimodal benchmarks and datasets involving visual interaction data, such as Visual Question Answering (VQA) and Optical Character Recognition (OCR) data. This approach helps mitigate the catastrophic forgetting commonly observed when fine-tuning multimodal LLMs.
### Language-Only Instruction-Following Data
To ensure the preservation of language capabilities, we also collect a small volume of high-quality language-only instruction-following data from the community.
### Targeted Internet Data Collection Engine
We introduce a data engine designed to create large-scale, reliable, high-quality knowledge-based multimodal instruction tuning data. The engine works as follows:
1. **Field and Subfield Selection**: The engine selects a target field and subfield, such as “Physics”.
2. **Topic Identification**: An LLM like GPT-4 identifies topics within the field (e.g., “Newton’s Laws”).
3. **Reliable Source Search**: The engine searches reliable sources like Wikipedia for each topic.
4. **Text-Image Association Extraction**: The parser extracts image-caption-text tuples from the sources.
5. **Q&A Pair Generation**: The caption-text is fed to an LLM, such as GPT-3.5, to generate instruction-type Q&A pairs about the image.
These Q&A pairs, along with the images, form our VQA dataset.
### GPT Rewriting
We also incorporate recent MLLMs such as GPT-4V and GPT-4o to generate extended responses and free-form instruction-tuning data. To work with the GPT-generated data, use [gpt4v_77k](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/resolve/main/jsons/gpt4v_77k.jsonl) or the curated [gpt4o_60k](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/resolve/main/jsons/gpt4o_60k.jsonl):
- [gpt4v_77k](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/resolve/main/jsons/gpt4v_77k.jsonl) contains more extended responses from Cambrian-10M.
- [gpt4o_60k](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/resolve/main/jsons/gpt4o_60k.jsonl) contains more creative data in visual interactions.
## Cambrian-10M Composition
The Cambrian-10M dataset consists of approximately 9.784 million data points, offering a diverse range of data for various research applications. The composition of the dataset is visualized in Fig. 9.
## Cambrian-7M
We make an initial effort to study data curation. In particular, we find the following data ratio to perform best:
- **Language**: 21.00%
- **General**: 34.52%
- **OCR**: 27.22%
- **Counting**: 8.71%
- **Math**: 7.20%
- **Code**: 0.87%
- **Science**: 0.88%

## Getting Started with Cambrian Data
Before you start, ensure you have sufficient storage space to download and process the data.
Cambrian-10M contains a total of 10 million images collected from previous datasets, an internet data engine, and GPT-generated instruction tuning data. Follow these steps to get started:
1. **Download the Data Repository**
Download the data repository. Note that due to Hugging Face policy constraints, the data folder is archived into tar files. We also split the `allava` and `data_engine` data into smaller tar files because they exceed the 50 GB size limit.
2. **Merge Tar Files**
To explore the Cambrian-10M dataset, first merge the different parts of `allava` and `data_engine` together:
```bash
python merge_tars.py
```
3. **Extract Tar Files**
Then, extract all the tar files into the current directory:
```bash
python extract.py
```
4. **Training with Cambrian**
You can train with the raw [Cambrian10M](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/resolve/main/jsons/Cambrian10M.jsonl) or the curated [Cambrian7M](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/resolve/main/jsons/Cambrian7M.jsonl). We recommend using the curated [Cambrian7M with system prompt](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/blob/main/jsons/Cambrian7M_withsystemprompt.jsonl), which also alleviates the 'answer machine' problem. |
SetFit/bbc-news | SetFit | 2024-07-04T13:00:48Z | 886 | 16 | [
"task_categories:text-classification",
"language:en",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
task_categories:
- text-classification
language:
- en
pretty_name: BBC News Topic Dataset
---
# BBC News Topic Dataset
Dataset on [BBC News Topic Classification](https://www.kaggle.com/yufengdev/bbc-text-categorization/data) consisting of 2,225 articles published on the BBC News website during 2004-2005. Each article is labeled with one of 5 categories: business, entertainment, politics, sport or tech.
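A minimal loading sketch with the 🤗 `datasets` library:
```python
from datasets import load_dataset

ds = load_dataset("SetFit/bbc-news")
print(ds)              # available splits and columns
print(ds["train"][0])  # a single labeled article
```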
Original source for this dataset:
- Derek Greene, Pádraig Cunningham, “Practical Solutions to the Problem of Diagonal Dominance in Kernel Document Clustering,” in Proc. 23rd International Conference on Machine learning (ICML’06), 2006
[Raw dataset](http://mlg.ucd.ie/datasets/bbc.html) - [Paper PDF](http://derekgreene.com/papers/greene06icml.pdf)
All rights, including copyright, in the content of the original articles are owned by the BBC. |
lmms-lab/Video-MME | lmms-lab | 2024-07-04T08:14:20Z | 17,892 | 40 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-06-07T12:06:37Z | null | ---
dataset_info:
config_name: videomme
features:
- name: video_id
dtype: string
- name: duration
dtype: string
- name: domain
dtype: string
- name: sub_category
dtype: string
- name: url
dtype: string
- name: videoID
dtype: string
- name: question_id
dtype: string
- name: task_type
dtype: string
- name: question
dtype: string
- name: options
sequence: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1003241.0
num_examples: 2700
download_size: 405167
dataset_size: 1003241.0
configs:
- config_name: videomme
data_files:
- split: test
path: videomme/test-*
---
|
agentsea/wave-ui-25k | agentsea | 2024-07-03T04:13:40Z | 1,118 | 32 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-06-28T18:38:08Z | 2 | ---
dataset_info:
features:
- name: image
dtype: image
- name: instruction
dtype: string
- name: bbox
sequence: float64
- name: resolution
sequence: int64
- name: source
dtype: string
- name: platform
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: type
dtype: string
- name: OCR
dtype: string
- name: language
dtype: string
- name: purpose
dtype: string
- name: expectation
dtype: string
splits:
- name: train
num_bytes: 10884587492.75
num_examples: 24978
download_size: 10577935056
dataset_size: 10884587492.75
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# WaveUI-25k
This dataset contains 25k examples of labeled UI elements. It is a subset of a collection of ~80k preprocessed examples assembled from the following sources:
- [WebUI](https://uimodeling.github.io/)
- [RoboFlow](https://universe.roboflow.com/roboflow-gw7yv/website-screenshots)
- [GroundUI-18K](https://huggingface.co/datasets/agent-studio/GroundUI-18K)
These datasets were preprocessed to have matching schemas and to filter out unwanted examples, such as duplicated, overlapping and low-quality datapoints. We also filtered out many text elements which were not in the main scope of this work.
The WaveUI-25k dataset includes the original fields from the source datasets, as well as the following additional fields, obtained from the annotation process:
- `name`: A descriptive name of the element.
- `description`: A long detailed description of the element
- `type`: The type of the element.
- `OCR`: OCR of the element. Set to `null` if no text is available.
- `language`: The language of the OCR text, if available. Set to `null` if no text is available.
- `purpose`: A general purpose of the element.
- `expectation`: An expectation on what will happen when you click this element.
The dataset can be visualized better in [this space](https://huggingface.co/spaces/agentsea/wave-ui-viz).
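To peek at a few examples without downloading the full ~10 GB split, a streaming sketch:
```python
from datasets import load_dataset

ds = load_dataset("agentsea/wave-ui-25k", split="train", streaming=True)
row = next(iter(ds))
print(row["instruction"], row["name"], row["type"])
print(row["bbox"], row["resolution"])  # element bounding box and screenshot size
```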
## Data sources
- [WebUI](https://uimodeling.github.io/)
```bibtex
@article{wu2023webui,
title={WebUI: A Dataset for Enhancing Visual UI Understanding with Web Semantics},
author={Jason Wu and Siyan Wang and Siman Shen and Yi-Hao Peng and Jeffrey Nichols and Jeffrey Bigham},
journal={ACM Conference on Human Factors in Computing Systems (CHI)},
year={2023}
}
```
- [RoboFlow](https://universe.roboflow.com/roboflow-gw7yv/website-screenshots)
```bibtex
@misc{
website-screenshots_dataset,
title = { Website Screenshots Dataset },
type = { Open Source Dataset },
author = { Brad Dwyer },
howpublished = { \url{ https://universe.roboflow.com/roboflow-gw7yv/website-screenshots } },
url = { https://universe.roboflow.com/roboflow-gw7yv/website-screenshots },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { aug },
note = { visited on 2024-07-02 },
}
```
- [GroundUI-18K](https://huggingface.co/datasets/agent-studio/GroundUI-18K)
```bibtex
@article{zheng2024agentstudio,
title={AgentStudio: A Toolkit for Building General Virtual Agents},
author={Longtao Zheng and Zhiyuan Huang and Zhenghai Xue and Xinrun Wang and Bo An and Shuicheng Yan},
journal={arXiv preprint arXiv:2403.17918},
year={2024}
}
``` |
QuietImpostor/Claude-3-Opus-Claude-3.5-Sonnnet-9k | QuietImpostor | 2024-06-30T20:43:11Z | 90 | 22 | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation"
] | 2024-06-30T20:39:49Z | 2 | ---
task_categories:
- text-generation
pretty_name: Claude 3 Opus + Claude 3.5 Sonnet Ultrachat ShareGPT
size_categories:
- 1K<n<10K
---
# Overview
This dataset is a combination of samples from Sao10k's original Claude 3 Opus dataset and a personally created Claude 3.5 Sonnet dataset.
Due to budget constraints, approximately 700 samples are from Claude 3.5 Sonnet, with the remainder sourced from the Claude 3 Opus dataset. |
nnenufar/speakerVerification_PTBR | nnenufar | 2024-06-29T14:20:34Z | 41,704 | 0 | [
"task_categories:audio-classification",
"language:pt",
"license:apache-2.0",
"size_categories:10K<n<100K",
"modality:audio",
"region:us"
] | [
"audio-classification"
] | 2024-06-27T01:07:59Z | null | ---
license: apache-2.0
task_categories:
- audio-classification
language:
- pt
size_categories:
- 10K<n<100K
---
# Dataset card
<!-- Provide a quick summary of the dataset. -->
This dataset includes ~80k samples of speech audio in Brazilian Portuguese. Samples have variable length ranging from 1 to 4 seconds, with a sampling rate of 16kHz. The metadata file includes speaker tags and corresponding labels for each sample, making it appropriate for speaker identification and speaker verification tasks.
## Dataset Description
Audio samples are taken from three bigger corpora: C-ORAL Brasil, NURC Recife and NURC SP. Please take into consideration the licenses details of the original corpora.
All the recordings come from monologues, but there are brief moments of interruption by the researchers that conducted the recordings.
The recordings comprise accents from three different Brazilian cities: Belo Horizonte, São Paulo and Recife.
The dataset is structured in a way that makes it possible to load with HF Audiofolder. In my experience, it is easier to clone the dataset repo to your machine and then load it by specifying the local data_dir than to load it directly from the Hub using the dataset tag.
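A minimal sketch of that local-clone approach (the clone path below is just a placeholder):
```python
from datasets import load_dataset

# After cloning this repository locally (git + git-lfs), point the generic
# audiofolder loader at the clone; the data_dir path is an assumed example.
ds = load_dataset("audiofolder", data_dir="./speakerVerification_PTBR")
print(ds)
```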
## Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **C-ORAL BRASIL:** [https://www.c-oral-brasil.org/c-oral-brasil-i.php]
- **NURC Recife:** [https://fale.ufal.br/projeto/nurcdigital/index.php?action=home]
- **NURC SP:** [http://tarsila.icmc.usp.br:8080/nurc/home]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
The models trained with this dataset should not be used to intentionally create hostile or alienating environments for people. |
ncbi/MedCalc-Bench-v1.0 | ncbi | 2024-06-26T20:51:59Z | 501 | 13 | [
"license:cc-by-sa-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-06-12T12:44:02Z | 2 | ---
license: cc-by-sa-4.0
dataset_info:
features:
- name: Row Number
dtype: int64
- name: Calculator ID
dtype: int64
- name: Calculator Name
dtype: string
- name: Category
dtype: string
- name: Output Type
dtype: string
- name: Note ID
dtype: string
- name: Note Type
dtype: string
- name: Patient Note
dtype: string
- name: Question
dtype: string
- name: Relevant Entities
dtype: string
- name: Ground Truth Answer
dtype: string
- name: Lower Limit
dtype: string
- name: Upper Limit
dtype: string
- name: Ground Truth Explanation
dtype: string
splits:
- name: train
num_bytes: 41265322
num_examples: 10053
- name: test
num_bytes: 4043748
num_examples: 1047
download_size: 19670625
dataset_size: 45309070
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
MedCalc-Bench is the first medical calculation dataset used to benchmark LLMs' ability to serve as clinical calculators. Each instance in the dataset consists of a patient note, a question asking to compute a specific clinical value, a final answer value, and a step-by-step solution explaining how the final answer was obtained. Our dataset covers 55 different calculation tasks. We hope this dataset serves as a call to improve the verbal and computational reasoning skills of LLMs in medical settings.
This dataset contains a training set of 10,053 instances and a test set of 1,047 instances.
## Contents inside the Training and Testing CSV
To download the CSV for the MedCalc-Bench evaluation dataset, please download the file `test_data.csv` inside the dataset folder of this repository. You can also download the test set split from HuggingFace at https://huggingface.co/datasets/ncbi/MedCalc-Bench.
In addition to the 1,047 evaluation instances, we also provide a training dataset of 10,053 instances which can be used for fine-tuning open-source LLMs (see Section C of the Appendix).
Each instance in the dataset contains the following information:
- Row Number: Specifies the index of the instance.
- Calculator ID: Specifies the integer ID of the calculator.
- Calculator Name: Specifies the name of the clinical calculation task.
- Category: Specifies the sub-category of the calculator. For equation-based calculators, the options are lab test, dosage, date, or physical and for rule-based calculators, the options are risk, severity, and diagnosis.
- Output Type: Specifies the format type that the calculator will return. The options are decimal, integer, date (MM/DD/YY), or time in terms of weeks and days (i.e. (17 weeks, 4 days)).
- Note ID: Specifies the ID of the patient note. The ID of the note will either be the ID given by Open-Patients or it will be an integer value if the patient note was handwritten by clinicians or synthesized by a template.
- Note Type: Specifies whether the patient note was synthesized by a clinician (Handwritten), produced from a template (Template), or was extracted from PMC-Patients (extracted).
- Patient Note: Specifies the patient note which provides the information needed to compute the final answer.
- Question: Specifies the question that is asked to the model to compute a specific medical value based on a particular calculator.
- Relevant Entities: Provides a dictionary of the parameters and their extracted values based on the patient note.
- Ground Truth Answer: Specifies the ground truth value without any units for the medical value that needs to be calculated.
- Lower Limit: For equation-based calculators whose output is a decimal, this value is 95% of the ground truth answer value. For all other cases, the lower limit is the same as the ground-truth value.
- Upper Limit: For equation-based calculators whose output is a decimal, this value is 105% of the ground truth answer value. For all other cases, the upper limit is the same as the ground-truth value.
- Ground Truth Explanation: The ground truth explanation for the data instance providing a step-by-step explanation for how the final answer was obtained.
## How to Use MedCalc-Bench
The training dataset of MedCalc-Bench can be used for fine-tuning LLMs. We have provided both the fine-tuned models and code for fine-tuning at our repository link: https://github.com/ncbi-nlp/MedCalc-Bench.
The test set of MedCalc-Bench is helpful for benchmarking LLMs under different settings. We provide instructions in the README of this repository for how to reproduce all of our results for all of the models using the different prompt settings.
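For example, a minimal sketch of loading the test split and scoring an answer against the tolerance bounds (the ground truth stands in for a model prediction here):
```python
from datasets import load_dataset

ds = load_dataset("ncbi/MedCalc-Bench-v1.0", split="test")
row = ds[0]

# Prompt assembled from the patient note and question; in a real run the
# prediction would come from the model under evaluation.
prompt = f"{row['Patient Note']}\n\n{row['Question']}"
prediction = row["Ground Truth Answer"]  # placeholder so the snippet runs end to end

# Decimal outputs are accepted anywhere inside [Lower Limit, Upper Limit];
# other output types must match the ground truth exactly.
try:
    correct = float(row["Lower Limit"]) <= float(prediction) <= float(row["Upper Limit"])
except ValueError:
    correct = prediction == row["Ground Truth Answer"]
print(correct)
```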
By experimenting with different LLMs and prompts, we hope our dataset demonstrates the potential and limitations of LLMs in clinical settings.
## License
Both the training and testing dataset of MedCalc-Bench are released under the CC-BY-SA 4.0 license. |
PleIAs/YouTube-Commons | PleIAs | 2024-06-26T08:08:14Z | 763 | 346 | [
"task_categories:text-generation",
"language:en",
"language:fr",
"language:es",
"language:pt",
"language:de",
"language:ru",
"license:cc-by-4.0",
"region:us",
"conversational"
] | [
"text-generation"
] | 2024-04-15T22:12:27Z | null | ---
language:
- en
- fr
- es
- pt
- de
- ru
license: cc-by-4.0
task_categories:
- text-generation
pretty_name: Youtube Commons Corpus
tags:
- conversational
dataset_info:
features:
- name: video_id
dtype: string
- name: video_link
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: channel
dtype: string
- name: channel_id
dtype: string
- name: date
dtype: string
- name: license
dtype: string
- name: original_language
dtype: string
- name: source_language
dtype: string
- name: transcription_language
dtype: string
- name: word_count
dtype: int64
- name: character_count
dtype: int64
splits:
- name: train
num_bytes: 3284822536
num_examples: 250000
download_size: 1830819739
dataset_size: 3284822536
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# 📺 YouTube-Commons 📺
**YouTube-Commons** is a collection of audio transcripts of 2,063,066 videos shared on YouTube under a CC-By license.
## Content
The collection comprises 22,709,724 original and automatically translated transcripts from 3,156,703 videos (721,136 individual channels).
In total, this represents nearly 45 billion words (44,811,518,375).
All the videos were shared on YouTube with a CC-BY license: the dataset provides all the necessary provenance information, including the title, link, channel name and upload date.
The corpus is multilingual, with a majority of English-language content (71%) among the original languages. Automated translations are provided for nearly all the videos in English, French, Spanish, German, Russian, Italian and Dutch.
## Uses
The collection aims to expand the availability of conversational data for research in AI, computational social science and digital humanities.
Most of the available resources under free licenses are written texts such as public domain works or open science articles.
The text can be used for training models and republished for reproducibility purposes.
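A streaming sketch for sampling a few transcripts without downloading the full corpus:
```python
from datasets import load_dataset

ds = load_dataset("PleIAs/YouTube-Commons", split="train", streaming=True)
for i, row in enumerate(ds):
    print(row["title"], "|", row["channel"], "|", row["license"], "|", row["word_count"])
    if i == 2:
        break
```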
## License and ethics
All the transcripts are part of a video shared under a CC-By license. In accordance with the provisions of the license, every YouTube channel is fully credited.
While content under a free license can be lawfully reproduced in any setting, there is currently a debate over the legitimacy and proper ethical use of free content for pre-training large language models.
In accordance with the philosophy of Creative Commons, we recommend that this set preferably be used for open research. Furthermore, the license requires that the contribution of each individual author be properly credited. In a research context, the best way to achieve this aim would be to fully release the data sources used for training or, at the very least, provide extensive open documentation.
## Future developments
The collection is far from covering the total amount of available YouTube videos under a Creative Commons license. We will continue to expand it significantly.
Other additional releases will also focus on transcripts from other video sources not available on YouTube (especially from public service/university websites).
## Acknowledgements
The corpus was stored and processed with the generous support of Scaleway. It was built up with the support and concerted efforts of the state start-up LANGU:IA (start-up d’Etat), supported by the French Ministry of Culture and DINUM, as part of the prefiguration of the service offering of the Alliance for Language technologies EDIC (ALT-EDIC).
Pleias corpus collection projects have also been facilitated by the open science LLM community's support, insights and cooperation (Occiglot, Eleuther AI, Allen AI).
<div style="text-align: center;">
<img src="https://github.com/mch-dd/datasetlogo/blob/main/scaleway.jpeg?raw=true" style="width: 33%; margin: 0 auto; display: inline-block;"/>
<img src="https://github.com/mch-dd/datasetlogo/blob/main/ministere.png?raw=true" style="width: 33%; margin: 0 auto; display: inline-block;"/>
<img src="https://github.com/mch-dd/datasetlogo/blob/main/occiglot.jpg?raw=true" style="width: 33%; margin: 0 auto; display: inline-block;"/>
</div>
|
community-datasets/urdu_fake_news | community-datasets | 2024-06-26T07:46:11Z | 114 | 2 | [
"task_categories:text-classification",
"task_ids:fact-checking",
"task_ids:intent-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:ur",
"license:unknown",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- ur
license:
- unknown
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- fact-checking
- intent-classification
pretty_name: Bend the Truth (Urdu Fake News)
dataset_info:
features:
- name: news
dtype: string
- name: label
dtype:
class_label:
names:
'0': Fake
'1': Real
- name: category
dtype:
class_label:
names:
'0': bus
'1': hlth
'2': sp
'3': tch
'4': sbz
splits:
- name: train
num_bytes: 1762901
num_examples: 638
- name: test
num_bytes: 799583
num_examples: 262
download_size: 1187921
dataset_size: 2562484
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for Bend the Truth (Urdu Fake News)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/MaazAmjad/Datasets-for-Urdu-news/)
- **Repository:** [Github](https://github.com/MaazAmjad/Datasets-for-Urdu-news/)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Maaz Amjad](https://github.com/MaazAmjad)
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- news: a string in Urdu
- label: the label indicating whether the provided news is real or fake.
- category: the intent of the news being presented. The 5 available classes are Sports, Health, Technology, Entertainment, and Business.
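A minimal loading sketch showing how the class labels map to names:
```python
from datasets import load_dataset

ds = load_dataset("community-datasets/urdu_fake_news")
print(ds["train"].features["label"].names)     # ['Fake', 'Real']
print(ds["train"].features["category"].names)  # ['bus', 'hlth', 'sp', 'tch', 'sbz']
```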
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@chaitnayabasava](https://github.com/chaitnayabasava) for adding this dataset. |
community-datasets/tamilmixsentiment | community-datasets | 2024-06-26T07:04:23Z | 94 | 4 | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:original",
"language:en",
"language:ta",
"license:unknown",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
- ta
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: Tamilmixsentiment
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': Positive
'1': Negative
'2': Mixed_feelings
'3': unknown_state
'4': not-Tamil
splits:
- name: train
num_bytes: 790124
num_examples: 11335
- name: validation
num_bytes: 89614
num_examples: 1260
- name: test
num_bytes: 218760
num_examples: 3149
download_size: 708889
dataset_size: 1098498
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Dataset Card for Tamilmixsentiment
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Tamilmixsentiment Homepage](https://dravidian-codemix.github.io/2020/index.html)
- **Repository:** [Tamilmixsentiment repository](https://dravidian-codemix.github.io/2020/datasets.html)
- **Paper:** [Corpus Creation for Sentiment Analysis in Code-Mixed Tamil-English Text](https://www.aclweb.org/anthology/2020.sltu-1.28/)
- **Leaderboard:** [Rank list](https://drive.google.com/file/d/1Mf8-No-63koGRwdF13RrO01NAFBlNmI0/view?usp=sharing)
- **Point of Contact:** [Bharathi Raja Chakravarthi](mailto:[email protected])
### Dataset Summary
The first gold-standard Tamil-English code-switched, sentiment-annotated corpus, containing 15,744 comment posts from YouTube. This makes it the largest general-domain sentiment dataset for this relatively low-resource language exhibiting the code-mixing phenomenon. A comment/post may contain more than one sentence, but on average each comment in the corpus consists of a single sentence. Each comment/post is annotated with sentiment polarity at the comment/post level. The dataset also exhibits class imbalance, reflecting real-world scenarios.
### Supported Tasks and Leaderboards
The task is to identify the sentiment polarity of code-mixed Tamil-English comments/posts collected from social media.
### Languages
Tamil-English code-switched text. The dataset contains all three types of code-mixed sentences - Inter-Sentential switch, Intra-Sentential switch and Tag switching. Most comments were written in Roman script with either Tamil grammar with English lexicon or English grammar with Tamil lexicon. Some comments were written in Tamil script with English expressions in between.
## Dataset Structure
### Data Instances
An example from the Tamilmixsentiment train set looks as follows:
```
text label
Trailer late ah parthavanga like podunga Positive
```
### Data Fields
- `text`: the Tamil-English code-mixed comment.
- `label`: the sentiment label, one of "Positive", "Negative", "Mixed_feelings", "unknown_state", "not-Tamil".
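For reference, a minimal sketch of loading the corpus with the `datasets` library and mapping the integer label back to its string name:
```
from datasets import load_dataset

ds = load_dataset("community-datasets/tamilmixsentiment")
example = ds["train"][0]
label_names = ds["train"].features["label"].names
# e.g. "Trailer late ah parthavanga like podunga -> Positive"
print(example["text"], "->", label_names[example["label"]])
```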
### Data Splits
The entire dataset of 15,744 sentences was randomly shuffled and split into three parts as follows:
| | train | validation | test |
|------------------------------|------:|-----------:|-----:|
| Tamilmixsentiment | 11335 | 1260 | 3149 |
## Dataset Creation
### Curation Rationale
Sentiment analysis has become important in social media research (Yang and Eisenstein, 2017). Until recently, such applications were built for high-resource languages and analysed monolingual utterances, but social media in multilingual communities contains a large amount of code-mixed text. Code-mixing is common among speakers in bilingual speech communities, and as English is seen as the language of prestige and education, the influence of English lexicon, connectives and phrases is common in spoken Tamil. Tamil has little annotated data for code-mixed scenarios. An annotated corpus developed for monolingual data cannot handle code-mixed usage, and it fails to yield good results due to the mixture of languages at different levels of linguistic analysis. This motivated the creation of this sentiment-annotated corpus of code-mixed Tamil-English text.
### Source Data
#### Initial Data Collection and Normalization
The data was scraped from YouTube. In total, 184,573 Tamil-related sentences were collected from YouTube comments on the trailers of movies released in 2019. Many of them were written entirely in English, fully in Tamil, or in code-mixed Tamil-English. The comments were therefore filtered by language identification at the comment level using the langdetect library: comments written fully in Tamil or fully in English were discarded, since monolingual resources are already available for those languages. Comments written in other languages such as Hindi, Malayalam, Urdu, Telugu, and Kannada were also identified. The comments were preprocessed by removing emoticons and applying a sentence-length filter: to obtain a code-mixed corpus of reasonable size with fairly well-defined sentiments that will be useful for future research, sentences with fewer than five words or more than 15 words (after cleaning) were removed. In the end, 15,744 Tanglish sentences remained.
#### Who are the source language producers?
YouTube users.
### Annotations
#### Annotation process
The annotation setup consisted of three steps. First, each sentence was annotated by two people. Second, a sentence was accepted if both annotators agreed; in the case of conflict, a third person annotated the sentence. Third, if all three annotators disagreed, two more annotators annotated the sentence.
#### Who are the annotators?
Eleven volunteers were involved in the process. All of them were native speakers of Tamil with diversity in gender, educational level and medium of instruction in their school education.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{chakravarthi-etal-2020-corpus,
title = "Corpus Creation for Sentiment Analysis in Code-Mixed {T}amil-{E}nglish Text",
author = "Chakravarthi, Bharathi Raja and
Muralidaran, Vigneshwaran and
Priyadharshini, Ruba and
McCrae, John Philip",
booktitle = "Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources association",
url = "https://www.aclweb.org/anthology/2020.sltu-1.28",
pages = "202--210",
abstract = "Understanding the sentiment of a comment from a video or an image is an essential task in many applications. Sentiment analysis of a text can be useful for various decision-making processes. One such application is to analyse the popular sentiments of videos on social media based on viewer comments. However, comments from social media do not follow strict rules of grammar, and they contain mixing of more than one language, often written in non-native scripts. Non-availability of annotated code-mixed data for a low-resourced language like Tamil also adds difficulty to this problem. To overcome this, we created a gold standard Tamil-English code-switched, sentiment-annotated corpus containing 15,744 comment posts from YouTube. In this paper, we describe the process of creating the corpus and assigning polarities. We present inter-annotator agreement and show the results of sentiment analysis trained on this corpus as a benchmark.",
language = "English",
ISBN = "979-10-95546-35-1",
}
```
### Contributions
Thanks to [@jamespaultg](https://github.com/jamespaultg) for adding this dataset. |
community-datasets/senti_ws | community-datasets | 2024-06-26T06:37:47Z | 105 | 2 | [
"task_categories:token-classification",
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:sentiment-scoring",
"task_ids:part-of-speech",
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:de",
"license:cc-by-sa-3.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"token-classification",
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
- machine-generated
language_creators:
- found
language:
- de
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
- text-classification
task_ids:
- text-scoring
- sentiment-scoring
- part-of-speech
pretty_name: SentiWS
dataset_info:
- config_name: pos-tagging
features:
- name: word
dtype: string
- name: pos-tag
dtype:
class_label:
names:
'0': NN
'1': VVINF
'2': ADJX
'3': ADV
splits:
- name: train
num_bytes: 75526
num_examples: 3471
download_size: 37314
dataset_size: 75526
- config_name: sentiment-scoring
features:
- name: word
dtype: string
- name: sentiment-score
dtype: float32
splits:
- name: train
num_bytes: 61642
num_examples: 3471
download_size: 45116
dataset_size: 61642
configs:
- config_name: pos-tagging
data_files:
- split: train
path: pos-tagging/train-*
default: true
- config_name: sentiment-scoring
data_files:
- split: train
path: sentiment-scoring/train-*
---
# Dataset Card for SentiWS
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://wortschatz.uni-leipzig.de/en/download
- **Repository:** [Needs More Information]
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2010/pdf/490_Paper.pdf
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
SentimentWortschatz, or SentiWS for short, is a publicly available German-language resource for sentiment analysis, opinion mining etc. It lists positive and negative polarity bearing words weighted within the interval of [-1; 1] plus their part of speech tag, and if applicable, their inflections. The current version of SentiWS contains around 1,650 positive and 1,800 negative words, which sum up to around 16,000 positive and 18,000 negative word forms incl. their inflections, respectively. It not only contains adjectives and adverbs explicitly expressing a sentiment, but also nouns and verbs implicitly containing one.
### Supported Tasks and Leaderboards
Sentiment-Scoring, Pos-Tagging
### Languages
German
## Dataset Structure
### Data Instances
For pos-tagging:
```
{
  "word": "Abbau",
  "pos_tag": 0
}
```
For sentiment-scoring:
```
{
  "word": "Abbau",
  "sentiment-score": -0.058
}
```
### Data Fields
SentiWS is UTF-8-encoded text.
For pos-tagging:
- word: one word as a string.
- pos_tag: the part-of-speech tag of the word as an integer.
For sentiment-scoring:
- word: one word as a string.
- sentiment-score: the sentiment score of the word as a float between -1 and 1.
The POS tags are ["NN", "VVINF", "ADJX", "ADV"] -> ["noun", "verb", "adjective", "adverb"], and positive and negative polarity bearing words are weighted within the interval of [-1, 1].
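A minimal sketch of loading both configurations with the `datasets` library:
```
from datasets import load_dataset

# Each configuration ships a single `train` split of 3,471 entries.
pos_tags = load_dataset("community-datasets/senti_ws", "pos-tagging", split="train")
scores = load_dataset("community-datasets/senti_ws", "sentiment-scoring", split="train")
print(pos_tags[0])  # the word "Abbau" together with its POS-tag id
print(scores[0])    # the word "Abbau" together with its sentiment score (-0.058)
```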
### Data Splits
train: 1,650 negative and 1,818 positive words
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License
### Citation Information
@INPROCEEDINGS{remquahey2010,
title = {SentiWS -- a Publicly Available German-language Resource for Sentiment Analysis},
booktitle = {Proceedings of the 7th International Language Resources and Evaluation (LREC'10)},
author = {Remus, R. and Quasthoff, U. and Heyer, G.},
year = {2010}
}
### Contributions
Thanks to [@harshalmittal4](https://github.com/harshalmittal4) for adding this dataset. |
community-datasets/qanta | community-datasets | 2024-06-26T06:06:55Z | 50,006 | 4 | [
"task_categories:question-answering",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:1904.04792",
"region:us",
"quizbowl"
] | [
"question-answering"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
task_ids: []
paperswithcode_id: quizbowl
pretty_name: Quizbowl
tags:
- quizbowl
dataset_info:
- config_name: mode=first,char_skip=25
features:
- name: id
dtype: string
- name: qanta_id
dtype: int32
- name: proto_id
dtype: string
- name: qdb_id
dtype: int32
- name: dataset
dtype: string
- name: text
dtype: string
- name: full_question
dtype: string
- name: first_sentence
dtype: string
- name: char_idx
dtype: int32
- name: sentence_idx
dtype: int32
- name: tokenizations
sequence:
sequence: int32
length: 2
- name: answer
dtype: string
- name: page
dtype: string
- name: raw_answer
dtype: string
- name: fold
dtype: string
- name: gameplay
dtype: bool
- name: category
dtype: string
- name: subcategory
dtype: string
- name: tournament
dtype: string
- name: difficulty
dtype: string
- name: year
dtype: int32
splits:
- name: guesstrain
num_bytes: 117599150
num_examples: 96221
- name: buzztrain
num_bytes: 19699616
num_examples: 16706
- name: guessdev
num_bytes: 1414822
num_examples: 1055
- name: buzzdev
num_bytes: 1553576
num_examples: 1161
- name: guesstest
num_bytes: 2997063
num_examples: 2151
- name: buzztest
num_bytes: 2653365
num_examples: 1953
- name: adversarial
num_bytes: 1258784
num_examples: 1145
download_size: 90840024
dataset_size: 147176376
- config_name: mode=full,char_skip=25
features:
- name: id
dtype: string
- name: qanta_id
dtype: int32
- name: proto_id
dtype: string
- name: qdb_id
dtype: int32
- name: dataset
dtype: string
- name: text
dtype: string
- name: full_question
dtype: string
- name: first_sentence
dtype: string
- name: char_idx
dtype: int32
- name: sentence_idx
dtype: int32
- name: tokenizations
sequence:
sequence: int32
length: 2
- name: answer
dtype: string
- name: page
dtype: string
- name: raw_answer
dtype: string
- name: fold
dtype: string
- name: gameplay
dtype: bool
- name: category
dtype: string
- name: subcategory
dtype: string
- name: tournament
dtype: string
- name: difficulty
dtype: string
- name: year
dtype: int32
splits:
- name: guesstrain
num_bytes: 168874612
num_examples: 96221
- name: buzztrain
num_bytes: 27989445
num_examples: 16706
- name: guessdev
num_bytes: 2098857
num_examples: 1055
- name: buzzdev
num_bytes: 2301145
num_examples: 1161
- name: guesstest
num_bytes: 4434626
num_examples: 2151
- name: buzztest
num_bytes: 3930150
num_examples: 1953
- name: adversarial
num_bytes: 1799969
num_examples: 1145
download_size: 133005755
dataset_size: 211428804
- config_name: mode=runs,char_skip=25
features:
- name: id
dtype: string
- name: qanta_id
dtype: int32
- name: proto_id
dtype: string
- name: qdb_id
dtype: int32
- name: dataset
dtype: string
- name: text
dtype: string
- name: full_question
dtype: string
- name: first_sentence
dtype: string
- name: char_idx
dtype: int32
- name: sentence_idx
dtype: int32
- name: tokenizations
sequence:
sequence: int32
length: 2
- name: answer
dtype: string
- name: page
dtype: string
- name: raw_answer
dtype: string
- name: fold
dtype: string
- name: gameplay
dtype: bool
- name: category
dtype: string
- name: subcategory
dtype: string
- name: tournament
dtype: string
- name: difficulty
dtype: string
- name: year
dtype: int32
splits:
- name: guesstrain
num_bytes: 3975570298
num_examples: 2641161
- name: buzztrain
num_bytes: 622976884
num_examples: 433552
- name: guessdev
num_bytes: 55281178
num_examples: 33602
- name: buzzdev
num_bytes: 60226416
num_examples: 36803
- name: guesstest
num_bytes: 120192213
num_examples: 70772
- name: buzztest
num_bytes: 104422131
num_examples: 63050
- name: adversarial
num_bytes: 37874827
num_examples: 27986
download_size: 306157359
dataset_size: 4976543947
- config_name: mode=sentences,char_skip=25
features:
- name: id
dtype: string
- name: qanta_id
dtype: int32
- name: proto_id
dtype: string
- name: qdb_id
dtype: int32
- name: dataset
dtype: string
- name: text
dtype: string
- name: full_question
dtype: string
- name: first_sentence
dtype: string
- name: char_idx
dtype: int32
- name: sentence_idx
dtype: int32
- name: tokenizations
sequence:
sequence: int32
length: 2
- name: answer
dtype: string
- name: page
dtype: string
- name: raw_answer
dtype: string
- name: fold
dtype: string
- name: gameplay
dtype: bool
- name: category
dtype: string
- name: subcategory
dtype: string
- name: tournament
dtype: string
- name: difficulty
dtype: string
- name: year
dtype: int32
splits:
- name: guesstrain
num_bytes: 629450237
num_examples: 505321
- name: buzztrain
num_bytes: 98941633
num_examples: 82574
- name: guessdev
num_bytes: 9112676
num_examples: 6818
- name: buzzdev
num_bytes: 9924887
num_examples: 7451
- name: guesstest
num_bytes: 19470155
num_examples: 14069
- name: buzztest
num_bytes: 17011859
num_examples: 12610
- name: adversarial
num_bytes: 6491504
num_examples: 5812
download_size: 150604036
dataset_size: 790402951
configs:
- config_name: mode=first,char_skip=25
data_files:
- split: guesstrain
path: mode=first,char_skip=25/guesstrain-*
- split: buzztrain
path: mode=first,char_skip=25/buzztrain-*
- split: guessdev
path: mode=first,char_skip=25/guessdev-*
- split: buzzdev
path: mode=first,char_skip=25/buzzdev-*
- split: guesstest
path: mode=first,char_skip=25/guesstest-*
- split: buzztest
path: mode=first,char_skip=25/buzztest-*
- split: adversarial
path: mode=first,char_skip=25/adversarial-*
- config_name: mode=full,char_skip=25
data_files:
- split: guesstrain
path: mode=full,char_skip=25/guesstrain-*
- split: buzztrain
path: mode=full,char_skip=25/buzztrain-*
- split: guessdev
path: mode=full,char_skip=25/guessdev-*
- split: buzzdev
path: mode=full,char_skip=25/buzzdev-*
- split: guesstest
path: mode=full,char_skip=25/guesstest-*
- split: buzztest
path: mode=full,char_skip=25/buzztest-*
- split: adversarial
path: mode=full,char_skip=25/adversarial-*
- config_name: mode=runs,char_skip=25
data_files:
- split: guesstrain
path: mode=runs,char_skip=25/guesstrain-*
- split: buzztrain
path: mode=runs,char_skip=25/buzztrain-*
- split: guessdev
path: mode=runs,char_skip=25/guessdev-*
- split: buzzdev
path: mode=runs,char_skip=25/buzzdev-*
- split: guesstest
path: mode=runs,char_skip=25/guesstest-*
- split: buzztest
path: mode=runs,char_skip=25/buzztest-*
- split: adversarial
path: mode=runs,char_skip=25/adversarial-*
- config_name: mode=sentences,char_skip=25
data_files:
- split: guesstrain
path: mode=sentences,char_skip=25/guesstrain-*
- split: buzztrain
path: mode=sentences,char_skip=25/buzztrain-*
- split: guessdev
path: mode=sentences,char_skip=25/guessdev-*
- split: buzzdev
path: mode=sentences,char_skip=25/buzzdev-*
- split: guesstest
path: mode=sentences,char_skip=25/guesstest-*
- split: buzztest
path: mode=sentences,char_skip=25/buzztest-*
- split: adversarial
path: mode=sentences,char_skip=25/adversarial-*
---
# Dataset Card for "qanta"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://www.qanta.org/](http://www.qanta.org/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Quizbowl: The Case for Incremental Question Answering](https://arxiv.org/abs/1904.04792)
- **Point of Contact:** [Jordan Boyd-Graber](mailto:[email protected])
- **Size of downloaded dataset files:** 170.75 MB
- **Size of the generated dataset:** 147.18 MB
- **Total amount of disk used:** 317.93 MB
### Dataset Summary
The Qanta dataset is a question answering dataset based on the academic trivia game Quizbowl.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### mode=first,char_skip=25
- **Size of downloaded dataset files:** 170.75 MB
- **Size of the generated dataset:** 147.18 MB
- **Total amount of disk used:** 317.93 MB
An example of 'guessdev' looks as follows.
```
This example was too long and was cropped:
{
"answer": "Apollo_program",
"category": "History",
"char_idx": -1,
"dataset": "quizdb.org",
"difficulty": "easy_college",
"first_sentence": "As part of this program, William Anders took a photo that Galen Rowell called \"the most influential environmental photograph ever taken.\"",
"fold": "guessdev",
"full_question": "\"As part of this program, William Anders took a photo that Galen Rowell called \\\"the most influential environmental photograph e...",
"gameplay": false,
"id": "127028-first",
"page": "Apollo_program",
"proto_id": "",
"qanta_id": 127028,
"qdb_id": 126689,
"raw_answer": "Apollo program [or Project Apollo; accept Apollo 8; accept Apollo 1; accept Apollo 11; prompt on landing on the moon]",
"sentence_idx": -1,
"subcategory": "American",
"text": "As part of this program, William Anders took a photo that Galen Rowell called \"the most influential environmental photograph ever taken.\"",
"tokenizations": [[0, 137], [138, 281], [282, 412], [413, 592], [593, 675]],
"tournament": "ACF Fall",
"year": 2016
}
```
### Data Fields
The data fields are the same among all splits.
#### mode=first,char_skip=25
- `id`: a `string` feature.
- `qanta_id`: an `int32` feature.
- `proto_id`: a `string` feature.
- `qdb_id`: an `int32` feature.
- `dataset`: a `string` feature.
- `text`: a `string` feature.
- `full_question`: a `string` feature.
- `first_sentence`: a `string` feature.
- `char_idx`: an `int32` feature.
- `sentence_idx`: an `int32` feature.
- `tokenizations`: a sequence of `[start, end]` pairs of `int32` character offsets, one pair per sentence of the question.
- `answer`: a `string` feature.
- `page`: a `string` feature.
- `raw_answer`: a `string` feature.
- `fold`: a `string` feature.
- `gameplay`: a `bool` feature.
- `category`: a `string` feature.
- `subcategory`: a `string` feature.
- `tournament`: a `string` feature.
- `difficulty`: a `string` feature.
- `year`: an `int32` feature.
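A minimal loading sketch with the `datasets` library. The `tokenizations` spans are interpreted here as character offsets into `full_question`, following the example above:
```
from datasets import load_dataset

ds = load_dataset("community-datasets/qanta", "mode=first,char_skip=25", split="guessdev")
ex = ds[0]
# Each entry of `tokenizations` is a [start, end] pair of character offsets,
# interpreted here as sentence boundaries within `full_question`.
sentences = [ex["full_question"][start:end] for start, end in ex["tokenizations"]]
print(ex["page"], "-", len(sentences), "sentences")
```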
### Data Splits
| name |adversarial|buzzdev|buzztrain|guessdev|guesstrain|buzztest|guesstest|
|-----------------------|----------:|------:|--------:|-------:|---------:|-------:|--------:|
|mode=first,char_skip=25| 1145| 1161| 16706| 1055| 96221| 1953| 2151|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{Rodriguez2019QuizbowlTC,
title={Quizbowl: The Case for Incremental Question Answering},
author={Pedro Rodriguez and Shi Feng and Mohit Iyyer and He He and Jordan L. Boyd-Graber},
journal={ArXiv},
year={2019},
volume={abs/1904.04792}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset. |
community-datasets/qa4mre | community-datasets | 2024-06-26T05:31:09Z | 1,204 | 4 | [
"task_categories:multiple-choice",
"task_ids:multiple-choice-qa",
"annotations_creators:other",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:ar",
"language:bg",
"language:de",
"language:en",
"language:es",
"language:it",
"language:ro",
"license:unknown",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"multiple-choice"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- other
language_creators:
- found
language:
- ar
- bg
- de
- en
- es
- it
- ro
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- multiple-choice
task_ids:
- multiple-choice-qa
pretty_name: 'QA4MRE: Question Answering for Machine Reading Evaluation'
dataset_info:
- config_name: 2011.main.DE
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1747066
num_examples: 120
download_size: 128323
dataset_size: 1747066
- config_name: 2011.main.EN
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1569624
num_examples: 120
download_size: 115284
dataset_size: 1569624
- config_name: 2011.main.ES
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1694408
num_examples: 120
download_size: 125016
dataset_size: 1694408
- config_name: 2011.main.IT
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1667136
num_examples: 120
download_size: 125939
dataset_size: 1667136
- config_name: 2011.main.RO
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1740367
num_examples: 120
download_size: 129859
dataset_size: 1740367
- config_name: 2012.alzheimers.EN
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1637936
num_examples: 40
download_size: 96156
dataset_size: 1637936
- config_name: 2012.main.AR
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2710604
num_examples: 160
download_size: 168781
dataset_size: 2710604
- config_name: 2012.main.BG
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 3454163
num_examples: 160
download_size: 203734
dataset_size: 3454163
- config_name: 2012.main.DE
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2087414
num_examples: 160
download_size: 158289
dataset_size: 2087414
- config_name: 2012.main.EN
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1757534
num_examples: 160
download_size: 137169
dataset_size: 1757534
- config_name: 2012.main.ES
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2057350
num_examples: 160
download_size: 152867
dataset_size: 2057350
- config_name: 2012.main.IT
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2071658
num_examples: 160
download_size: 156976
dataset_size: 2071658
- config_name: 2012.main.RO
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2074878
num_examples: 160
download_size: 157140
dataset_size: 2074878
- config_name: 2013.alzheimers.EN
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2614760
num_examples: 40
download_size: 134481
dataset_size: 2614760
- config_name: 2013.entrance_exam.EN
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 180775
num_examples: 46
download_size: 57454
dataset_size: 180775
- config_name: 2013.main.AR
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 4180927
num_examples: 284
download_size: 155246
dataset_size: 4180927
- config_name: 2013.main.BG
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 5403194
num_examples: 284
download_size: 191476
dataset_size: 5403194
- config_name: 2013.main.EN
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2887814
num_examples: 284
download_size: 134883
dataset_size: 2887814
- config_name: 2013.main.ES
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 3449641
num_examples: 284
download_size: 151478
dataset_size: 3449641
- config_name: 2013.main.RO
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 3362997
num_examples: 284
download_size: 154236
dataset_size: 3362997
configs:
- config_name: 2011.main.DE
data_files:
- split: train
path: 2011.main.DE/train-*
- config_name: 2011.main.EN
data_files:
- split: train
path: 2011.main.EN/train-*
- config_name: 2011.main.ES
data_files:
- split: train
path: 2011.main.ES/train-*
- config_name: 2011.main.IT
data_files:
- split: train
path: 2011.main.IT/train-*
- config_name: 2011.main.RO
data_files:
- split: train
path: 2011.main.RO/train-*
- config_name: 2012.alzheimers.EN
data_files:
- split: train
path: 2012.alzheimers.EN/train-*
- config_name: 2012.main.AR
data_files:
- split: train
path: 2012.main.AR/train-*
- config_name: 2012.main.BG
data_files:
- split: train
path: 2012.main.BG/train-*
- config_name: 2012.main.DE
data_files:
- split: train
path: 2012.main.DE/train-*
- config_name: 2012.main.EN
data_files:
- split: train
path: 2012.main.EN/train-*
- config_name: 2012.main.ES
data_files:
- split: train
path: 2012.main.ES/train-*
- config_name: 2012.main.IT
data_files:
- split: train
path: 2012.main.IT/train-*
- config_name: 2012.main.RO
data_files:
- split: train
path: 2012.main.RO/train-*
- config_name: 2013.alzheimers.EN
data_files:
- split: train
path: 2013.alzheimers.EN/train-*
- config_name: 2013.entrance_exam.EN
data_files:
- split: train
path: 2013.entrance_exam.EN/train-*
- config_name: 2013.main.AR
data_files:
- split: train
path: 2013.main.AR/train-*
- config_name: 2013.main.BG
data_files:
- split: train
path: 2013.main.BG/train-*
- config_name: 2013.main.EN
data_files:
- split: train
path: 2013.main.EN/train-*
- config_name: 2013.main.ES
data_files:
- split: train
path: 2013.main.ES/train-*
- config_name: 2013.main.RO
data_files:
- split: train
path: 2013.main.RO/train-*
---
# Dataset Card for "qa4mre"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://nlp.uned.es/clef-qa/repository/qa4mre.php
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [QA4MRE 2011-2013: Overview of Question Answering for Machine Reading Evaluation](https://link.springer.com/chapter/10.1007/978-3-642-40802-1_29)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 5.49 MB
- **Size of the generated dataset:** 48.35 MB
- **Total amount of disk used:** 53.84 MB
### Dataset Summary
The QA4MRE dataset was created for the CLEF 2011/2012/2013 shared tasks to promote research in
question answering and reading comprehension. The dataset contains a supporting
passage and a set of questions corresponding to the passage. Multiple answer options
are provided for each question, of which only one is correct. The
training and test datasets are available for the main track.
Additional gold-standard documents are available for two pilot studies: one on
Alzheimer's data, and the other on entrance-exams data.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### 2011.main.DE
- **Size of downloaded dataset files:** 0.22 MB
- **Size of the generated dataset:** 1.75 MB
- **Total amount of disk used:** 1.97 MB
An example of 'train' looks as follows.
```
```
#### 2011.main.EN
- **Size of downloaded dataset files:** 0.20 MB
- **Size of the generated dataset:** 1.57 MB
- **Total amount of disk used:** 1.77 MB
An example of 'train' looks as follows.
```
```
#### 2011.main.ES
- **Size of downloaded dataset files:** 0.22 MB
- **Size of the generated dataset:** 1.70 MB
- **Total amount of disk used:** 1.91 MB
An example of 'train' looks as follows.
```
```
#### 2011.main.IT
- **Size of downloaded dataset files:** 0.21 MB
- **Size of the generated dataset:** 1.67 MB
- **Total amount of disk used:** 1.88 MB
An example of 'train' looks as follows.
```
```
#### 2011.main.RO
- **Size of downloaded dataset files:** 0.22 MB
- **Size of the generated dataset:** 1.74 MB
- **Total amount of disk used:** 1.96 MB
An example of 'train' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### 2011.main.DE
- `topic_id`: a `string` feature.
- `topic_name`: a `string` feature.
- `test_id`: a `string` feature.
- `document_id`: a `string` feature.
- `document_str`: a `string` feature.
- `question_id`: a `string` feature.
- `question_str`: a `string` feature.
- `answer_options`: a dictionary feature containing:
- `answer_id`: a `string` feature.
- `answer_str`: a `string` feature.
- `correct_answer_id`: a `string` feature.
- `correct_answer_str`: a `string` feature.
#### 2011.main.EN
- `topic_id`: a `string` feature.
- `topic_name`: a `string` feature.
- `test_id`: a `string` feature.
- `document_id`: a `string` feature.
- `document_str`: a `string` feature.
- `question_id`: a `string` feature.
- `question_str`: a `string` feature.
- `answer_options`: a dictionary feature containing:
- `answer_id`: a `string` feature.
- `answer_str`: a `string` feature.
- `correct_answer_id`: a `string` feature.
- `correct_answer_str`: a `string` feature.
#### 2011.main.ES
- `topic_id`: a `string` feature.
- `topic_name`: a `string` feature.
- `test_id`: a `string` feature.
- `document_id`: a `string` feature.
- `document_str`: a `string` feature.
- `question_id`: a `string` feature.
- `question_str`: a `string` feature.
- `answer_options`: a dictionary feature containing:
- `answer_id`: a `string` feature.
- `answer_str`: a `string` feature.
- `correct_answer_id`: a `string` feature.
- `correct_answer_str`: a `string` feature.
#### 2011.main.IT
- `topic_id`: a `string` feature.
- `topic_name`: a `string` feature.
- `test_id`: a `string` feature.
- `document_id`: a `string` feature.
- `document_str`: a `string` feature.
- `question_id`: a `string` feature.
- `question_str`: a `string` feature.
- `answer_options`: a dictionary feature containing:
- `answer_id`: a `string` feature.
- `answer_str`: a `string` feature.
- `correct_answer_id`: a `string` feature.
- `correct_answer_str`: a `string` feature.
#### 2011.main.RO
- `topic_id`: a `string` feature.
- `topic_name`: a `string` feature.
- `test_id`: a `string` feature.
- `document_id`: a `string` feature.
- `document_str`: a `string` feature.
- `question_id`: a `string` feature.
- `question_str`: a `string` feature.
- `answer_options`: a dictionary feature containing:
- `answer_id`: a `string` feature.
- `answer_str`: a `string` feature.
- `correct_answer_id`: a `string` feature.
- `correct_answer_str`: a `string` feature.
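A minimal sketch of loading one configuration and printing a question with its answer options (the `answer_options` sequence is exposed by `datasets` as a dict of parallel lists; the correct option is marked with `*`):
```
from datasets import load_dataset

ds = load_dataset("community-datasets/qa4mre", "2012.main.EN", split="train")
ex = ds[0]
print(ex["question_str"])
options = ex["answer_options"]
for answer_id, answer_str in zip(options["answer_id"], options["answer_str"]):
    marker = "*" if answer_id == ex["correct_answer_id"] else " "
    print(marker, answer_id, answer_str)
```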
### Data Splits
| name |train|
|------------|----:|
|2011.main.DE| 120|
|2011.main.EN| 120|
|2011.main.ES| 120|
|2011.main.IT| 120|
|2011.main.RO| 120|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{10.1007/978-3-642-40802-1_29,
author="Pe{\~{n}}as, Anselmo
and Hovy, Eduard
and Forner, Pamela
and Rodrigo, {\'A}lvaro
and Sutcliffe, Richard
and Morante, Roser",
editor="Forner, Pamela
and M{\"u}ller, Henning
and Paredes, Roberto
and Rosso, Paolo
and Stein, Benno",
title="QA4MRE 2011-2013: Overview of Question Answering for Machine Reading Evaluation",
booktitle="Information Access Evaluation. Multilinguality, Multimodality, and Visualization",
year="2013",
publisher="Springer Berlin Heidelberg",
address="Berlin, Heidelberg",
pages="303--320",
isbn="978-3-642-40802-1"
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@albertvillanova](https://github.com/albertvillanova), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
community-datasets/ohsumed | community-datasets | 2024-06-26T05:23:19Z | 158 | 5 | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-nc-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
pretty_name: Ohsumed
dataset_info:
config_name: ohsumed
features:
- name: seq_id
dtype: int64
- name: medline_ui
dtype: int64
- name: mesh_terms
dtype: string
- name: title
dtype: string
- name: publication_type
dtype: string
- name: abstract
dtype: string
- name: author
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 76530191
num_examples: 54709
- name: test
num_bytes: 435681911
num_examples: 293855
download_size: 213196532
dataset_size: 512212102
configs:
- config_name: ohsumed
data_files:
- split: train
path: ohsumed/train-*
- split: test
path: ohsumed/test-*
default: true
---
# Dataset Card for ohsumed
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://davis.wpi.edu/xmdv/datasets/ohsumed.html
- **Repository:** https://trec.nist.gov/data/filtering/t9.filtering.tar.gz
- **Paper:** https://link.springer.com/chapter/10.1007/978-1-4471-2099-5_20
- **Leaderboard:**
- **Point of Contact:** [William Hersh](mailto:[email protected]) [Aakash Gupta](mailto:[email protected])
### Dataset Summary
The OHSUMED test collection is a set of 348,566 references from
MEDLINE, the on-line medical information database, consisting of
titles and/or abstracts from 270 medical journals over a five-year
period (1987-1991). The available fields are title, abstract, MeSH
indexing terms, author, source, and publication type. The National
Library of Medicine has agreed to make the MEDLINE references in the
test database available for experimentation, restricted to the
following conditions:
1. The data will not be used in any non-experimental clinical,
library, or other setting.
2. Any human users of the data will explicitly be told that the data
is incomplete and out-of-date.
Please check this [readme](https://trec.nist.gov/data/filtering/README.t9.filtering) for more details
### Supported Tasks and Leaderboards
[Text Classification](https://paperswithcode.com/sota/text-classification-on-ohsumed)
### Languages
The text is primarily in English. The BCP 47 code is `en`
## Dataset Structure
### Data Instances
```
{'seq_id': 7770,
'medline_ui': 87120420,
'mesh_terms': 'Adult; Aged; Aneurysm/CO; Arteriovenous Fistula/*TH; Carotid Arteries; Case Report; Female; Human; Jugular Veins; Male; Methods; Middle Age; Neck/*BS; Vertebral Artery.',
'title': 'Arteriovenous fistulas of the large vessels of the neck: nonsurgical percutaneous occlusion.',
'publication_type': 'JOURNAL ARTICLE.',
'abstract': 'We describe the nonsurgical treatment of arteriovenous fistulas of the large vessels in the neck using three different means of endovascular occlusion of these large lesions, which are surgically difficult to approach and treat.',
'author': 'Vitek JJ; Keller FS.',
'source': 'South Med J 8705; 80(2):196-200'}
```
### Data Fields
Here are the field definitions:
- seq_id: sequential identifier
(important note: documents should be processed in this order)
- medline_ui: MEDLINE identifier (UI)
(<DOCNO> used for relevance judgements)
- mesh_terms: Human-assigned MeSH terms (MH)
- title: Title (TI)
- publication_type : Publication type (PT)
- abstract: Abstract (AB)
- author: Author (AU)
- source: Source (SO)
Note: some abstracts are truncated at 250 words and some references
have no abstracts at all (titles only). We do not have access to the
full text of the documents.
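A minimal loading sketch (assuming the dataset is hosted on the Hugging Face Hub under the `ohsumed` identifier used in the configuration above; the exact namespace may differ):
```python
from datasets import load_dataset

# Load the default config and inspect one record using the fields documented above.
ds = load_dataset("ohsumed")

example = ds["train"][0]
print(example["title"])
print(example["mesh_terms"])  # human-assigned MeSH terms (MH field)
```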
### Data Splits
The data is split into train and test sets: the training split contains abstracts from 1987, while the test split contains abstracts from 1988–1991.
Total number of examples:
- Train: 54,710
- Test: 348,567
## Dataset Creation
### Curation Rationale
The OHSUMED document collection was obtained by William Hersh
([email protected]) and colleagues for the experiments described in the
papers below. [Check citation](#citation-information)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
The test collection was built as part of a study assessing the use of
MEDLINE by physicians in a clinical setting (Hersh and Hickam; see the
citation information below). Novice physicians using MEDLINE generated
106 queries, only a subset of which were used in the TREC-9 Filtering
Track. Before they searched, they were asked to provide a statement of
information about their patient as well as their information need.
The data was collected by William Hersh and colleagues.
### Annotations
#### Annotation process
The existing OHSUMED topics describe actual information needs, but the
relevance judgements probably do not have the same coverage provided
by the TREC pooling process. The MeSH terms do not directly represent
information needs; rather, they are controlled indexing terms. However,
the assessments should be more or less complete, and there are a lot of
them, so this provides an unusual opportunity to work with a very
large topic sample.
The topic statements are provided in the standard TREC format.
#### Who are the annotators?
Each query was replicated by four searchers, two physicians
experienced in searching and two medical librarians. The results were
assessed for relevance by a different group of physicians, using a
three point scale: definitely, possibly, or not relevant. The list of
documents explicitly judged to be not relevant is not provided here.
Over 10% of the query-document pairs were judged in duplicate to
assess inter-observer reliability. For evaluation, all documents
judged here as either possibly or definitely relevant were
considered relevant. TREC-9 systems were allowed to distinguish
between these two categories during the learning process if desired.
### Personal and Sensitive Information
No PII data is present in the train, test or query files.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
[Aakash Gupta](mailto:[email protected])
*Th!nkEvolve Consulting* and Researcher at CoronaWhy
### Licensing Information
CC BY-NC 4.0
### Citation Information
Hersh WR, Buckley C, Leone TJ, Hickam DH, OHSUMED: An interactive
retrieval evaluation and new large test collection for research,
Proceedings of the 17th Annual ACM SIGIR Conference, 1994, 192-201.
Hersh WR, Hickam DH, Use of a multi-application computer workstation
in a clinical setting, Bulletin of the Medical Library Association,
1994, 82: 382-389.
### Contributions
Thanks to [@skyprince999](https://github.com/skyprince999) for adding this dataset. |
community-datasets/yahoo_answers_topics | community-datasets | 2024-06-24T04:59:52Z | 1,669 | 54 | [
"task_categories:text-classification",
"task_ids:topic-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended|other-yahoo-answers-corpus",
"language:en",
"license:unknown",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- extended|other-yahoo-answers-corpus
task_categories:
- text-classification
task_ids:
- topic-classification
pretty_name: YahooAnswersTopics
dataset_info:
config_name: yahoo_answers_topics
features:
- name: id
dtype: int32
- name: topic
dtype:
class_label:
names:
'0': Society & Culture
'1': Science & Mathematics
'2': Health
'3': Education & Reference
'4': Computers & Internet
'5': Sports
'6': Business & Finance
'7': Entertainment & Music
'8': Family & Relationships
'9': Politics & Government
- name: question_title
dtype: string
- name: question_content
dtype: string
- name: best_answer
dtype: string
splits:
- name: train
num_bytes: 760285695
num_examples: 1400000
- name: test
num_bytes: 32653862
num_examples: 60000
download_size: 533429663
dataset_size: 792939557
configs:
- config_name: yahoo_answers_topics
data_files:
- split: train
path: yahoo_answers_topics/train-*
- split: test
path: yahoo_answers_topics/test-*
default: true
train-eval-index:
- config: yahoo_answers_topics
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
question_content: text
topic: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for "Yahoo Answers Topics"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/LC-John/Yahoo-Answers-Topic-Classification-Dataset
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Yahoo Answers Topics is an English topic-classification dataset built from the Yahoo! Answers corpus. Each example pairs a question (title and content) and its best answer with one of ten topic labels: Society & Culture, Science & Mathematics, Health, Education & Reference, Computers & Internet, Sports, Business & Finance, Entertainment & Music, Family & Relationships, and Politics & Government. The dataset contains 1,400,000 training examples and 60,000 test examples.
### Supported Tasks and Leaderboards
- `topic-classification`: predict the `topic` label from the question and answer text. The card's `train-eval-index` maps `question_content` to the model input and `topic` to the target, and evaluates with accuracy, F1, precision, and recall.
### Languages
The text is in English (`en`).
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `id`: an `int32` example identifier.
- `topic`: the class label, an integer from 0 to 9 mapping to the ten topics listed in the dataset summary.
- `question_title`: the title of the question.
- `question_content`: the body of the question.
- `best_answer`: the best answer selected for the question.
### Data Splits
The dataset has a train split with 1,400,000 examples and a test split with 60,000 examples.
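A minimal loading sketch based on the metadata above (the single config is `yahoo_answers_topics`):
```python
from datasets import load_dataset

ds = load_dataset("community-datasets/yahoo_answers_topics", "yahoo_answers_topics")

example = ds["train"][0]
# Map the integer topic label back to its human-readable name.
topic_name = ds["train"].features["topic"].int2str(example["topic"])
print(topic_name, "-", example["question_title"])
```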
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. |
hf-internal-testing/librispeech_asr_dummy | hf-internal-testing | 2024-06-19T14:41:44Z | 70,751 | 4 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2022-03-02T23:29:22Z | null | ---
dataset_info:
config_name: clean
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: validation
num_bytes: 9677021.0
num_examples: 73
download_size: 9192059
dataset_size: 9677021.0
configs:
- config_name: clean
data_files:
- split: validation
path: clean/validation-*
---
|
BAAI/TACO | BAAI | 2024-06-19T09:16:49Z | 2,808 | 109 | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"language:code",
"license:apache-2.0",
"size_categories:10K<n<100K",
"arxiv:2312.14852",
"region:us",
"code"
] | [
"text-generation"
] | 2023-12-20T11:27:47Z | null | ---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license: apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
paperswithcode_id: taco-topics-in-algorithmic-code-generation
pretty_name: TACO
tags:
- code
dataset_info:
config_name: ALL
features:
- name: question
dtype: string
- name: solutions
dtype: string
- name: starter_code
dtype: string
- name: input_output
dtype: string
- name: difficulty
dtype: string
- name: raw_tags
dtype: string
- name: name
dtype: string
- name: source
dtype: string
- name: tags
dtype: string
- name: skill_types
dtype: string
- name: url
dtype: string
- name: Expected Auxiliary Space
dtype: string
- name: time_limit
dtype: string
- name: date
dtype: string
- name: picture_num
dtype: string
- name: memory_limit
dtype: string
- name: Expected Time Complexity
dtype: string
splits:
- name: train
num_bytes: 4239311973
num_examples: 25443
- name: test
num_bytes: 481480755
num_examples: 1000
download_size: 2419844942
dataset_size: 4720792728
configs:
- config_name: ALL
data_files:
- split: train
path: ALL/train-*
- split: test
path: ALL/test-*
---
# TACO Dataset
<img src="https://cdn-uploads.huggingface.co/production/uploads/6335113375bed9932474315e/rMxdXcC56S3FEh37oRa2s.png" width="200" height="200">
[TACO](https://github.com/FlagOpen/TACO) is a benchmark for code generation with 26443 problems. It can be used to evaluate the ability of language models to generate code from natural language specifications.
## Key Update:
We removed and modified some test cases in the test set. Please update to the newest version.
## Dataset Description
- **Repository:** https://github.com/FlagOpen/TACO/
- **Paper:** [TACO: Topics in Algorithmic COde generation dataset](https://arxiv.org/abs/2312.14852)
- **Leaderboard:** [Code Generation on CodeContests](https://paperswithcode.com/sota/code-generation-on-taco-code)
- **Point of Contact:** [Bo-Wen Zhang](mailto:[email protected])
## Languages
The dataset contains questions in English and code solutions in Python.
## Dataset Structure
```python
from datasets import load_dataset
load_dataset("BAAI/TACO")
DatasetDict({
train: Dataset({
features: ['question', 'solutions', 'starter_code', 'input_output', 'difficulty', 'raw_tags', 'name', 'source', 'tags', 'skill_types', 'url', 'Expected Auxiliary Space', 'time_limit', 'date', 'picture_num', 'memory_limit', 'Expected Time Complexity'],
num_rows: 25443
})
test: Dataset({
features: ['question', 'solutions', 'starter_code', 'input_output', 'difficulty', 'raw_tags', 'name', 'source', 'tags', 'skill_types', 'url', 'Expected Auxiliary Space', 'time_limit', 'date', 'picture_num', 'memory_limit', 'Expected Time Complexity'],
num_rows: 1000
})
})
```
### How to use it
You can load and iterate through the dataset with the following code for the train split:
```python
from datasets import load_dataset
import json
ds = load_dataset("BAAI/TACO", split="train")
sample = next(iter(ds))
# non-empty solutions and input_output features can be parsed from text format this way:
sample["solutions"] = json.loads(sample["solutions"])
sample["input_output"] = json.loads(sample["input_output"])
sample["raw_tags"] = eval(sample["raw_tags"])
sample["tags"] = eval(sample["tags"])
sample["skill_types"] = eval(sample["skill_types"])
print(sample)
#OUTPUT:
{
"question": "You have a deck of $n$ cards, and you'd like to reorder it to a new one.\n\nEach card has a value between $1$ and $n$ equal to $p_i$. ...",
"solutions": [
"import heapq\nfrom math import sqrt\nimport operator\nimport sys\ninf_var = 0\nif inf_var == 1:\n\tinf = open('input.txt', 'r')\nelse:\n\tinf = sys.stdin\n ...",
"t = int(input())\nfor _ in range(t):\n\tn = int(input())\n\tp = list(map(int, input().split()))\n\tans = []\n\tp1 = [-1] * (n + 1)\n\tfor i in range(n):\n\t\tp1[p[i]] = i\n\ti = n\n\twhile i:\n\t\twhile i > 0 and p1[i] == -1:\n\t\t\ti -= 1\n\t\telse:\n\t\t\tif i:\n\t\t\t\tk = 0\n\t\t\t\tfor j in range(p1[i], n):\n\t\t\t\t\tans.append(p[j])\n\t\t\t\t\tp1[p[j]] = -1\n\t\t\t\t\tk += 1\n\t\t\t\tn -= k\n\t\t\t\ti -= 1\n\t\t\telse:\n\t\t\t\tbreak\n\tprint(*ans)\n",
"import sys\n\ndef get_ints():\n\treturn map(int, sys.stdin.readline().strip().split())\n\ndef get_list():\n\treturn list(map(int, sys.stdin.readline().strip().split()))\n\ndef get_list_string():\n\treturn list(map(str, sys.stdin.readline().strip().split()))\n\ndef get_string():\n\treturn sys.stdin.readline().strip()\n\ndef get_int():\n\treturn int(sys.stdin.readline().strip())\n\ndef get_print_int(x):\n\tsys.stdout.write(str(x) + '\\n')\n\ndef get_print(x):\n\tsys.stdout.write(x + '\\n')\n\ndef get_print_int_same(x):\n\tsys.stdout.write(str(x) + ' ')\n\ndef get_print_same(x):\n\tsys.stdout.write(x + ' ')\nfrom sys import maxsize\n\ndef solve():\n\tfor _ in range(get_int()):\n\t\tn = get_int()\n\t\tarr = get_list()\n\t\ti = n - 1\n\t\tj = n - 1\n\t\ttemp = sorted(arr)\n\t\tvis = [False] * n\n\t\tans = []\n\t\twhile j >= 0:\n\t\t\tt = j\n\t\t\ttt = []\n\t\t\twhile t >= 0 and arr[t] != temp[i]:\n\t\t\t\tvis[arr[t] - 1] = True\n\t\t\t\ttt.append(arr[t])\n\t\t\t\tt -= 1\n\t\t\tvis[arr[t] - 1] = True\n\t\t\ttt.append(arr[t])\n\t\t\ttt = tt[::-1]\n\t\t\tfor k in tt:\n\t\t\t\tans.append(k)\n\t\t\tj = t - 1\n\t\t\twhile i >= 0 and vis[i]:\n\t\t\t\ti -= 1\n\t\tget_print(' '.join(map(str, ans)))\nsolve()\n",
...
],
"starter_code": "",
"input_output": {
"inputs": [
"4\n4\n1 2 3 4\n5\n1 5 2 4 3\n6\n4 2 5 3 6 1\n1\n1\n",
"4\n4\n2 1 3 4\n5\n1 5 2 4 3\n6\n4 2 5 3 6 1\n1\n1\n",
"4\n4\n2 1 3 4\n5\n1 5 2 4 3\n6\n2 4 5 3 6 1\n1\n1\n",
"4\n4\n1 2 3 4\n5\n1 5 2 4 3\n6\n4 2 5 3 6 1\n1\n1\n"
],
"outputs": [
"4 3 2 1\n5 2 4 3 1\n6 1 5 3 4 2\n1\n",
"4 3 2 1\n5 2 4 3 1\n6 1 5 3 4 2\n1\n",
"4 3 2 1\n5 2 4 3 1\n6 1 5 3 4 2\n1\n",
"\n4 3 2 1\n5 2 4 3 1\n6 1 5 3 4 2\n1\n"
]
},
"difficulty": "EASY",
"raw_tags": [
"data structures",
"greedy",
"math"
],
"name": null,
"source": "codeforces",
"tags": [
"Data structures",
"Mathematics",
"Greedy algorithms"
],
"skill_types": [
"Data structures",
"Greedy algorithms"
],
"url": "https://codeforces.com/problemset/problem/1492/B",
"Expected Auxiliary Space": null,
"time_limit": "1 second",
"date": "2021-02-23",
"picture_num": "0",
"memory_limit": "512 megabytes",
"Expected Time Complexity": null
}
```
Each sample consists of a programming problem formulation in English, some ground-truth Python solutions, and test cases defined by their inputs and outputs (and a function name, if provided), along with metadata about the problem: its difficulty level (difficulty), topics (raw_tags), algorithms (tags), required programming skill types (skill_types), and its source.
If a sample has a non-empty `input_output` feature, you can read it as a dictionary with keys `inputs` and `outputs` (and `fn_name` if it exists), and similarly you can parse the solutions into a list of solutions, as shown in the code above.
You can also filter the dataset by difficulty level (EASY, MEDIUM, MEDIUM_HARD, HARD and VERY_HARD) or by programming skill type (Amortized analysis, Bit manipulation, Complete search, Data structures, Dynamic programming, Greedy algorithms, Range queries, Sorting). Just pass the difficulties or skills as a list. E.g., if you want the most challenging problems, you need to select the VERY_HARD level:
```python
ds = load_dataset("BAAI/TACO", split="train", difficulties=["VERY_HARD"])
print(next(iter(ds))["question"])
```
```
#OUTPUT:
"""Let S(n) denote the number that represents the digits of n in sorted order. For example, S(1) = 1, S(5) = 5, S(50394) = 3459, S(353535) = 333555.
Given a number X, compute <image> modulo 109 + 7.
Input
The first line of input will contain the integer X (1 ≤ X ≤ 10700).
Output
Print a single integer, the answer to the question.
Examples
Input
21
Output
195
Input
345342
Output
390548434
Note
The first few values of S are 1, 2, 3, 4, 5, 6, 7, 8, 9, 1, 11, 12, 13, 14, 15, 16, 17, 18, 19, 2, 12. The sum of these values is 195.
```
Or if you want the problems involving Range queries and Sorting, you need to select the skills Range queries and Sorting:
```python
ds = load_dataset("BAAI/TACO", split="train", skills=["Range queries", "Sorting"])
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|question|string|problem description|
|solutions|string|some python solutions|
|input_output|string|JSON string with "inputs" and "outputs" of the test cases, might also include "fn_name", the name of the function|
|difficulty|string|difficulty level of the problem|
|picture_num|string|the number of pictures in the problem|
|source|string|the source of the problem|
|url|string|url of the source of the problem|
|date|string|the date of the problem|
|starter_code|string|starter code to include in prompts|
|time_limit|string|the time consumption limit to solve the problem|
|memory_limit|string|the memory consumption limit to solve the problem|
|Expected Auxiliary Space|string|the extra auxiliary space expected to solve the problem|
|Expected Time Complexity|string|the time complexity expected to solve the problem|
|raw_tags|string|the topics of the programming task|
|tags|string|the manually annotated algorithms needed to solve the problem|
|skill_types|string|the mapped programming skill types to solve the problem|
### Data Splits
The dataset contains a train split with 25,443 samples and a test split with 1,000 samples.
### Dataset Statistics
* 26443 coding problems
* 1.55M verified solutions
* for the test split, the average number of test cases is 202.3
* all problems in the test split have ground-truth solutions
## Dataset Creation
To create the TACO dataset, the authors manually curated problems from open-access sites where programmers share problems with each other, including Aizu, AtCoder, CodeChef, Codeforces, CodeWars, GeeksforGeeks, HackerEarth, HackerRank, Kattis and LeetCode. For more details please refer to the original paper.
## License
The TACO dataset that is authored by BAAI, Shandong Normal University and Peking University is released under an [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0). However, the data also includes content licensed under other permissive licenses such as MIT License, or web-crawled data which is used under the terms of the CC BY 4.0 license ([Creative Commons Attribution 4.0 International license](https://creativecommons.org/licenses/by/4.0/legalcode)).
We gratefully acknowledge the contributions of the following:
* some AtCoder, Codeforces, CodeWars, Kattis, LeetCode material curated from APPS dataset (https://github.com/hendrycks/apps)
* some Aizu, AtCoder, CodeChef, Codeforces material curated from CodeContest dataset (https://github.com/google-deepmind/code_contests)
* Codeforces materials are sourced from http://codeforces.com.
* CodeChef materials are sourced from https://www.codechef.com.
* GeeksforGeeks materials are sourced from https://www.geeksforgeeks.org
* HackerEarth materials are curated from:
[Description2Code Dataset](https://github.com/ethancaballero/description2code),
licensed under the
[MIT open source license](https://opensource.org/licenses/MIT), copyright
not specified.
* HackerRank materials are sourced from https://www.hackerrank.com. We do not know the legal rights or data license status of the HackerRank materials. Please contact us if there is a data licensing concern.
## Citation Information
If you find our data, or code helpful, please cite [the original paper](https://arxiv.org/abs/2312.14852):
```
@article{li2023taco,
title={TACO: Topics in Algorithmic COde generation dataset},
author={Rongao Li and Jie Fu and Bo-Wen Zhang and Tao Huang and Zhihong Sun and Chen Lyu and Guang Liu and Zhi Jin and Ge Li},
journal={arXiv preprint arXiv:2312.14852},
year={2023}
}
``` |
toxigen/toxigen-data | toxigen | 2024-06-17T16:35:29Z | 4,574 | 55 | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"source_datasets:original",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2203.09509",
"region:us"
] | [
"text-classification"
] | 2022-05-01T15:49:02Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
pretty_name: ToxiGen
dataset_info:
- config_name: annotated
features:
- name: text
dtype: string
- name: target_group
dtype: string
- name: factual?
dtype: string
- name: ingroup_effect
dtype: string
- name: lewd
dtype: string
- name: framing
dtype: string
- name: predicted_group
dtype: string
- name: stereotyping
dtype: string
- name: intent
dtype: float64
- name: toxicity_ai
dtype: float64
- name: toxicity_human
dtype: float64
- name: predicted_author
dtype: string
- name: actual_method
dtype: string
splits:
- name: test
num_bytes: 364518
num_examples: 940
- name: train
num_bytes: 3238381
num_examples: 8960
download_size: 768996
dataset_size: 3602899
- config_name: annotations
features:
- name: Input.prompt
dtype: string
- name: Input.text
dtype: string
- name: Input.time
dtype: string
- name: Input.generation_method
dtype: string
- name: Input.prompt_label
dtype: string
- name: Input.target_group
dtype: string
- name: Input.binary_prompt_label
dtype: int64
- name: Answer.annotatorAge
dtype: string
- name: Answer.annotatorGender
dtype: string
- name: Answer.annotatorMinority
dtype: string
- name: Answer.annotatorPolitics.1
dtype: bool
- name: Answer.annotatorPolitics.2
dtype: bool
- name: Answer.annotatorPolitics.3
dtype: bool
- name: Answer.annotatorPolitics.4
dtype: bool
- name: Answer.annotatorPolitics.5
dtype: bool
- name: Answer.annotatorRace
dtype: string
- name: Answer.factSelect
dtype: string
- name: Answer.framingQ
dtype: string
- name: Answer.inGroup.on
dtype: bool
- name: Answer.ingroup.1
dtype: bool
- name: Answer.ingroup.2
dtype: bool
- name: Answer.ingroup.3
dtype: bool
- name: Answer.intent.1
dtype: bool
- name: Answer.intent.2
dtype: bool
- name: Answer.intent.3
dtype: bool
- name: Answer.intent.4
dtype: bool
- name: Answer.intent.5
dtype: bool
- name: Answer.lewd.1
dtype: bool
- name: Answer.lewd.2
dtype: bool
- name: Answer.lewd.3
dtype: bool
- name: Answer.refTarget
dtype: string
- name: Answer.stateFrame
dtype: string
- name: Answer.stateGroup
dtype: string
- name: Answer.stereo.1
dtype: bool
- name: Answer.stereo.2
dtype: bool
- name: Answer.stereo.3
dtype: bool
- name: Answer.toAI.1
dtype: bool
- name: Answer.toAI.2
dtype: bool
- name: Answer.toAI.3
dtype: bool
- name: Answer.toAI.4
dtype: bool
- name: Answer.toAI.5
dtype: bool
- name: Answer.toPER.1
dtype: bool
- name: Answer.toPER.2
dtype: bool
- name: Answer.toPER.3
dtype: bool
- name: Answer.toPER.4
dtype: bool
- name: Answer.toPER.5
dtype: bool
- name: Answer.writer.1
dtype: bool
- name: Answer.writer.2
dtype: bool
- name: HashedWorkerId
dtype: int64
splits:
- name: train
num_bytes: 21933185
num_examples: 27450
download_size: 3350653
dataset_size: 21933185
- config_name: prompts
features:
- name: text
dtype: string
splits:
- name: hate_trans_1k
num_bytes: 585554
num_examples: 1000
- name: neutral_black_1k
num_bytes: 857769
num_examples: 1000
- name: hate_native_american_1k
num_bytes: 480000
num_examples: 1000
- name: neutral_immigrant_1k
num_bytes: 342243
num_examples: 1000
- name: hate_middle_east_1k
num_bytes: 426551
num_examples: 1000
- name: neutral_lgbtq_1k
num_bytes: 914319
num_examples: 1000
- name: neutral_women_1k
num_bytes: 394963
num_examples: 1000
- name: neutral_chinese_1k
num_bytes: 412062
num_examples: 1000
- name: hate_latino_1k
num_bytes: 708000
num_examples: 1000
- name: hate_bisexual_1k
num_bytes: 447794
num_examples: 1000
- name: hate_mexican_1k
num_bytes: 675444
num_examples: 1000
- name: hate_asian_1k
num_bytes: 503093
num_examples: 1000
- name: neutral_mental_disability_1k
num_bytes: 556905
num_examples: 1000
- name: neutral_mexican_1k
num_bytes: 483603
num_examples: 1000
- name: hate_mental_disability_1k
num_bytes: 480620
num_examples: 1000
- name: neutral_bisexual_1k
num_bytes: 915612
num_examples: 1000
- name: neutral_latino_1k
num_bytes: 470000
num_examples: 1000
- name: hate_chinese_1k
num_bytes: 384934
num_examples: 1000
- name: neutral_jewish_1k
num_bytes: 649674
num_examples: 1000
- name: hate_muslim_1k
num_bytes: 425760
num_examples: 1000
- name: neutral_asian_1k
num_bytes: 615895
num_examples: 1000
- name: hate_physical_disability_1k
num_bytes: 413643
num_examples: 1000
- name: hate_jewish_1k
num_bytes: 573538
num_examples: 1000
- name: neutral_muslim_1k
num_bytes: 491659
num_examples: 1000
- name: hate_immigrant_1k
num_bytes: 285309
num_examples: 1000
- name: hate_black_1k
num_bytes: 745295
num_examples: 1000
- name: hate_lgbtq_1k
num_bytes: 577075
num_examples: 1000
- name: hate_women_1k
num_bytes: 389583
num_examples: 1000
- name: neutral_middle_east_1k
num_bytes: 415319
num_examples: 1000
- name: neutral_native_american_1k
num_bytes: 586993
num_examples: 1000
- name: neutral_physical_disability_1k
num_bytes: 458497
num_examples: 1000
download_size: 1698170
dataset_size: 16667706
- config_name: train
features:
- name: prompt
dtype: string
- name: generation
dtype: string
- name: generation_method
dtype: string
- name: group
dtype: string
- name: prompt_label
dtype: int64
- name: roberta_prediction
dtype: float64
splits:
- name: train
num_bytes: 169400442
num_examples: 250951
download_size: 18784380
dataset_size: 169400442
configs:
- config_name: annotated
default: true
data_files:
- split: test
path: annotated/test-*
- split: train
path: annotated/train-*
- config_name: annotations
data_files:
- split: train
path: annotations/train-*
- config_name: prompts
data_files:
- split: hate_trans_1k
path: prompts/hate_trans_1k-*
- split: neutral_black_1k
path: prompts/neutral_black_1k-*
- split: hate_native_american_1k
path: prompts/hate_native_american_1k-*
- split: neutral_immigrant_1k
path: prompts/neutral_immigrant_1k-*
- split: hate_middle_east_1k
path: prompts/hate_middle_east_1k-*
- split: neutral_lgbtq_1k
path: prompts/neutral_lgbtq_1k-*
- split: neutral_women_1k
path: prompts/neutral_women_1k-*
- split: neutral_chinese_1k
path: prompts/neutral_chinese_1k-*
- split: hate_latino_1k
path: prompts/hate_latino_1k-*
- split: hate_bisexual_1k
path: prompts/hate_bisexual_1k-*
- split: hate_mexican_1k
path: prompts/hate_mexican_1k-*
- split: hate_asian_1k
path: prompts/hate_asian_1k-*
- split: neutral_mental_disability_1k
path: prompts/neutral_mental_disability_1k-*
- split: neutral_mexican_1k
path: prompts/neutral_mexican_1k-*
- split: hate_mental_disability_1k
path: prompts/hate_mental_disability_1k-*
- split: neutral_bisexual_1k
path: prompts/neutral_bisexual_1k-*
- split: neutral_latino_1k
path: prompts/neutral_latino_1k-*
- split: hate_chinese_1k
path: prompts/hate_chinese_1k-*
- split: neutral_jewish_1k
path: prompts/neutral_jewish_1k-*
- split: hate_muslim_1k
path: prompts/hate_muslim_1k-*
- split: neutral_asian_1k
path: prompts/neutral_asian_1k-*
- split: hate_physical_disability_1k
path: prompts/hate_physical_disability_1k-*
- split: hate_jewish_1k
path: prompts/hate_jewish_1k-*
- split: neutral_muslim_1k
path: prompts/neutral_muslim_1k-*
- split: hate_immigrant_1k
path: prompts/hate_immigrant_1k-*
- split: hate_black_1k
path: prompts/hate_black_1k-*
- split: hate_lgbtq_1k
path: prompts/hate_lgbtq_1k-*
- split: hate_women_1k
path: prompts/hate_women_1k-*
- split: neutral_middle_east_1k
path: prompts/neutral_middle_east_1k-*
- split: neutral_native_american_1k
path: prompts/neutral_native_american_1k-*
- split: neutral_physical_disability_1k
path: prompts/neutral_physical_disability_1k-*
- config_name: train
data_files:
- split: train
path: train/train-*
---
# Dataset Card for ToxiGen
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-instances)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
## Sign up for Data Access
To access ToxiGen, first fill out [this form](https://forms.office.com/r/r6VXX8f8vh).
## Dataset Description
- **Repository:** https://github.com/microsoft/toxigen
- **Paper:** https://arxiv.org/abs/2203.09509
- **Point of Contact #1:** [Tom Hartvigsen](mailto:[email protected])
- **Point of Contact #2:** [Saadia Gabriel](mailto:[email protected])
### Dataset Summary
This dataset is for implicit hate speech detection. All instances were generated using GPT-3 and the methods described in [our paper](https://arxiv.org/abs/2203.09509).
### Languages
All text is written in English.
## Dataset Structure
### Data Fields
We release TOXIGEN as a dataframe with the following fields:
- **prompt** is the prompt used for **generation**.
- **generation** is the TOXIGEN generated text.
- **generation_method** denotes whether or not ALICE was used to generate the corresponding generation. If this value is ALICE, then ALICE was used; if it is TopK, then ALICE was not used.
- **prompt_label** is the binary value indicating whether or not the prompt is toxic (1 is toxic, 0 is benign).
- **group** indicates the target group of the prompt.
- **roberta_prediction** is the probability predicted by our corresponding RoBERTa model for each instance.
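A minimal loading sketch for the machine-generated `train` config (access may require completing the sign-up form above; the `token=True` argument is only needed if access is gated for your account):
```python
from datasets import load_dataset

# Load the machine-generated statements with the fields described above.
ds = load_dataset("toxigen/toxigen-data", name="train", split="train", token=True)

example = ds[0]
print(example["group"], example["prompt_label"], example["roberta_prediction"])
print(example["generation"][:200])
```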
### Citation Information
```bibtex
@inproceedings{hartvigsen2022toxigen,
title={ToxiGen: A Large-Scale Machine-Generated Dataset for Implicit and Adversarial Hate Speech Detection},
author={Hartvigsen, Thomas and Gabriel, Saadia and Palangi, Hamid and Sap, Maarten and Ray, Dipankar and Kamar, Ece},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},
year={2022}
}
```
|
togethercomputer/RedPajama-Data-1T | togethercomputer | 2024-06-17T11:36:03Z | 1,926 | 1,084 | [
"task_categories:text-generation",
"language:en",
"size_categories:1M<n<10M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"text-generation"
] | 2023-04-17T06:28:35Z | null | ---
task_categories:
- text-generation
language:
- en
pretty_name: Red Pajama 1T
---
### Getting Started
The dataset consists of 2084 jsonl files.
You can download the dataset using HuggingFace:
```python
from datasets import load_dataset
ds = load_dataset("togethercomputer/RedPajama-Data-1T")
```
Or you can directly download the files using the following command:
```
wget 'https://data.together.xyz/redpajama-data-1T/v1.0.0/urls.txt'
while read line; do
dload_loc=${line#https://data.together.xyz/redpajama-data-1T/v1.0.0/}
mkdir -p $(dirname $dload_loc)
wget "$line" -O "$dload_loc"
done < urls.txt
```
After downloading the files, you can load the dataset from disk by setting the `RED_PAJAMA_DATA_DIR` environment variable to the directory containing the files:
```python
import os
from datasets import load_dataset
os.environ["RED_PAJAMA_DATA_DIR"] = "/path/to/download"
ds = load_dataset("togethercomputer/RedPajama-Data-1T")
```
A smaller 1B-token sample of the dataset can be found [here](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T-Sample).
A full set of scripts to recreate the dataset from scratch can be found [here](https://github.com/togethercomputer/RedPajama-Data).
### Dataset Summary
RedPajama is a clean-room, fully open-source implementation of the LLaMa dataset.
| Dataset | Token Count |
|---------------|-------------|
| Commoncrawl | 878 Billion |
| C4 | 175 Billion |
| GitHub | 59 Billion |
| ArXiv | 28 Billion |
| Wikipedia | 24 Billion |
| StackExchange | 20 Billion |
| Total | 1.2 Trillion |
### Languages
Primarily English, though the Wikipedia slice contains multiple languages.
## Dataset Structure
The dataset structure is as follows:
```json
{
"text": ...,
"meta": {"url": "...", "timestamp": "...", "source": "...", "language": "...", ...},
"red_pajama_subset": "common_crawl" | "c4" | "github" | "arxiv" | "wikipedia" | "stackexchange"
}
```
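Given the size of the corpus, a streaming pass can be more practical than a full download. A minimal sketch (assuming a single `train` split; the `arxiv` filter below is just an illustration of the `red_pajama_subset` field shown above):
```python
from datasets import load_dataset

# Stream records without materializing the full corpus on disk.
# Depending on your datasets version, trust_remote_code=True may also be required.
ds = load_dataset("togethercomputer/RedPajama-Data-1T", streaming=True, split="train")

# Keep only ArXiv records, using the `red_pajama_subset` field.
arxiv_only = (ex for ex in ds if ex["red_pajama_subset"] == "arxiv")

for i, example in enumerate(arxiv_only):
    print(example["meta"])
    if i >= 2:
        break
```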
## Dataset Creation
This dataset was created to follow the LLaMa paper as closely as possible to try to reproduce its recipe.
### Source Data
#### Commoncrawl
We download five dumps from Commoncrawl, and run the dumps through the official `cc_net` pipeline.
We then deduplicate on the paragraph level, and filter out low quality text using a linear classifier trained to
classify paragraphs as Wikipedia references or random Commoncrawl samples.
#### C4
C4 is downloaded from Huggingface. The only preprocessing step is to bring the data into our own format.
#### GitHub
The raw GitHub data is downloaded from Google BigQuery. We deduplicate on the file level and filter out low quality
files and only keep projects that are distributed under the MIT, BSD, or Apache license.
#### Wikipedia
We use the Wikipedia dataset available on Huggingface, which is based on the Wikipedia dump from 2023-03-20 and contains
text in 20 different languages. The dataset comes in preprocessed format, so that hyperlinks, comments and other
formatting boilerplate has been removed.
#### Gutenberg and Books3
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
<p><b>Defunct:</b> The 'book' config is defunct and no longer accessible due to reported copyright infringement for the Book3 dataset contained in this config.</p>
</div>
#### ArXiv
ArXiv data is downloaded from Amazon S3 in the `arxiv` requester pays bucket. We only keep latex source files and
remove preambles, comments, macros and bibliographies.
#### Stackexchange
The Stack Exchange split of the dataset is downloaded from the
[Internet Archive](https://archive.org/download/stackexchange). Here we only keep the posts from the 28 largest sites,
remove html tags, group the posts into question-answer pairs, and order answers by their score.
### SHA256 Checksums
SHA256 checksums for the dataset files for each data source are available here:
```
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/arxiv_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/c4_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/common_crawl_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/github_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/stackexchange_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/wikipedia_SHA256SUMS.txt
```
To cite RedPajama, please use:
```
@software{together2023redpajama,
author = {Together Computer},
title = {RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
month = April,
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
### License
Please refer to the licenses of the data subsets you use.
* [Common Crawl Foundation Terms of Use](https://commoncrawl.org/terms-of-use/full/)
* [C4 license](https://huggingface.co/datasets/allenai/c4#license)
* GitHub was limited to MIT, BSD, or Apache licenses only
* [ArXiv Terms of Use](https://info.arxiv.org/help/api/tou.html)
* [Wikipedia License](https://huggingface.co/datasets/wikipedia#licensing-information)
* [StackExchange license on the Internet Archive](https://archive.org/details/stackexchange)
<!--
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
--> |
google-research-datasets/conceptual_captions | google-research-datasets | 2024-06-17T10:51:29Z | 7,082 | 94 | [
"task_categories:image-to-text",
"task_ids:image-captioning",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:other",
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-to-text"
] | 2022-04-14T13:08:21Z | null | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- image-to-text
task_ids:
- image-captioning
paperswithcode_id: conceptual-captions
pretty_name: Conceptual Captions
dataset_info:
- config_name: default
features:
- name: id
dtype: string
- name: caption
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 623230370
num_examples: 3318333
- name: validation
num_bytes: 2846024
num_examples: 15840
download_size: 0
dataset_size: 626076394
- config_name: labeled
features:
- name: image_url
dtype: string
- name: caption
dtype: string
- name: labels
sequence: string
- name: MIDs
sequence: string
- name: confidence_scores
sequence: float64
splits:
- name: train
num_bytes: 1199325228
num_examples: 2007090
download_size: 532762865
dataset_size: 1199325228
- config_name: unlabeled
features:
- name: image_url
dtype: string
- name: caption
dtype: string
splits:
- name: train
num_bytes: 584517500
num_examples: 3318333
- name: validation
num_bytes: 2698710
num_examples: 15840
download_size: 375258708
dataset_size: 587216210
configs:
- config_name: labeled
data_files:
- split: train
path: labeled/train-*
- config_name: unlabeled
data_files:
- split: train
path: unlabeled/train-*
- split: validation
path: unlabeled/validation-*
default: true
---
# Dataset Card for Conceptual Captions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Conceptual Captions homepage](https://ai.google.com/research/ConceptualCaptions/)
- **Repository:** [Conceptual Captions repository](https://github.com/google-research-datasets/conceptual-captions)
- **Paper:** [Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning](https://www.aclweb.org/anthology/P18-1238/)
- **Leaderboard:** [Conceptual Captions leaderboard](https://ai.google.com/research/ConceptualCaptions/competition?active_tab=leaderboard)
- **Point of Contact:** [Conceptual Captions e-mail](mailto:[email protected])
### Dataset Summary
Conceptual Captions is a dataset consisting of ~3.3M images annotated with captions. In contrast with the curated style of other image caption annotations, Conceptual Caption images and their raw descriptions are harvested from the web, and therefore represent a wider variety of styles. More precisely, the raw descriptions are harvested from the Alt-text HTML attribute associated with web images. To arrive at the current version of the captions, we have developed an automatic pipeline that extracts, filters, and transforms candidate image/caption pairs, with the goal of achieving a balance of cleanliness, informativeness, fluency, and learnability of the resulting captions.
### Dataset Preprocessing
This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:
```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import urllib.request
import PIL.Image
from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent
USER_AGENT = get_datasets_user_agent()
def fetch_single_image(image_url, timeout=None, retries=0):
for _ in range(retries + 1):
try:
request = urllib.request.Request(
image_url,
data=None,
headers={"user-agent": USER_AGENT},
)
with urllib.request.urlopen(request, timeout=timeout) as req:
image = PIL.Image.open(io.BytesIO(req.read()))
break
except Exception:
image = None
return image
def fetch_images(batch, num_threads, timeout=None, retries=0):
fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
with ThreadPoolExecutor(max_workers=num_threads) as executor:
batch["image"] = list(executor.map(fetch_single_image_with_args, batch["image_url"]))
return batch
num_threads = 20
dset = load_dataset("google-research-datasets/conceptual_captions")
dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```
### Supported Tasks and Leaderboards
- `image-captioning`: This dataset can be used to train model for the Image Captioning task. The leaderboard for this task is available [here](https://ai.google.com/research/ConceptualCaptions/competition?active_tab=leaderboard). Official submission output captions are scored against the reference captions from the hidden test set using [this](https://github.com/tylin/coco-caption) implementation of the CIDEr (primary), ROUGE-L and SPICE metrics.
### Languages
All captions are in English.
## Dataset Structure
### Data Instances
#### `unlabeled`
Each instance in this configuration represents a single image with a caption:
```
{
'image_url': 'http://lh6.ggpht.com/-IvRtNLNcG8o/TpFyrudaT6I/AAAAAAAAM6o/_11MuAAKalQ/IMG_3422.JPG?imgmax=800',
'caption': 'a very typical bus station'
}
```
#### `labeled`
Each instance in this configuration represents a single image with a caption, plus additional machine-generated image labels and confidence scores:
```
{
'image_url': 'https://thumb1.shutterstock.com/display_pic_with_logo/261388/223876810/stock-vector-christmas-tree-on-a-black-background-vector-223876810.jpg',
'caption': 'christmas tree on a black background .',
'labels': ['christmas tree', 'christmas decoration', 'font', 'text', 'graphic design', 'illustration','interior design', 'tree', 'christmas eve', 'ornament', 'fir', 'plant', 'pine', 'pine family', 'graphics'],
'MIDs': ['/m/025nd', '/m/05fc9mj', '/m/03gq5hm', '/m/07s6nbt', '/m/03c31', '/m/01kr8f', '/m/0h8nzzj', '/m/07j7r', '/m/014r1s', '/m/05ykl4', '/m/016x4z', '/m/05s2s', '/m/09t57', '/m/01tfm0', '/m/021sdg'],
'confidence_scores': [0.9818305373191833, 0.952756941318512, 0.9227379560470581, 0.8524878621101379, 0.7597672343254089, 0.7493422031402588, 0.7332468628883362, 0.6869218349456787, 0.6552258133888245, 0.6357356309890747, 0.5992692708969116, 0.585474967956543, 0.5222904086112976, 0.5113164782524109, 0.5036579966545105]
}
```
### Data Fields
#### `unlabeled`
- `image_url`: Static URL for downloading the image associated with the post.
- `caption`: Textual description of the image.
#### `labeled`
- `image_url`: Static URL for downloading the image associated with the post.
- `caption`: Textual description of the image.
- `labels`: A sequence of machine-generated labels obtained using the [Google Cloud Vision API](https://cloud.google.com/vision).
- `MIDs`: A sequence of machine-generated identifiers (MID) corresponding to the label's Google Knowledge Graph entry.
- `confidence_scores`: A sequence of confidence scores denoting how likely it is that the corresponding labels are present in the image.
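A small sketch of working with the `labeled` config and its label/confidence pairs (field names as documented above):
```python
from datasets import load_dataset

# Load the machine-labeled config; it only provides a "train" split.
ds = load_dataset("google-research-datasets/conceptual_captions", "labeled", split="train")

example = ds[0]
for label, mid, score in zip(example["labels"], example["MIDs"], example["confidence_scores"]):
    print(f"{label} ({mid}): {score:.3f}")
```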
### Data Splits
#### `unlabeled`
The basic version of the dataset is split into training and validation sets. The training split consists of 3,318,333 image-URL/caption pairs and the validation split consists of 15,840 image-URL/caption pairs.
#### `labeled`
The labeled version of the dataset has a single split. The entire data is contained in the training split, which is a subset of 2,007,090 image-URL/caption pairs from the training set of the `unlabeled` config.
## Dataset Creation
### Curation Rationale
From the paper:
> In this paper, we make contributions to both the data and modeling categories. First, we present a new dataset of caption annotations Conceptual Captions (Fig. 1), which has an order of magnitude more images than the COCO dataset. Conceptual Captions consists of about 3.3M ⟨image, description⟩ pairs. In contrast with the curated style of the COCO images, Conceptual Captions images and their raw descriptions are harvested from the web, and therefore represent a wider variety of styles.
### Source Data
#### Initial Data Collection and Normalization
From the homepage:
>For Conceptual Captions, we developed a fully automatic pipeline that extracts, filters, and transforms candidate image/caption pairs, with the goal of achieving a balance of cleanliness, informativeness, fluency, and learnability of the resulting captions. Because no human annotators are involved, the Conceptual Captions dataset generation process is highly scalable.
>
>To generate this dataset, we started with a Flume pipeline that processes billions of Internet webpages, extracting, filtering, and processing candidate image and caption pairs, and keeping those that pass through several filters.
>
>We first screen for certain properties like size, aspect ratio, adult content scores. These filters discard more than 65% of the candidates. Next, we use Alt-Texts for text-based filtering, removing captions with non-descriptive text (such as SEO tags or hashtags); we also discard texts with high sentiment polarity or adult content scores, resulting in just 3% of the incoming candidates passing through.
>
>In the next step, we filter out candidates for which none of the text tokens can be mapped to the visual content of the image. We use image classifiers (e.g., Google Cloud Vision APIs) to assign class labels to images and match these labels against the candidate text (allowing morphological transformations), discarding around 60% of the candidates that reach this stage.
>
>The candidates passing the above filters tend to be good Alt-text image descriptions. However, a large majority of these use proper names (for people, venues, locations, etc.), brands, dates, quotes, etc. This creates two distinct problems. First, some of these cannot be inferred based on the image pixels alone. This is problematic because unless the image has the necessary visual information it is not useful for training. Second, even if the proper names could be inferred from the image it is extremely difficult for a model to learn to perform both fine-grained classification and natural-language descriptions simultaneously. We posit that if automatic determination of names, locations, brands, etc. is needed, it should be done as a separate task that may leverage image meta-information (e.g. GPS info), or complementary techniques such as OCR.
>
>We address the above problems with the insight that proper names should be replaced by words that represent the same general notion, i.e., by their concept. For example, we remove locations (“Crowd at a concert in Los Angeles“ becomes “Crowd at a concert”), names (e.g., “Former Miss World Priyanka Chopra on the red carpet” becomes “actor on the red carpet”), proper noun modifiers (e.g., “Italian cuisine” becomes just “cuisine”) and noun phrases (e.g., “actor and actor” becomes “actors”). Around 20% of the samples are discarded during this transformation because it can leave sentences too short, or otherwise inconsistent.
>
>Finally, we perform another round of filtering to identify concepts with low-count. We cluster all resolved entities (e.g., “actor”, “dog”, “neighborhood”, etc.) and keep only the candidate types which have a count of over 100 mentions. This retains around 16K entity concepts such as: “person”, “actor”, “artist”, “player” and “illustration”. The less frequent ones that we dropped include “baguette”, “bridle”, “deadline”, “ministry” and “funnel”.
#### Who are the source language producers?
Not specified.
### Annotations
#### Annotation process
Annotations are extracted jointly with the images using the automatic pipeline.
#### Who are the annotators?
Not specified.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Piyush Sharma, Nan Ding, Sebastian Goodman and Radu Soricut.
### Licensing Information
The dataset may be freely used for any purpose, although acknowledgement of
Google LLC ("Google") as the data source would be appreciated. The dataset is
provided "AS IS" without any warranty, express or implied. Google disclaims all
liability for any damages, direct or indirect, resulting from the use of the
dataset.
### Citation Information
```bibtex
@inproceedings{sharma2018conceptual,
title = {Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning},
author = {Sharma, Piyush and Ding, Nan and Goodman, Sebastian and Soricut, Radu},
booktitle = {Proceedings of ACL},
year = {2018},
}
```
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) and [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
NousResearch/CharacterCodex | NousResearch | 2024-06-17T01:27:49Z | 641 | 220 | [
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"language model"
] | [] | 2024-06-05T00:30:38Z | null | ---
language:
- en
license: apache-2.0
size_categories:
- 10K<n<100K
pretty_name: CharacterCodex
dataset_info:
features:
- name: media_type
dtype: string
- name: genre
dtype: string
- name: character_name
dtype: string
- name: media_source
dtype: string
- name: description
dtype: string
- name: scenario
dtype: string
tags:
- language model
---
# Dataset Card for Character Codex

## Dataset Summary
The Character Codex is a comprehensive dataset featuring popular characters from a wide array of media types and genres. Each entry includes detailed information about the character, the media source, and a unique scenario involving the character. This dataset is valuable for synthetic data generation, retrieval-augmented generation (RAG) with generative AI, and for writers, game developers, and fans who want to explore and utilize rich character descriptions for various creative projects.
## Dataset Structure
### Data Fields
- **media_type**: The type of media the character originates from (e.g., Webcomics, Novels, Movies, TV Shows).
- **genre**: The specific genre of the media type (e.g., Fantasy Webcomics, Martial Arts Fiction).
- **character_name**: The name of the character.
- **media_source**: The title of the media source where the character is from.
- **description**: A detailed description of the character, including their role and significance in the story.
- **scenario**: A creative scenario involving the character that can be used for interactive storytelling or role-playing purposes.
### Example Data
```json
[
{
"media_type": "Webcomics",
"genre": "Fantasy Webcomics",
"character_name": "Alana",
"media_source": "Saga",
"description": "Alana is one of the main characters from the webcomic \"Saga.\" She is a strong-willed and fiercely protective mother who is on the run with her family in a war-torn galaxy. The story blends elements of fantasy and science fiction, creating a rich and complex narrative.",
"scenario": "You are a fellow traveler in the galaxy needing help, and Alana offers her assistance while sharing stories of her family's struggles and triumphs."
},
{
"media_type": "Novels",
"genre": "Martial Arts Fiction",
"character_name": "Yilin",
"media_source": "The Smiling, Proud Wanderer",
"description": "Yilin is a young nun from the Hengshan Sect in Jin Yong's novel \"The Smiling, Proud Wanderer.\" Known for her innocence and kindness, she becomes friends with the protagonist Linghu Chong. Her gentle nature often puts her at odds with the violent world of martial arts.",
"scenario": "You are a fellow disciple of the Hengshan Sect seeking Yilin's comfort and advice after a particularly brutal conflict. Her gentle demeanor and compassionate words provide solace in a harsh world."
}
]
```
# Usage
## Accessing the Data
To load the dataset in your project, you can use the following code snippet:
```python
from datasets import load_dataset
dataset = load_dataset("NousResearch/CharacterCodex")
```
## Use Cases
- Seed Data: Useful for generating synthetic data or for use in interactive experiences with generative AI.
- Creative Writing: Use the detailed character descriptions and scenarios to inspire creative writing projects.
- Educational: Study character development and storytelling techniques from various genres and media types.
# Dataset Creation
## Data Collection
The characters in this dataset were meticulously selected from a diverse range of media, ensuring a rich and varied collection. The descriptions and scenarios were crafted to provide insightful and engaging context for each character.
## Annotations
Each character entry includes:
- The media type (e.g. Novel, Magazine, Anime), the genre (e.g. action, historical), and the specific source/title of the media they are from (e.g. Pokemon)
- A detailed description highlighting the character's role, traits, and significance.
- A scenario designed to stimulate interactive and immersive experiences.
# Citation
```bibtex
@dataset{character_codex_2024,
title={Character Codex},
author={"Teknium"},
year={2024},
note={https://huggingface.co/datasets/NousResearch/CharacterCodex}
}
``` |
ikala/tmmluplus | ikala | 2024-06-12T07:06:00Z | 3,202 | 115 | [
"task_categories:question-answering",
"language:zh",
"license:mit",
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"traditional chinese",
"finance",
"medical",
"taiwan",
"benchmark",
"zh-tw",
"zh-hant"
] | [
"question-answering"
] | 2023-12-22T19:12:13Z | null | ---
license: mit
license_name: mit
task_categories:
- question-answering
language:
- zh
tags:
- traditional chinese
- finance
- medical
- taiwan
- benchmark
- zh-tw
- zh-hant
pretty_name: tmmlu++
size_categories:
- 100K<n<1M
configs:
- config_name: engineering_math
data_files:
- split: train
path: "data/engineering_math_dev.csv"
- split: validation
path: "data/engineering_math_val.csv"
- split: test
path: "data/engineering_math_test.csv"
- config_name: dentistry
data_files:
- split: train
path: "data/dentistry_dev.csv"
- split: validation
path: "data/dentistry_val.csv"
- split: test
path: "data/dentistry_test.csv"
- config_name: traditional_chinese_medicine_clinical_medicine
data_files:
- split: train
path: "data/traditional_chinese_medicine_clinical_medicine_dev.csv"
- split: validation
path: "data/traditional_chinese_medicine_clinical_medicine_val.csv"
- split: test
path: "data/traditional_chinese_medicine_clinical_medicine_test.csv"
- config_name: clinical_psychology
data_files:
- split: train
path: "data/clinical_psychology_dev.csv"
- split: validation
path: "data/clinical_psychology_val.csv"
- split: test
path: "data/clinical_psychology_test.csv"
- config_name: technical
data_files:
- split: train
path: "data/technical_dev.csv"
- split: validation
path: "data/technical_val.csv"
- split: test
path: "data/technical_test.csv"
- config_name: culinary_skills
data_files:
- split: train
path: "data/culinary_skills_dev.csv"
- split: validation
path: "data/culinary_skills_val.csv"
- split: test
path: "data/culinary_skills_test.csv"
- config_name: mechanical
data_files:
- split: train
path: "data/mechanical_dev.csv"
- split: validation
path: "data/mechanical_val.csv"
- split: test
path: "data/mechanical_test.csv"
- config_name: logic_reasoning
data_files:
- split: train
path: "data/logic_reasoning_dev.csv"
- split: validation
path: "data/logic_reasoning_val.csv"
- split: test
path: "data/logic_reasoning_test.csv"
- config_name: real_estate
data_files:
- split: train
path: "data/real_estate_dev.csv"
- split: validation
path: "data/real_estate_val.csv"
- split: test
path: "data/real_estate_test.csv"
- config_name: general_principles_of_law
data_files:
- split: train
path: "data/general_principles_of_law_dev.csv"
- split: validation
path: "data/general_principles_of_law_val.csv"
- split: test
path: "data/general_principles_of_law_test.csv"
- config_name: finance_banking
data_files:
- split: train
path: "data/finance_banking_dev.csv"
- split: validation
path: "data/finance_banking_val.csv"
- split: test
path: "data/finance_banking_test.csv"
- config_name: anti_money_laundering
data_files:
- split: train
path: "data/anti_money_laundering_dev.csv"
- split: validation
path: "data/anti_money_laundering_val.csv"
- split: test
path: "data/anti_money_laundering_test.csv"
- config_name: ttqav2
data_files:
- split: train
path: "data/ttqav2_dev.csv"
- split: validation
path: "data/ttqav2_val.csv"
- split: test
path: "data/ttqav2_test.csv"
- config_name: marketing_management
data_files:
- split: train
path: "data/marketing_management_dev.csv"
- split: validation
path: "data/marketing_management_val.csv"
- split: test
path: "data/marketing_management_test.csv"
- config_name: business_management
data_files:
- split: train
path: "data/business_management_dev.csv"
- split: validation
path: "data/business_management_val.csv"
- split: test
path: "data/business_management_test.csv"
- config_name: organic_chemistry
data_files:
- split: train
path: "data/organic_chemistry_dev.csv"
- split: validation
path: "data/organic_chemistry_val.csv"
- split: test
path: "data/organic_chemistry_test.csv"
- config_name: advance_chemistry
data_files:
- split: train
path: "data/advance_chemistry_dev.csv"
- split: validation
path: "data/advance_chemistry_val.csv"
- split: test
path: "data/advance_chemistry_test.csv"
- config_name: physics
data_files:
- split: train
path: "data/physics_dev.csv"
- split: validation
path: "data/physics_val.csv"
- split: test
path: "data/physics_test.csv"
- config_name: secondary_physics
data_files:
- split: train
path: "data/secondary_physics_dev.csv"
- split: validation
path: "data/secondary_physics_val.csv"
- split: test
path: "data/secondary_physics_test.csv"
- config_name: human_behavior
data_files:
- split: train
path: "data/human_behavior_dev.csv"
- split: validation
path: "data/human_behavior_val.csv"
- split: test
path: "data/human_behavior_test.csv"
- config_name: national_protection
data_files:
- split: train
path: "data/national_protection_dev.csv"
- split: validation
path: "data/national_protection_val.csv"
- split: test
path: "data/national_protection_test.csv"
- config_name: jce_humanities
data_files:
- split: train
path: "data/jce_humanities_dev.csv"
- split: validation
path: "data/jce_humanities_val.csv"
- split: test
path: "data/jce_humanities_test.csv"
- config_name: politic_science
data_files:
- split: train
path: "data/politic_science_dev.csv"
- split: validation
path: "data/politic_science_val.csv"
- split: test
path: "data/politic_science_test.csv"
- config_name: agriculture
data_files:
- split: train
path: "data/agriculture_dev.csv"
- split: validation
path: "data/agriculture_val.csv"
- split: test
path: "data/agriculture_test.csv"
- config_name: official_document_management
data_files:
- split: train
path: "data/official_document_management_dev.csv"
- split: validation
path: "data/official_document_management_val.csv"
- split: test
path: "data/official_document_management_test.csv"
- config_name: financial_analysis
data_files:
- split: train
path: "data/financial_analysis_dev.csv"
- split: validation
path: "data/financial_analysis_val.csv"
- split: test
path: "data/financial_analysis_test.csv"
- config_name: pharmacy
data_files:
- split: train
path: "data/pharmacy_dev.csv"
- split: validation
path: "data/pharmacy_val.csv"
- split: test
path: "data/pharmacy_test.csv"
- config_name: educational_psychology
data_files:
- split: train
path: "data/educational_psychology_dev.csv"
- split: validation
path: "data/educational_psychology_val.csv"
- split: test
path: "data/educational_psychology_test.csv"
- config_name: statistics_and_machine_learning
data_files:
- split: train
path: "data/statistics_and_machine_learning_dev.csv"
- split: validation
path: "data/statistics_and_machine_learning_val.csv"
- split: test
path: "data/statistics_and_machine_learning_test.csv"
- config_name: management_accounting
data_files:
- split: train
path: "data/management_accounting_dev.csv"
- split: validation
path: "data/management_accounting_val.csv"
- split: test
path: "data/management_accounting_test.csv"
- config_name: introduction_to_law
data_files:
- split: train
path: "data/introduction_to_law_dev.csv"
- split: validation
path: "data/introduction_to_law_val.csv"
- split: test
path: "data/introduction_to_law_test.csv"
- config_name: computer_science
data_files:
- split: train
path: "data/computer_science_dev.csv"
- split: validation
path: "data/computer_science_val.csv"
- split: test
path: "data/computer_science_test.csv"
- config_name: veterinary_pathology
data_files:
- split: train
path: "data/veterinary_pathology_dev.csv"
- split: validation
path: "data/veterinary_pathology_val.csv"
- split: test
path: "data/veterinary_pathology_test.csv"
- config_name: accounting
data_files:
- split: train
path: "data/accounting_dev.csv"
- split: validation
path: "data/accounting_val.csv"
- split: test
path: "data/accounting_test.csv"
- config_name: fire_science
data_files:
- split: train
path: "data/fire_science_dev.csv"
- split: validation
path: "data/fire_science_val.csv"
- split: test
path: "data/fire_science_test.csv"
- config_name: optometry
data_files:
- split: train
path: "data/optometry_dev.csv"
- split: validation
path: "data/optometry_val.csv"
- split: test
path: "data/optometry_test.csv"
- config_name: insurance_studies
data_files:
- split: train
path: "data/insurance_studies_dev.csv"
- split: validation
path: "data/insurance_studies_val.csv"
- split: test
path: "data/insurance_studies_test.csv"
- config_name: pharmacology
data_files:
- split: train
path: "data/pharmacology_dev.csv"
- split: validation
path: "data/pharmacology_val.csv"
- split: test
path: "data/pharmacology_test.csv"
- config_name: taxation
data_files:
- split: train
path: "data/taxation_dev.csv"
- split: validation
path: "data/taxation_val.csv"
- split: test
path: "data/taxation_test.csv"
- config_name: trust_practice
data_files:
- split: train
path: "data/trust_practice_dev.csv"
- split: validation
path: "data/trust_practice_val.csv"
- split: test
path: "data/trust_practice_test.csv"
- config_name: geography_of_taiwan
data_files:
- split: train
path: "data/geography_of_taiwan_dev.csv"
- split: validation
path: "data/geography_of_taiwan_val.csv"
- split: test
path: "data/geography_of_taiwan_test.csv"
- config_name: physical_education
data_files:
- split: train
path: "data/physical_education_dev.csv"
- split: validation
path: "data/physical_education_val.csv"
- split: test
path: "data/physical_education_test.csv"
- config_name: auditing
data_files:
- split: train
path: "data/auditing_dev.csv"
- split: validation
path: "data/auditing_val.csv"
- split: test
path: "data/auditing_test.csv"
- config_name: administrative_law
data_files:
- split: train
path: "data/administrative_law_dev.csv"
- split: validation
path: "data/administrative_law_val.csv"
- split: test
path: "data/administrative_law_test.csv"
- config_name: education_(profession_level)
data_files:
- split: train
path: "data/education_(profession_level)_dev.csv"
- split: validation
path: "data/education_(profession_level)_val.csv"
- split: test
path: "data/education_(profession_level)_test.csv"
- config_name: economics
data_files:
- split: train
path: "data/economics_dev.csv"
- split: validation
path: "data/economics_val.csv"
- split: test
path: "data/economics_test.csv"
- config_name: veterinary_pharmacology
data_files:
- split: train
path: "data/veterinary_pharmacology_dev.csv"
- split: validation
path: "data/veterinary_pharmacology_val.csv"
- split: test
path: "data/veterinary_pharmacology_test.csv"
- config_name: nautical_science
data_files:
- split: train
path: "data/nautical_science_dev.csv"
- split: validation
path: "data/nautical_science_val.csv"
- split: test
path: "data/nautical_science_test.csv"
- config_name: occupational_therapy_for_psychological_disorders
data_files:
- split: train
path: "data/occupational_therapy_for_psychological_disorders_dev.csv"
- split: validation
path: "data/occupational_therapy_for_psychological_disorders_val.csv"
- split: test
path: "data/occupational_therapy_for_psychological_disorders_test.csv"
- config_name: basic_medical_science
data_files:
- split: train
path: "data/basic_medical_science_dev.csv"
- split: validation
path: "data/basic_medical_science_val.csv"
- split: test
path: "data/basic_medical_science_test.csv"
- config_name: macroeconomics
data_files:
- split: train
path: "data/macroeconomics_dev.csv"
- split: validation
path: "data/macroeconomics_val.csv"
- split: test
path: "data/macroeconomics_test.csv"
- config_name: trade
data_files:
- split: train
path: "data/trade_dev.csv"
- split: validation
path: "data/trade_val.csv"
- split: test
path: "data/trade_test.csv"
- config_name: chinese_language_and_literature
data_files:
- split: train
path: "data/chinese_language_and_literature_dev.csv"
- split: validation
path: "data/chinese_language_and_literature_val.csv"
- split: test
path: "data/chinese_language_and_literature_test.csv"
- config_name: tve_design
data_files:
- split: train
path: "data/tve_design_dev.csv"
- split: validation
path: "data/tve_design_val.csv"
- split: test
path: "data/tve_design_test.csv"
- config_name: junior_science_exam
data_files:
- split: train
path: "data/junior_science_exam_dev.csv"
- split: validation
path: "data/junior_science_exam_val.csv"
- split: test
path: "data/junior_science_exam_test.csv"
- config_name: junior_math_exam
data_files:
- split: train
path: "data/junior_math_exam_dev.csv"
- split: validation
path: "data/junior_math_exam_val.csv"
- split: test
path: "data/junior_math_exam_test.csv"
- config_name: junior_chinese_exam
data_files:
- split: train
path: "data/junior_chinese_exam_dev.csv"
- split: validation
path: "data/junior_chinese_exam_val.csv"
- split: test
path: "data/junior_chinese_exam_test.csv"
- config_name: junior_social_studies
data_files:
- split: train
path: "data/junior_social_studies_dev.csv"
- split: validation
path: "data/junior_social_studies_val.csv"
- split: test
path: "data/junior_social_studies_test.csv"
- config_name: tve_mathematics
data_files:
- split: train
path: "data/tve_mathematics_dev.csv"
- split: validation
path: "data/tve_mathematics_val.csv"
- split: test
path: "data/tve_mathematics_test.csv"
- config_name: tve_chinese_language
data_files:
- split: train
path: "data/tve_chinese_language_dev.csv"
- split: validation
path: "data/tve_chinese_language_val.csv"
- split: test
path: "data/tve_chinese_language_test.csv"
- config_name: tve_natural_sciences
data_files:
- split: train
path: "data/tve_natural_sciences_dev.csv"
- split: validation
path: "data/tve_natural_sciences_val.csv"
- split: test
path: "data/tve_natural_sciences_test.csv"
- config_name: junior_chemistry
data_files:
- split: train
path: "data/junior_chemistry_dev.csv"
- split: validation
path: "data/junior_chemistry_val.csv"
- split: test
path: "data/junior_chemistry_test.csv"
- config_name: music
data_files:
- split: train
path: "data/music_dev.csv"
- split: validation
path: "data/music_val.csv"
- split: test
path: "data/music_test.csv"
- config_name: education
data_files:
- split: train
path: "data/education_dev.csv"
- split: validation
path: "data/education_val.csv"
- split: test
path: "data/education_test.csv"
- config_name: three_principles_of_people
data_files:
- split: train
path: "data/three_principles_of_people_dev.csv"
- split: validation
path: "data/three_principles_of_people_val.csv"
- split: test
path: "data/three_principles_of_people_test.csv"
- config_name: taiwanese_hokkien
data_files:
- split: train
path: "data/taiwanese_hokkien_dev.csv"
- split: validation
path: "data/taiwanese_hokkien_val.csv"
- split: test
path: "data/taiwanese_hokkien_test.csv"
---
# TMMLU+ : Large scale Traditional Chinese massive multitask language understanding
<p align="center">
<img src="https://huggingface.co/datasets/ikala/tmmluplus/resolve/main/cover.png" alt="A close-up image of a neat paper note with a white background. The text 'TMMLU+' is written horizontally across the center of the note in bold, black. Join us to work in multimodal LLM : https://ikala.ai/recruit/" style="max-width: 400" width=400 />
</p>
We present TMMLU+, a traditional Chinese massive multitask language understanding dataset. TMMLU+ is a multiple-choice question-answering dataset featuring 66 subjects, ranging from elementary to professional level.
The TMMLU+ dataset is six times larger and has a more balanced subject distribution than its predecessor, [TMMLU](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval/data/TMMLU). We have included benchmark results on TMMLU+ for closed-source models and 20 open-weight Chinese large language models with parameters ranging from 1.8B to 72B. The results show that Traditional Chinese model variants still lag behind major models trained on Simplified Chinese.
```python
from datasets import load_dataset
task_list = [
'engineering_math', 'dentistry', 'traditional_chinese_medicine_clinical_medicine', 'clinical_psychology', 'technical', 'culinary_skills', 'mechanical', 'logic_reasoning', 'real_estate',
'general_principles_of_law', 'finance_banking', 'anti_money_laundering', 'ttqav2', 'marketing_management', 'business_management', 'organic_chemistry', 'advance_chemistry',
'physics', 'secondary_physics', 'human_behavior', 'national_protection', 'jce_humanities', 'politic_science', 'agriculture', 'official_document_management',
'financial_analysis', 'pharmacy', 'educational_psychology', 'statistics_and_machine_learning', 'management_accounting', 'introduction_to_law', 'computer_science', 'veterinary_pathology',
'accounting', 'fire_science', 'optometry', 'insurance_studies', 'pharmacology', 'taxation', 'trust_practice', 'geography_of_taiwan', 'physical_education', 'auditing', 'administrative_law',
'education_(profession_level)', 'economics', 'veterinary_pharmacology', 'nautical_science', 'occupational_therapy_for_psychological_disorders',
'basic_medical_science', 'macroeconomics', 'trade', 'chinese_language_and_literature', 'tve_design', 'junior_science_exam', 'junior_math_exam', 'junior_chinese_exam',
'junior_social_studies', 'tve_mathematics', 'tve_chinese_language', 'tve_natural_sciences', 'junior_chemistry', 'music', 'education', 'three_principles_of_people',
'taiwanese_hokkien'
]
for task in task_list:
val = load_dataset('ikala/tmmluplus', task)['validation']
dev = load_dataset('ikala/tmmluplus', task)['train']
test = load_dataset('ikala/tmmluplus', task)['test']
```
For each dataset split
```python
for row in test:
    print(row)  # each row is a dict with keys 'question', 'A', 'B', 'C', 'D', 'answer'
    break

# Each split is a `datasets.Dataset`, e.g.:
# Dataset({
#     features: ['question', 'A', 'B', 'C', 'D', 'answer'],
#     num_rows: 11
# })
```
Statistics for all four categories: STEM, Social Sciences, Humanities, and Other
| Category | Test | Dev | Validation |
|----------------------------------|-------|------|------------|
| STEM | 3458 | 70 | 385 |
| Social Sciences | 5958 | 90 | 665 |
| Humanities | 1763 | 35 | 197 |
| Other (Business, Health, Misc.) | 8939 | 135 | 995 |
| **Total** | 20118 | 330 | 2242 |
## Benchmark on direct prompting
| model | STEM | Social Science | Humanities | Other | Average |
|------------|------------|------------|------------|------------|------------|
|Gemini-1.5-pro | 66.18|70.29|61.84|60.30|64.65|
| [Qwen/Qwen-72B](https://huggingface.co/Qwen/Qwen-72B) | 61.12 | 71.65 | 63.00 | 61.31 |64.27|
| gpt-4-0613 | 60.36 | 67.36 | 56.03 | 57.62 |60.34|
| Qwen-max | 59.92 | 66.95 | 57.43 | 56.48 |60.20|
| [Qwen/Qwen-72B-Chat](https://huggingface.co/Qwen/Qwen-72B-Chat) | 55.15 | 66.20 | 55.65 | 57.19 |58.55|
| [Qwen/Qwen-14B](https://huggingface.co/Qwen/Qwen-14B) | 46.94 | 56.69 | 49.43 | 48.81 |50.47|
| Gemini-pro | 45.38 | 57.29 | 48.80 | 48.21 |49.92|
| [01-ai/Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat) | 40.24 | 56.77 | 53.99 | 47.58 |49.64|
| Gemini-1.5-flash |53.47|53.42|42.99|46.56|49.11|
| [Reka Flash](https://www.reka.ai/)|45.26|52.91|46.31|43.76|47.06|
| [Qwen/Qwen-14B-Chat](https://huggingface.co/Qwen/Qwen-14B-Chat) | 43.86 | 53.29 | 44.78 | 45.13 |46.77|
| [Qwen/Qwen1.5-14B-Chat](https://huggingface.co/Qwen/Qwen1.5-14B-Chat)|39.65|52.76|43.90|44.95|45.31|
| [01-ai/Yi-6B-Chat](https://huggingface.co/01-ai/Yi-6B-Chat) | 39.62 | 50.24 | 44.44 | 44.26 |44.64|
| Claude-1.3 | 42.65 | 49.33 | 42.16 | 44.14 |44.57|
| [MediaTek-Research/Breeze-7B-Instruct-v0_1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0_1)| 36.46 | 48.38 |45.11 |40.75 | 42.67 |
| gpt-3.5-turbo-0613 | 41.56 | 46.72 | 36.73 | 42.03 |41.76|
| [CausalLM/14B](https://huggingface.co/CausalLM/14B) | 39.83 | 44.50 | 39.61 | 41.97 |41.48|
| [Skywork/Skywork-13B-base](https://huggingface.co/Skywork/Skywork-13B-base) | 36.93 | 47.27 | 41.04 | 40.10 |41.33|
| Claude-3-opus |42.95|45.49|35.79|40.24|41.12|
| [Qwen/Qwen-7B](https://huggingface.co/Qwen/Qwen-7B) | 37.53 | 45.48 | 38.09 | 38.96 |40.01|
| [meta-llama/Llama-3-70b-chat-hf](https://docs.together.ai/docs/inference-models) | 34.44 | 47.02 | 37.50 |39.51 | 39.62 |
| [Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat) | 33.32 | 44.64 | 40.27 | 39.89 |39.53|
| [vivo-ai/BlueLM-7B-Base](https://huggingface.co/vivo-ai/BlueLM-7B-Base) | 33.94 | 41.52 | 37.38 | 38.74 |37.90|
| [baichuan-inc/Baichuan2-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat) | 29.64 | 43.73 | 37.36 | 39.88 |37.65|
| [Qwen/Qwen-1_8B](https://huggingface.co/Qwen/Qwen-1_8B) | 32.65 | 38.95 | 38.34 | 35.27 |36.30|
| Claude-2 | 39.65 | 39.09 | 28.59 | 37.47 |36.20|
| [THUDM/chatglm3-6b](https://huggingface.co/THUDM/chatglm3-6b) | 31.05 | 39.31 | 35.64 | 35.60 |35.40|
| [deepseek-ai/deepseek-llm-7b-chat](https://huggingface.co/deepseek-ai/deepseek-llm-7b-chat) | 29.82 | 42.29 | 34.24 | 34.31 |35.17|
| [CausalLM/7B](https://huggingface.co/CausalLM/7B) | 31.03 | 38.17 | 35.87 | 35.39 |35.11|
| [Azure99/blossom-v3_1-mistral-7b](https://huggingface.co/Azure99/blossom-v3_1-mistral-7b) | 32.80 | 36.91 | 32.36 | 34.53 |34.15|
| [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) | 31.89 | 35.70 | 34.00 | 33.79 | 33.84 |
| [Reka Edge](https://www.reka.ai/)|30.02|39.40|31.84|32.36|33.41|
| [microsoft/Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b) | 24.69 | 39.18 | 33.60 | 31.99 |32.37|
| [Qwen/Qwen-1_8B-Chat](https://huggingface.co/Qwen/Qwen-1_8B-Chat) | 26.60 | 36.36 | 31.81 | 31.96 |31.68|
| [meta-llama/Llama-3-8b-chat-hf](https://docs.together.ai/docs/inference-models) | 31.52 | 34.19 | 28.91 | 31.79 | 31.60 |
| [TigerResearch/tigerbot-13b-chat-v3](https://huggingface.co/TigerResearch/tigerbot-13b-chat-v3) | 24.73 | 29.63 | 25.72 | 27.22 |26.82|
| [hongyin/mistral-7b-80k](https://huggingface.co/hongyin/mistral-7b-80k) | 24.26 | 23.76 | 22.56 | 24.57 |23.79|
| [deepseek-ai/deepseek-llm-67b-chat](https://huggingface.co/deepseek-ai/deepseek-llm-67b-chat) | 19.10 | 26.06 | 21.51 | 21.77 |22.11|
| [yentinglin/Taiwan-LLM-13B-v2.0-chat](https://huggingface.co/yentinglin/Taiwan-LLM-13B-v2.0-chat) | 18.53 | 27.65 | 17.77 | 21.49 |21.36|
| [GeneZC/MiniChat-3B](https://huggingface.co/GeneZC/MiniChat-3B) | 17.66 | 23.35 | 22.71 | 20.34 |21.02|
| [LinkSoul/Chinese-Llama-2-7b](https://huggingface.co/LinkSoul/Chinese-Llama-2-7b) | 16.55 | 18.39 | 12.97 | 16.13 |16.01|
| [yentinglin/Taiwan-LLM-7B-v2.1-chat](https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.1-chat) | 14.99 | 16.23 | 15.00 | 16.22 |15.61|
| Claude-instant-1 | 12.52 | 17.13 | 15.10 | 13.57 |14.58|
| [FlagAlpha/Atom-7B](https://huggingface.co/FlagAlpha/Atom-7B) | 5.60 | 13.57 | 7.71 | 11.84 |9.68|
Results via [ievals](https://github.com/iKala/ievals) (settings: 0-shot direct answering)
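As a rough sketch (not the exact ievals prompt template), a 0-shot direct-answer prompt can be assembled from a single row like this:
```python
def build_prompt(row: dict) -> str:
    # Minimal 0-shot direct-answer prompt; the exact template used by ievals may differ.
    return (
        f"{row['question']}\n"
        f"A. {row['A']}\n"
        f"B. {row['B']}\n"
        f"C. {row['C']}\n"
        f"D. {row['D']}\n"
        "答案:"
    )

# The model's first predicted option letter is then compared against row['answer'].
```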
# Citation
```
@article{ikala2024improved,
title={An Improved Traditional Chinese Evaluation Suite for Foundation Model},
author={Tam, Zhi-Rui and Pai, Ya-Ting and Lee, Yen-Wei and Cheng, Sega and Shuai, Hong-Han},
journal={arXiv preprint arXiv:2403.01858},
year={2024}
}
```
|
espnet/yodas | espnet | 2024-06-10T02:11:54Z | 176,619 | 110 | [
"license:cc-by-3.0",
"arxiv:2406.00899",
"region:us"
] | [] | 2024-02-10T21:00:10Z | null | ---
license: cc-by-3.0
---
Updates
- 2024/07/09: We also uploaded a new version of YODAS as [YODAS2](https://huggingface.co/datasets/espnet/yodas2); it provides unsegmented audio and a higher sampling rate (24 kHz).
## README
This is the YODAS manual/automatic subset from our YODAS dataset; it contains 369,510 hours of speech.
This dataset contains audio utterances and corresponding captions (manual or automatic) from YouTube. Note that a manual caption only indicates that it was uploaded by a user, not necessarily that it was transcribed by a human.
For more details about YODAS dataset, please refer to [our paper](https://arxiv.org/abs/2406.00899)
## Usage:
Considering the extremely large size of the entire dataset, we support two modes of dataset loading:
**standard mode**: each subset will be downloaded to the local disk before the first iteration.
```python
from datasets import load_dataset
# Note this will take very long time to download and preprocess
# you can try small subset for testing purpose
ds = load_dataset('espnet/yodas', 'en000')
print(next(iter(ds['train'])))
```
**streaming mode**: most of the files will be streamed instead of downloaded to your local device. This mode can be used to inspect the dataset quickly.
```python
from datasets import load_dataset
# this streaming loading will finish quickly
ds = load_dataset('espnet/yodas', 'en000', streaming=True)
print(next(iter(ds['train'])))
# {'id': '9774', 'utt_id': 'YoRjzEnRcqu-00000-00000716-00000819', 'audio': {'path': None, 'array': array([-0.009552 , -0.01086426, -0.012146 , ..., -0.01992798,
#  -0.01885986, -0.01074219]), 'sampling_rate': 16000}, 'text': 'There is a saying'}
```
## Subsets/Shards
There are 149 languages in this dataset; each language is split into at least one shard to simplify our processing and uploading. The raw data of each shard is at most 500 GB.
Statistics of each shard can be found in the last section.
We distinguish the manual-caption and automatic-caption subsets by the first digit of each shard's name: the first digit is 0 if the shard contains manual captions and 1 if it contains automatic captions.
For example, `en000` to `en005` are the English shards containing manual captions, and `en100` to `en127` contain the automatic captions.
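A small illustrative helper based on this naming convention (not part of the dataset itself):
```python
# First digit after the language code: 0 = manual captions, 1 = automatic captions.
manual_en = [f"en{i:03d}" for i in range(6)]    # en000 .. en005 (manual)
auto_en = [f"en1{i:02d}" for i in range(28)]    # en100 .. en127 (automatic)

def is_manual(subset_name: str) -> bool:
    # e.g. "en000" -> True, "fr103" -> False
    return subset_name[-3] == "0"
```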
## Reference
```
@inproceedings{li2023yodas,
title={Yodas: Youtube-Oriented Dataset for Audio and Speech},
author={Li, Xinjian and Takamichi, Shinnosuke and Saeki, Takaaki and Chen, William and Shiota, Sayaka and Watanabe, Shinji},
booktitle={2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)},
pages={1--8},
year={2023},
organization={IEEE}
}
```
## Contact
If you have any questions, feel free to contact us at the following email address.
During downloading, we made sure that our dataset only consists of videos with CC licenses. However, if you find your video unintentionally included in our dataset and would like it deleted, you can send a deletion request to the following email.
Remove the parentheses `()` from the following email address
`(lixinjian)(1217)@gmail.com`
## Statistics
Note that there is no overlap across different subsets; each audio can be included in the dataset at most once.
| Subset name | Hours |
|------|--------|
|aa000|0.171472|
|ab000|0.358342|
|af000|0.880497|
|ak000|0.250858|
|am000|0.924708|
|ar000|289.707|
|as000|0.548239|
|ay000|0.0342722|
|az000|3.8537|
|ba000|0.0210556|
|be000|48.1537|
|bg000|46.8375|
|bh000|0.0127111|
|bi000|0.0125556|
|bm000|0.00214722|
|bn000|27.064|
|bo000|0.746211|
|br000|0.729914|
|bs000|9.36959|
|ca000|74.1909|
|co000|0.0418639|
|cr000|0.00584167|
|cs000|167.604|
|cy000|5.20017|
|da000|27.4345|
|de000|3063.81|
|de100|4998.11|
|de101|4995.08|
|de102|955.389|
|dz000|0.06365|
|ee000|0.0411722|
|el000|126.75|
|en000|4999.73|
|en001|5032.69|
|en002|5039.9|
|en003|5001.4|
|en004|5054.66|
|en005|4027.02|
|en100|5147.07|
|en101|5123.05|
|en102|5117.68|
|en103|5127.3|
|en104|5126.33|
|en105|5097.65|
|en106|5131.47|
|en107|5135.6|
|en108|5136.84|
|en109|5112.94|
|en110|5109|
|en111|5118.69|
|en112|5122.57|
|en113|5122.31|
|en114|5112.36|
|en115|5112.27|
|en116|5123.77|
|en117|5117.31|
|en118|5117.94|
|en119|5133.05|
|en120|5127.79|
|en121|5129.08|
|en122|5130.22|
|en123|5097.56|
|en124|5116.59|
|en125|5109.76|
|en126|5136.21|
|en127|2404.89|
|eo000|12.6874|
|es000|3737.86|
|es100|5125.25|
|es101|5130.44|
|es102|5145.66|
|es103|5138.26|
|es104|5139.57|
|es105|5138.95|
|es106|2605.26|
|et000|14.4129|
|eu000|19.6356|
|fa000|42.6734|
|ff000|0.0394972|
|fi000|212.899|
|fj000|0.0167806|
|fo000|0.183244|
|fr000|2423.7|
|fr100|5074.93|
|fr101|5057.79|
|fr102|5094.14|
|fr103|3222.95|
|fy000|0.0651667|
|ga000|1.49252|
|gd000|0.01885|
|gl000|9.52575|
|gn000|0.181356|
|gu000|1.99355|
|ha000|0.102931|
|hi000|480.79|
|hi100|2.74865|
|ho000|0.0562194|
|hr000|25.9171|
|ht000|1.07494|
|hu000|181.763|
|hy000|1.64412|
|ia000|0.0856056|
|id000|1420.09|
|id100|4902.79|
|id101|3560.82|
|ie000|0.134603|
|ig000|0.086875|
|ik000|0.00436667|
|is000|5.07075|
|it000|1454.98|
|it100|4989.62|
|it101|4242.87|
|iu000|0.0584278|
|iw000|161.373|
|ja000|1094.18|
|ja100|2929.94|
|jv000|1.08701|
|ka000|26.9727|
|ki000|0.000555556|
|kk000|3.72081|
|kl000|0.00575556|
|km000|3.98273|
|kn000|2.36041|
|ko000|2774.28|
|ko100|5018.29|
|ko101|5048.49|
|ko102|5018.27|
|ko103|2587.85|
|ks000|0.0150444|
|ku000|1.93419|
|ky000|14.3917|
|la000|7.26088|
|lb000|0.1115|
|lg000|0.00386111|
|ln000|0.188739|
|lo000|0.230986|
|lt000|17.6507|
|lv000|2.47671|
|mg000|0.169653|
|mi000|1.10089|
|mk000|5.54236|
|ml000|13.2386|
|mn000|2.0232|
|mr000|7.11602|
|ms000|28.0219|
|my000|2.35663|
|na000|0.0397056|
|nd000|0.00111111|
|ne000|2.34936|
|nl000|413.044|
|nl100|2490.13|
|no000|129.183|
|nv000|0.00319444|
|oc000|0.166108|
|om000|0.148478|
|or000|0.421436|
|pa000|1.58188|
|pl000|757.986|
|ps000|0.9871|
|pt000|1631.44|
|pt100|5044.57|
|pt101|5038.33|
|pt102|5041.59|
|pt103|3553.28|
|qu000|0.748772|
|rm000|0.192933|
|rn000|0.00401111|
|ro000|99.9175|
|ru000|4968.37|
|ru001|627.679|
|ru100|5098.3|
|ru101|5098|
|ru102|5119.43|
|ru103|5107.29|
|ru104|5121.73|
|ru105|5088.05|
|ru106|3393.44|
|rw000|0.640825|
|sa000|0.354139|
|sc000|0.00801111|
|sd000|0.0768722|
|sg000|0.000472222|
|sh000|0.250914|
|si000|4.2634|
|sk000|30.0155|
|sl000|22.9366|
|sm000|0.102333|
|sn000|0.0134722|
|so000|3.36819|
|sq000|3.48276|
|sr000|15.2849|
|st000|0.00324167|
|su000|0.0404639|
|sv000|127.411|
|sw000|1.93409|
|ta000|59.4805|
|te000|5.66794|
|tg000|0.272386|
|th000|497.14|
|th100|1.87429|
|ti000|0.343897|
|tk000|0.0651806|
|tn000|0.112181|
|to000|0.000555556|
|tr000|588.698|
|tr100|4067.68|
|ts000|0.00111111|
|tt000|0.0441194|
|ug000|0.0905|
|uk000|396.598|
|uk100|450.411|
|ur000|22.4373|
|uz000|5.29325|
|ve000|0.00355278|
|vi000|779.854|
|vi100|4963.77|
|vi101|4239.37|
|vo000|0.209436|
|wo000|0.0801528|
|xh000|0.126628|
|yi000|0.0810111|
|yo000|0.322206|
|zh000|299.368|
|zu000|0.139931|
|
LDJnr/Capybara | LDJnr | 2024-06-07T20:15:36Z | 529 | 237 | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"Physics",
"Biology",
"Math",
"Chemistry",
"Culture",
"Logic",
"Roleplay"
] | [
"conversational",
"question-answering",
"text-generation"
] | 2023-12-16T05:45:37Z | null | ---
license: apache-2.0
task_categories:
- conversational
- question-answering
- text-generation
language:
- en
tags:
- Physics
- Biology
- Math
- Chemistry
- Culture
- Logic
- Roleplay
pretty_name: LessWrong-Amplify-Instruct
size_categories:
- 10K<n<100K
---
## This is the Official Capybara dataset. Over 10,000 multi-turn examples.
Capybara is the culmination of insights derived from synthesis techniques like Evol-instruct (used for WizardLM), Alpaca, Orca, Vicuna, Lamini, FLASK and others.
The single-turn seeds used to initiate the Amplify-Instruct synthesis of conversations are mostly based on datasets that I've personally vetted extensively and that are often highly regarded for their diversity and demonstration of logical robustness and prose, such as Airoboros, Know logic, EverythingLM, and GPTeacher, as well as entirely new seed instructions derived from different sources, including certain in-house multi-turn datasets like Dove and Verified-Camel (a successor to Puffin).
The multi-turn synthetic conversation generation method is what I'm calling Amplify-Instruct, and the first resulting dataset using this method is called Capybara.
This dataset has a strong focus on information diversity across a wide range of domains and on multi-turn conversations that strongly emphasize reasoning, logic and extrapolation about a wide range of subjects. It also includes many great examples of conversations delving into obscure sub-topics and rabbit holes across pop culture and STEM, while maintaining natural prose.
While performing great in its current state, the current dataset used for fine-tuning is entirely contained within 20K training examples, which is roughly 10 times smaller than many similarly performing datasets. This is significant for the scaling implications once I decide to scale the use of Amplify-Instruct to significantly more examples.
- Most tokens contained in this dataset are newly synthesized and did not previously exist online.
- This leverages the Amplify-Instruct method (paper coming soon) to grow thousands of high-quality single-turn seeds into advanced and in-depth multi-turn conversations.
- The average conversation is over 1,000 tokens long with 3 or more turns per example (most instruction/chat datasets on HF for fine-tuning contain only single-turn examples).
- Each conversation is optimized to amplify the natural raw knowledge capabilities of the model, as well as to delve deep into obscure and advanced topics.
- Aggressively filtered to remove any and all possible examples of overt moralizing/alignment, and common undesirable behaviours such as "as an AI language model" and "September 2021" and "I don't have personal beliefs"
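For convenience, a minimal loading sketch (the `train` split name is an assumption based on the standard Hugging Face layout and is not documented on this card):
```python
from datasets import load_dataset

# "train" is assumed as the split name; adjust if the hosted JSON uses a different layout.
capybara = load_dataset("LDJnr/Capybara", split="train")
print(capybara[0])
```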
## Benchmarks.
- Resulting benchmarks are available on HF Leaderboard, and other benchmarks done as well such as AGIEval, Bigbench and GPT4All.
- (The only Capybara model available on all of these benchmarks including HF leaderboard is Capybara V1, trained on Llama-2)
- The below benchmarks are compared against fine-tunes also done on Llama-2.


## Quality filtering and cleaning.
- Extensive measures were done to filter out any conversations that contained even a single instance of overt AI moralizing/alignment, such as "As an AI language model" and common undesirable behaviours such as conversations that include "September 2021" and "I don't have personal beliefs" and other phrases I've found to be highly correlated with undesirable responses and conversation paths.
## Thank you to those of you that have indirectly contributed!
While most of the tokens within Capybara are newly synthesized and part of datasets like Puffin/Dove, we would like to credit the single-turn datasets we leveraged as seeds, which were used to generate the multi-turn data.
The datasets shown in green below are datasets that we sampled from to curate seeds that are used during Amplify-Instruct synthesis for this project, however, most of the tokens in capybara within those given sections are novel tokens not present in any of the seed datasets.
Datasets in Blue are in-house curations that previously existed prior to Capybara, and were now used as seeds for Capybara.

## Dataset contamination.
We have checked the Capybara dataset for contamination against several of the most popular benchmarks and can confirm that no contamination was found besides MT-bench, which is now cleaned out.
We leveraged MinHash to check for 100%, 99%, 98% and 97% similarity matches between our data and the questions and answers in benchmarks; we found no exact matches, nor did we find any matches down to the 97% similarity level.
The following are benchmarks we checked for contamination against our dataset:
- HumanEval
- AGIEval
- TruthfulQA
- MMLU
- GPT4All
*Newly cleaned out as of 12/15/2023 - MT-bench
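For reference, a hedged sketch of what such a MinHash-based contamination check might look like using the `datasketch` library (the actual pipeline used for Capybara is not published; `benchmark_questions` and `capybara_texts` are hypothetical placeholders):
```python
from datasketch import MinHash, MinHashLSH

def minhash(text: str, num_perm: int = 128) -> MinHash:
    # Token-level MinHash signature of a piece of text.
    m = MinHash(num_perm=num_perm)
    for token in text.lower().split():
        m.update(token.encode("utf-8"))
    return m

benchmark_questions = ["..."]  # hypothetical: benchmark question/answer strings
capybara_texts = ["..."]       # hypothetical: Capybara conversation texts

# Index benchmark text, then flag Capybara examples whose estimated Jaccard
# similarity exceeds the threshold (0.97 mirrors the lowest level checked above).
lsh = MinHashLSH(threshold=0.97, num_perm=128)
for i, question in enumerate(benchmark_questions):
    lsh.insert(f"bench-{i}", minhash(question))

contaminated = [text for text in capybara_texts if lsh.query(minhash(text))]
```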
## Credits:
During the curation process, there can be some relatively arduous steps when it comes to actually executing on the best experiments or concepts for how to filter examples out.
Luckily, there are folks over at Nous Research who helped expedite these processes. A big thank you to J-Supha specifically for making these types of significant contributions.
## Example Outputs from the Llama-2 7B model trained on this dataset:



## Future Plans & How you can help
This is a relatively early build amongst the grand plans for the future of what I plan to work on!
In the near future we plan on leveraging the help of domain-specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from training curations of different types of datasets.
If you have at least a bachelor's degree in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on Discord!
Citation:
```
@article{daniele2023amplify-instruct,
title={Amplify-Instruct: Synthetically Generated Diverse Multi-turn Conversations for efficient LLM Training.},
author={Daniele, Luigi and Suphavadeeprasit},
journal={arXiv preprint arXiv:(coming soon)},
url={https://huggingface.co/datasets/LDJnr/Capybara},
year={2023}
}
``` |
Lin-Chen/ShareGPT4V | Lin-Chen | 2024-06-06T13:52:04Z | 1,098 | 285 | [
"task_categories:visual-question-answering",
"task_categories:question-answering",
"language:en",
"license:cc-by-nc-4.0",
"size_categories:1M<n<10M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2311.12793",
"region:us"
] | [
"visual-question-answering",
"question-answering"
] | 2023-11-20T09:29:12Z | null | ---
license: cc-by-nc-4.0
task_categories:
- visual-question-answering
- question-answering
language:
- en
pretty_name: ShareGPT4V Captions 1.2M Dataset Card
size_categories:
- 1M<n
configs:
- config_name: ShareGPT4V
data_files: sharegpt4v_instruct_gpt4-vision_cap100k.json
- config_name: ShareGPT4V-PT
data_files: share-captioner_coco_lcs_sam_1246k_1107.json
---
# News
**[2024/5/8]** We released **[ShareGPT4Video](https://sharegpt4video.github.io/)**, a large-scale video-caption dataset, with **40K** captions annotated by GPT4V and **4.8M** captions annotated by our ShareCaptioner-Video. The corresponding videos total **300** hours and **3,000** hours, respectively!
# ShareGPT4V 1.2M Dataset Card
## Dataset details
**Dataset type:**
ShareGPT4V Captions 1.2M is a set of GPT4-Vision-powered multi-modal caption data.
It is constructed to enhance modality alignment and fine-grained visual concept perception in Large Multi-Modal Models (LMMs) during both the pre-training and supervised fine-tuning stages. This advancement aims to bring LMMs towards GPT4-Vision capabilities.
* sharegpt4v_instruct_gpt4-vision_cap100k.json is generated by GPT4-Vision (ShareGPT4V).
* share-captioner_coco_lcs_sam_1246k_1107.json is generated by our Share-Captioner trained on GPT4-Vision-generated data (ShareGPT4V-PT).
* sharegpt4v_mix665k_cap23k_coco-ap9k_lcs3k_sam9k_div2k.json is curated from sharegpt4v_instruct_gpt4-vision_cap100k.json for the supervised fine-tuning stage.
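As a hedged illustration (config names are taken from the YAML header of this card; the `train` split name is assumed from the single-file data layout), the two caption sets can be loaded as follows:
```python
from datasets import load_dataset

# Config names come from this card's YAML header; the "train" split name is assumed.
sharegpt4v = load_dataset("Lin-Chen/ShareGPT4V", "ShareGPT4V", split="train")
sharegpt4v_pt = load_dataset("Lin-Chen/ShareGPT4V", "ShareGPT4V-PT", split="train")
print(sharegpt4v[0].keys())
```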
**Dataset date:**
ShareGPT4V Captions 1.2M was collected in 11.07 2023.
**Paper or resources for more information:**
[[Project](https://ShareGPT4V.github.io/)] [[Paper](https://huggingface.co/papers/2311.12793)] [[Code](https://github.com/ShareGPT4Omni/ShareGPT4V)]
**License:**
Attribution-NonCommercial 4.0 International
Use of the dataset should abide by the policy of OpenAI: https://openai.com/policies/terms-of-use
## Intended use
**Primary intended uses:**
The primary use of ShareGPT4V Captions 1.2M is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of this dataset are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
|
osunlp/Multimodal-Mind2Web | osunlp | 2024-06-05T05:12:21Z | 3,318 | 68 | [
"language:en",
"license:openrail",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2401.01614",
"region:us",
"web agent",
"multimodal"
] | [] | 2024-03-18T03:32:11Z | 2 | ---
license: openrail
language:
- en
tags:
- web agent
- multimodal
dataset_info:
features:
- name: action_uid
dtype: string
- name: raw_html
dtype: string
- name: cleaned_html
dtype: string
- name: operation
dtype: string
- name: pos_candidates
sequence: string
- name: neg_candidates
sequence: string
- name: website
dtype: string
- name: domain
dtype: string
- name: subdomain
dtype: string
- name: annotation_id
dtype: string
- name: confirmed_task
dtype: string
- name: screenshot
dtype: image
- name: action_reprs
sequence: string
- name: target_action_index
dtype: string
- name: target_action_reprs
dtype: string
splits:
- name: test_website
num_bytes: 1589513606.713
num_examples: 1019
- name: test_task
num_bytes: 2004628575.972
num_examples: 1339
- name: test_domain
num_bytes: 5128899015.440001
num_examples: 4060
- name: train
num_bytes: 13439470200.25
num_examples: 7775
download_size: 4014045168
dataset_size: 22162511398.375
---
## Dataset Description
- **Homepage:** https://osu-nlp-group.github.io/SeeAct/
- **Repository:** https://github.com/OSU-NLP-Group/SeeAct
- **Paper:** https://arxiv.org/abs/2401.01614
- **Point of Contact:** [Boyuan Zheng](mailto:[email protected])
### Dataset Summary
Multimodal-Mind2Web is the multimodal version of [Mind2Web](https://osu-nlp-group.github.io/Mind2Web/), a dataset for developing and evaluating generalist agents
for the web that can follow language instructions to complete complex tasks on any website. In this dataset, we align each HTML document with
its corresponding webpage screenshot image from the Mind2Web raw dump. This multimodal version addresses the inconvenience of loading images from the ~300 GB Mind2Web raw dump.
## Dataset Structure
### Data Splits
- train: 7775 actions from 1009 tasks.
- test_task: 1339 actions from 177 tasks. Tasks from the same website are seen during training.
- test_website: 1019 actions from 142 tasks. Websites are not seen during training.
- test_domain: 4060 actions from 694 tasks. Entire domains are not seen during training.
The **_train_** set may include some screenshot images that were not properly rendered, caused by rendering issues during Mind2Web annotation. The three **_test splits (test_task, test_website, test_domain)_** have undergone human verification to confirm element visibility and correct rendering for action prediction.
### Data Fields
Each line in the dataset is a single action, consisting of the screenshot image, HTML text, and the other fields required for action prediction, for convenient inference.
- "annotation_id" (str): unique id for each task
- "website" (str): website name
- "domain" (str): website domain
- "subdomain" (str): website subdomain
- "confirmed_task" (str): task description
- **"screenshot" (str): path to the webpage screenshot image corresponding to the HTML.**
- "action_uid" (str): unique id for each action (step)
- "raw_html" (str): raw html of the page before the action is performed
- "cleaned_html" (str): cleaned html of the page before the action is performed
- "operation" (dict): operation to perform
- "op" (str): operation type, one of CLICK, TYPE, SELECT
- "original_op" (str): original operation type; contains the additional HOVER and ENTER types, which are mapped to CLICK; not used
- "value" (str): optional value for the operation, e.g., text to type, option to select
- "pos_candidates" (list[dict]): ground truth elements. Here we only include positive elements that exist in "cleaned_html" after our preprocessing, so "pos_candidates" might be empty. The original labeled element can always be found in the "raw_html".
- "tag" (str): tag of the element
- "is_original_target" (bool): whether the element is the original target labeled by the annotator
- "is_top_level_target" (bool): whether the element is a top-level target found by our algorithm. Please see the paper for more details.
- "backend_node_id" (str): unique id for the element
- "attributes" (str): serialized attributes of the element, use `json.loads` to convert back to dict
- "neg_candidates" (list[dict]): other candidate elements in the page after preprocessing, has similar structure as "pos_candidates"
- "action_reprs" (list[str]): human readable string representation of the action sequence
- "target_action_index" (str): the index of the target action in the action sequence
- "target_action_reprs" (str): human readable string representation of the target action
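A minimal inspection sketch (streaming avoids downloading the full split up front; candidate entries are assumed here to be JSON strings, consistent with the schema above and the `json.loads` note on `attributes`):
```python
from datasets import load_dataset
import json

ds = load_dataset("osunlp/Multimodal-Mind2Web", split="test_task", streaming=True)
example = next(iter(ds))

print(example["confirmed_task"])
print(example["target_action_reprs"])
print(example["screenshot"].size)  # PIL image of the page before the action

# Candidates are stored as serialized strings; "attributes" is itself a JSON string.
if example["pos_candidates"]:
    candidate = json.loads(example["pos_candidates"][0])
    attributes = json.loads(candidate["attributes"])
    print(candidate["tag"], list(attributes.keys()))
```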
### Disclaimer
This dataset was collected and released solely for research purposes, with the goal of making the web more accessible via language technologies. The authors are strongly against any potential harmful use of the data or technology to any party.
### Citation Information
```
@article{zheng2024seeact,
title={GPT-4V(ision) is a Generalist Web Agent, if Grounded},
author={Boyuan Zheng and Boyu Gou and Jihyung Kil and Huan Sun and Yu Su},
booktitle={Forty-first International Conference on Machine Learning},
year={2024},
url={https://openreview.net/forum?id=piecKJ2DlB},
}
@inproceedings{deng2023mindweb,
title={Mind2Web: Towards a Generalist Agent for the Web},
author={Xiang Deng and Yu Gu and Boyuan Zheng and Shijie Chen and Samuel Stevens and Boshi Wang and Huan Sun and Yu Su},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=kiYqbO3wqw}
}
``` |