| datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card |
---|---|---|---|---|---|---|---|---|---|
open-llm-leaderboard-old/requests | open-llm-leaderboard-old | 2024-06-19T21:36:08Z | 22,162 | 22 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | [] | 2023-06-19T15:15:07Z | null | ---
license: apache-2.0
---

# Open LLM Leaderboard Requests
This repository contains the request files of models that have been submitted to the Open LLM Leaderboard.
You can take a look at the current status of your model by finding its request file in this dataset. If your model failed, feel free to open an issue on the Open LLM Leaderboard! (We monitor issues in this repository less often.)
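For a quick programmatic lookup, here is a minimal sketch using the `huggingface_hub` API; the per-model file layout is an assumption, so adjust the prefix to match what you actually see in the repository.
```python
# Sketch: list files in the requests dataset and filter for your model.
# Assumes request files live under "<org>/<model>..." paths; verify in the repo.
from huggingface_hub import HfApi

api = HfApi()
files = api.list_repo_files("open-llm-leaderboard-old/requests", repo_type="dataset")
my_model = "my-org/my-model"  # hypothetical model id
print([f for f in files if f.startswith(my_model)])
```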
## Evaluation Methodology
The evaluation process involves running your models against several benchmarks from the Eleuther AI Harness, a unified framework for measuring the effectiveness of generative language models. Below is a brief overview of each benchmark:
1. AI2 Reasoning Challenge (ARC) - Grade-School Science Questions (25-shot)
2. HellaSwag - Commonsense Inference (10-shot)
3. MMLU - Massive Multi-Task Language Understanding, knowledge on 57 domains (5-shot)
4. TruthfulQA - Propensity to Produce Falsehoods (0-shot)
5. Winogrande - Adversarial Winograd Schema Challenge (5-shot)
6. GSM8k - Grade School Math Word Problems requiring multi-step mathematical reasoning (5-shot)
Together, these benchmarks provide an assessment of a model's capabilities in terms of knowledge, reasoning, and some math, in various scenarios.
## Accessing Your Results
To view the numerical results of your evaluated models, visit the dedicated Hugging Face Dataset at https://huggingface.co/datasets/open-llm-leaderboard/results. This dataset offers a thorough breakdown of each model's performance on the individual benchmarks.
## Exploring Model Details
For further insights into the inputs and outputs of specific models, locate the "📄" emoji associated with the desired model within this repository. Clicking on this icon will direct you to the respective GitHub page containing detailed information about the model's behavior during the evaluation process.
|
toxigen/toxigen-data | toxigen | 2024-06-17T16:35:29Z | 4,574 | 55 | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"source_datasets:original",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2203.09509",
"region:us"
] | [
"text-classification"
] | 2022-05-01T15:49:02Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
pretty_name: ToxiGen
dataset_info:
- config_name: annotated
features:
- name: text
dtype: string
- name: target_group
dtype: string
- name: factual?
dtype: string
- name: ingroup_effect
dtype: string
- name: lewd
dtype: string
- name: framing
dtype: string
- name: predicted_group
dtype: string
- name: stereotyping
dtype: string
- name: intent
dtype: float64
- name: toxicity_ai
dtype: float64
- name: toxicity_human
dtype: float64
- name: predicted_author
dtype: string
- name: actual_method
dtype: string
splits:
- name: test
num_bytes: 364518
num_examples: 940
- name: train
num_bytes: 3238381
num_examples: 8960
download_size: 768996
dataset_size: 3602899
- config_name: annotations
features:
- name: Input.prompt
dtype: string
- name: Input.text
dtype: string
- name: Input.time
dtype: string
- name: Input.generation_method
dtype: string
- name: Input.prompt_label
dtype: string
- name: Input.target_group
dtype: string
- name: Input.binary_prompt_label
dtype: int64
- name: Answer.annotatorAge
dtype: string
- name: Answer.annotatorGender
dtype: string
- name: Answer.annotatorMinority
dtype: string
- name: Answer.annotatorPolitics.1
dtype: bool
- name: Answer.annotatorPolitics.2
dtype: bool
- name: Answer.annotatorPolitics.3
dtype: bool
- name: Answer.annotatorPolitics.4
dtype: bool
- name: Answer.annotatorPolitics.5
dtype: bool
- name: Answer.annotatorRace
dtype: string
- name: Answer.factSelect
dtype: string
- name: Answer.framingQ
dtype: string
- name: Answer.inGroup.on
dtype: bool
- name: Answer.ingroup.1
dtype: bool
- name: Answer.ingroup.2
dtype: bool
- name: Answer.ingroup.3
dtype: bool
- name: Answer.intent.1
dtype: bool
- name: Answer.intent.2
dtype: bool
- name: Answer.intent.3
dtype: bool
- name: Answer.intent.4
dtype: bool
- name: Answer.intent.5
dtype: bool
- name: Answer.lewd.1
dtype: bool
- name: Answer.lewd.2
dtype: bool
- name: Answer.lewd.3
dtype: bool
- name: Answer.refTarget
dtype: string
- name: Answer.stateFrame
dtype: string
- name: Answer.stateGroup
dtype: string
- name: Answer.stereo.1
dtype: bool
- name: Answer.stereo.2
dtype: bool
- name: Answer.stereo.3
dtype: bool
- name: Answer.toAI.1
dtype: bool
- name: Answer.toAI.2
dtype: bool
- name: Answer.toAI.3
dtype: bool
- name: Answer.toAI.4
dtype: bool
- name: Answer.toAI.5
dtype: bool
- name: Answer.toPER.1
dtype: bool
- name: Answer.toPER.2
dtype: bool
- name: Answer.toPER.3
dtype: bool
- name: Answer.toPER.4
dtype: bool
- name: Answer.toPER.5
dtype: bool
- name: Answer.writer.1
dtype: bool
- name: Answer.writer.2
dtype: bool
- name: HashedWorkerId
dtype: int64
splits:
- name: train
num_bytes: 21933185
num_examples: 27450
download_size: 3350653
dataset_size: 21933185
- config_name: prompts
features:
- name: text
dtype: string
splits:
- name: hate_trans_1k
num_bytes: 585554
num_examples: 1000
- name: neutral_black_1k
num_bytes: 857769
num_examples: 1000
- name: hate_native_american_1k
num_bytes: 480000
num_examples: 1000
- name: neutral_immigrant_1k
num_bytes: 342243
num_examples: 1000
- name: hate_middle_east_1k
num_bytes: 426551
num_examples: 1000
- name: neutral_lgbtq_1k
num_bytes: 914319
num_examples: 1000
- name: neutral_women_1k
num_bytes: 394963
num_examples: 1000
- name: neutral_chinese_1k
num_bytes: 412062
num_examples: 1000
- name: hate_latino_1k
num_bytes: 708000
num_examples: 1000
- name: hate_bisexual_1k
num_bytes: 447794
num_examples: 1000
- name: hate_mexican_1k
num_bytes: 675444
num_examples: 1000
- name: hate_asian_1k
num_bytes: 503093
num_examples: 1000
- name: neutral_mental_disability_1k
num_bytes: 556905
num_examples: 1000
- name: neutral_mexican_1k
num_bytes: 483603
num_examples: 1000
- name: hate_mental_disability_1k
num_bytes: 480620
num_examples: 1000
- name: neutral_bisexual_1k
num_bytes: 915612
num_examples: 1000
- name: neutral_latino_1k
num_bytes: 470000
num_examples: 1000
- name: hate_chinese_1k
num_bytes: 384934
num_examples: 1000
- name: neutral_jewish_1k
num_bytes: 649674
num_examples: 1000
- name: hate_muslim_1k
num_bytes: 425760
num_examples: 1000
- name: neutral_asian_1k
num_bytes: 615895
num_examples: 1000
- name: hate_physical_disability_1k
num_bytes: 413643
num_examples: 1000
- name: hate_jewish_1k
num_bytes: 573538
num_examples: 1000
- name: neutral_muslim_1k
num_bytes: 491659
num_examples: 1000
- name: hate_immigrant_1k
num_bytes: 285309
num_examples: 1000
- name: hate_black_1k
num_bytes: 745295
num_examples: 1000
- name: hate_lgbtq_1k
num_bytes: 577075
num_examples: 1000
- name: hate_women_1k
num_bytes: 389583
num_examples: 1000
- name: neutral_middle_east_1k
num_bytes: 415319
num_examples: 1000
- name: neutral_native_american_1k
num_bytes: 586993
num_examples: 1000
- name: neutral_physical_disability_1k
num_bytes: 458497
num_examples: 1000
download_size: 1698170
dataset_size: 16667706
- config_name: train
features:
- name: prompt
dtype: string
- name: generation
dtype: string
- name: generation_method
dtype: string
- name: group
dtype: string
- name: prompt_label
dtype: int64
- name: roberta_prediction
dtype: float64
splits:
- name: train
num_bytes: 169400442
num_examples: 250951
download_size: 18784380
dataset_size: 169400442
configs:
- config_name: annotated
default: true
data_files:
- split: test
path: annotated/test-*
- split: train
path: annotated/train-*
- config_name: annotations
data_files:
- split: train
path: annotations/train-*
- config_name: prompts
data_files:
- split: hate_trans_1k
path: prompts/hate_trans_1k-*
- split: neutral_black_1k
path: prompts/neutral_black_1k-*
- split: hate_native_american_1k
path: prompts/hate_native_american_1k-*
- split: neutral_immigrant_1k
path: prompts/neutral_immigrant_1k-*
- split: hate_middle_east_1k
path: prompts/hate_middle_east_1k-*
- split: neutral_lgbtq_1k
path: prompts/neutral_lgbtq_1k-*
- split: neutral_women_1k
path: prompts/neutral_women_1k-*
- split: neutral_chinese_1k
path: prompts/neutral_chinese_1k-*
- split: hate_latino_1k
path: prompts/hate_latino_1k-*
- split: hate_bisexual_1k
path: prompts/hate_bisexual_1k-*
- split: hate_mexican_1k
path: prompts/hate_mexican_1k-*
- split: hate_asian_1k
path: prompts/hate_asian_1k-*
- split: neutral_mental_disability_1k
path: prompts/neutral_mental_disability_1k-*
- split: neutral_mexican_1k
path: prompts/neutral_mexican_1k-*
- split: hate_mental_disability_1k
path: prompts/hate_mental_disability_1k-*
- split: neutral_bisexual_1k
path: prompts/neutral_bisexual_1k-*
- split: neutral_latino_1k
path: prompts/neutral_latino_1k-*
- split: hate_chinese_1k
path: prompts/hate_chinese_1k-*
- split: neutral_jewish_1k
path: prompts/neutral_jewish_1k-*
- split: hate_muslim_1k
path: prompts/hate_muslim_1k-*
- split: neutral_asian_1k
path: prompts/neutral_asian_1k-*
- split: hate_physical_disability_1k
path: prompts/hate_physical_disability_1k-*
- split: hate_jewish_1k
path: prompts/hate_jewish_1k-*
- split: neutral_muslim_1k
path: prompts/neutral_muslim_1k-*
- split: hate_immigrant_1k
path: prompts/hate_immigrant_1k-*
- split: hate_black_1k
path: prompts/hate_black_1k-*
- split: hate_lgbtq_1k
path: prompts/hate_lgbtq_1k-*
- split: hate_women_1k
path: prompts/hate_women_1k-*
- split: neutral_middle_east_1k
path: prompts/neutral_middle_east_1k-*
- split: neutral_native_american_1k
path: prompts/neutral_native_american_1k-*
- split: neutral_physical_disability_1k
path: prompts/neutral_physical_disability_1k-*
- config_name: train
data_files:
- split: train
path: train/train-*
---
# Dataset Card for ToxiGen
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-instances)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
## Sign up for Data Access
To access ToxiGen, first fill out [this form](https://forms.office.com/r/r6VXX8f8vh).
## Dataset Description
- **Repository:** https://github.com/microsoft/toxigen
- **Paper:** https://arxiv.org/abs/2203.09509
- **Point of Contact #1:** [Tom Hartvigsen]([email protected])
- **Point of Contact #2:** [Saadia Gabriel]([email protected])
### Dataset Summary
This dataset is for implicit hate speech detection. All instances were generated using GPT-3 and the methods described in [our paper](https://arxiv.org/abs/2203.09509).
### Languages
All text is written in English.
## Dataset Structure
### Data Fields
We release TOXIGEN as a dataframe with the following fields:
- **prompt** is the prompt used for **generation**.
- **generation** is the TOXIGEN generated text.
- **generation_method** denotes whether or not ALICE was used to generate the corresponding generation: if this value is ALICE, then ALICE was used; if it is TopK, then ALICE was not used.
- **prompt_label** is the binary value indicating whether or not the prompt is toxic (1 is toxic, 0 is benign).
- **group** indicates the target group of the prompt.
- **roberta_prediction** is the probability predicted by our corresponding RoBERTa model for each instance.
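As a usage sketch (not from the card itself): ToxiGen is gated, so you must first request access via the form above and authenticate (e.g. with `huggingface-cli login`). The config name and field values below follow the schema documented above.
```python
# Sketch: load the machine-generated "train" config and keep ALICE generations.
from datasets import load_dataset

ds = load_dataset("toxigen/toxigen-data", name="train", split="train")
alice = ds.filter(lambda ex: ex["generation_method"] == "ALICE")
print(len(alice), "ALICE-generated examples")
```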
### Citation Information
```bibtex
@inproceedings{hartvigsen2022toxigen,
title={ToxiGen: A Large-Scale Machine-Generated Dataset for Implicit and Adversarial Hate Speech Detection},
author={Hartvigsen, Thomas and Gabriel, Saadia and Palangi, Hamid and Sap, Maarten and Ray, Dipankar and Kamar, Ece},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},
year={2022}
}
```
|
togethercomputer/RedPajama-Data-1T | togethercomputer | 2024-06-17T11:36:03Z | 1,926 | 1,084 | [
"task_categories:text-generation",
"language:en",
"size_categories:1M<n<10M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"text-generation"
] | 2023-04-17T06:28:35Z | null | ---
task_categories:
- text-generation
language:
- en
pretty_name: Red Pajama 1T
---
### Getting Started
The dataset consists of 2084 jsonl files.
You can download the dataset using HuggingFace:
```python
from datasets import load_dataset
ds = load_dataset("togethercomputer/RedPajama-Data-1T")
```
Or you can directly download the files using the following command:
```
wget 'https://data.together.xyz/redpajama-data-1T/v1.0.0/urls.txt'
while read line; do
dload_loc=${line#https://data.together.xyz/redpajama-data-1T/v1.0.0/}
mkdir -p "$(dirname "$dload_loc")"
wget "$line" -O "$dload_loc"
done < urls.txt
```
After downloading the files, you can load the dataset from disk by setting the `RED_PAJAMA_DATA_DIR` environment variable to the directory containing the files:
```python
import os
from datasets import load_dataset
os.environ["RED_PAJAMA_DATA_DIR"] = "/path/to/download"
ds = load_dataset("togethercomputer/RedPajama-Data-1T")
```
A smaller 1B-token sample of the dataset can be found [here](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T-Sample).
A full set of scripts to recreate the dataset from scratch can be found [here](https://github.com/togethercomputer/RedPajama-Data).
### Dataset Summary
RedPajama is a clean-room, fully open-source implementation of the LLaMa dataset.
| Dataset | Token Count |
|---------------|-------------|
| Commoncrawl | 878 Billion |
| C4 | 175 Billion |
| GitHub | 59 Billion |
| ArXiv | 28 Billion |
| Wikipedia | 24 Billion |
| StackExchange | 20 Billion |
| Total | 1.2 Trillion |
### Languages
Primarily English, though the Wikipedia slice contains multiple languages.
## Dataset Structure
The dataset structure is as follows:
```json
{
"text": ...,
"meta": {"url": "...", "timestamp": "...", "source": "...", "language": "...", ...},
"red_pajama_subset": "common_crawl" | "c4" | "github" | "arxiv" | "wikipedia" | "stackexchange"
}
```
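Since every record carries a `red_pajama_subset` field, a single subset can be selected on the fly. A minimal sketch, assuming a `train` split and using streaming to avoid a full download:
```python
# Sketch: stream the dataset and keep only the arxiv subset.
from datasets import load_dataset

ds = load_dataset("togethercomputer/RedPajama-Data-1T", split="train", streaming=True)
arxiv_only = ds.filter(lambda ex: ex["red_pajama_subset"] == "arxiv")
for example in arxiv_only.take(3):
    print(example["meta"])
```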
## Dataset Creation
This dataset was created to follow the LLaMa paper as closely as possible to try to reproduce its recipe.
### Source Data
#### Commoncrawl
We download five dumps from Commoncrawl, and run the dumps through the official `cc_net` pipeline.
We then deduplicate on the paragraph level, and filter out low quality text using a linear classifier trained to
classify paragraphs as Wikipedia references or random Commoncrawl samples.
#### C4
C4 is downloaded from Huggingface. The only preprocessing step is to bring the data into our own format.
#### GitHub
The raw GitHub data is downloaded from Google BigQuery. We deduplicate at the file level, filter out low-quality
files, and keep only projects distributed under the MIT, BSD, or Apache license.
#### Wikipedia
We use the Wikipedia dataset available on Huggingface, which is based on the Wikipedia dump from 2023-03-20 and contains
text in 20 different languages. The dataset comes preprocessed, with hyperlinks, comments, and other
formatting boilerplate removed.
#### Gutenberg and Books3
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
<p><b>Defunct:</b> The 'book' config is defunct and no longer accessible due to reported copyright infringement for the Book3 dataset contained in this config.</p>
</div>
#### ArXiv
ArXiv data is downloaded from Amazon S3 in the `arxiv` requester-pays bucket. We only keep LaTeX source files and
remove preambles, comments, macros, and bibliographies.
#### Stackexchange
The Stack Exchange split of the dataset is downloaded from the
[Internet Archive](https://archive.org/download/stackexchange). Here we only keep the posts from the 28 largest sites,
remove HTML tags, group the posts into question-answer pairs, and order answers by their score.
### SHA256 Checksums
SHA256 checksums for the dataset files for each data source are available here:
```
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/arxiv_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/c4_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/common_crawl_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/github_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/stackexchange_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/wikipedia_SHA256SUMS.txt
```
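A minimal Python sketch for verifying downloads, assuming the checksum files use the standard `sha256sum` format of `<hex digest>  <relative path>` with paths relative to your download directory:
```python
# Sketch: verify downloaded files against a SHA256SUMS listing.
import hashlib

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

with open("arxiv_SHA256SUMS.txt") as sums:
    for line in sums:
        if not line.strip():
            continue
        digest, name = line.split(maxsplit=1)
        name = name.strip()
        print(name, "OK" if sha256(name) == digest else "MISMATCH")
```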
To cite RedPajama, please use:
```
@software{together2023redpajama,
author = {Together Computer},
title = {RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
month = apr,
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
### License
Please refer to the licenses of the data subsets you use.
* [Common Crawl Foundation Terms of Use](https://commoncrawl.org/terms-of-use/full/)
* [C4 license](https://huggingface.co/datasets/allenai/c4#license)
* GitHub was limited to MIT, BSD, or Apache licenses only
* [ArXiv Terms of Use](https://info.arxiv.org/help/api/tou.html)
* [Wikipedia License](https://huggingface.co/datasets/wikipedia#licensing-information)
* [StackExchange license on the Internet Archive](https://archive.org/details/stackexchange)
<!--
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
--> |
starmpcc/Asclepius-Synthetic-Clinical-Notes | starmpcc | 2024-06-14T04:16:04Z | 466 | 86 | [
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:100K<n<1M",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2309.00237",
"region:us",
"medical",
"synthetic"
] | [
"question-answering",
"summarization",
"text-generation"
] | 2023-09-01T01:47:59Z | null | ---
license: cc-by-nc-sa-4.0
task_categories:
- question-answering
- summarization
- text-generation
language:
- en
tags:
- medical
- synthetic
pretty_name: 'Asclepius: Synthetic Clinical Notes & Instruction Dataset'
size_categories:
- 100K<n<1M
---
# Asclepius: Synthetic Clinical Notes & Instruction Dataset
## Dataset Description
- **Repository:** [Github](https://github.com/starmpcc/Asclepius)
- **Paper:** https://arxiv.org/abs/2309.00237
### Dataset Summary
This dataset is the official dataset for Asclepius [(arxiv)](https://arxiv.org/abs/2309.00237).
It is composed in a Clinical Note - Question - Answer format to support building clinical LLMs.
- We first synthesized clinical notes from [PMC-Patients](https://huggingface.co/datasets/zhengyun21/PMC-Patients) case reports with GPT-3.5
- Then, we generated instruction-answer pairs for 157k synthetic discharge summaries
### Supported Tasks
- This dataset covers the 8 tasks below
- Named Entity Recognition
- Abbreviation Expansion
- Relation Extraction
- Temporal Information Extraction
- Coreference Resolution
- Paraphrasing
- Summarization
- Question Answering
### Languages
English
## Dataset Structure
### Data Instances
- `synthetic.csv`
- Clinical Note - Question - Answer pairs
### Data Fields
- `patient_id`: Unique case report id from PMC-Patients
- `patient`: Case report text
- `question`: GPT-3.5-generated instruction from the patient note. The prompt used can be found on our GitHub.
- `answer`: GPT-3.5-generated answer for the given case report and question
- `task`: Corresponding category of the question. One of the tasks listed above
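A minimal loading sketch (the split name and exact task label strings are assumptions; inspect the data to confirm):
```python
# Sketch: load the dataset and count examples per task category.
from collections import Counter
from datasets import load_dataset

ds = load_dataset("starmpcc/Asclepius-Synthetic-Clinical-Notes", split="train")
print(Counter(ds["task"]))  # tasks as listed above, e.g. Summarization
qa = ds.filter(lambda ex: ex["task"] == "Question Answering")  # assumed label string
print(qa[0]["question"])
```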
## Dataset Creation
### Source Data
[PMC-Patients](https://huggingface.co/datasets/zhengyun21/PMC-Patients)
### Annotations
We used GPT-3.5-turbo (version 0314).
You can check the prompts on our GitHub.
## Additional Information
### Models
- [Asclepius-7B](https://huggingface.co/starmpcc/Asclepius-7B)
- [Asclepius-13B](https://huggingface.co/starmpcc/Asclepius-13B)
- [Asclepius-Llama2-7B](https://huggingface.co/starmpcc/Asclepius-Llama2-7B)
- [Asclepius-Llama2-13B](https://huggingface.co/starmpcc/Asclepius-Llama2-13B)
- [Asclepius-Llama3-8B](https://huggingface.co/starmpcc/Asclepius-Llama3-8B)
- [Asclepius-Mistral-7B-v0.3](https://huggingface.co/starmpcc/Asclepius-Mistral-7B-v0.3)
### Variants
- The instruction-answer pairs generated from MIMIC-III discharge summaries and the models trained with them are now available on [Physionet](https://physionet.org/content/asclepius-r/1.0.0/)!
### Licensing Information
CC-BY-NC-SA 4.0
### Citation Information
```
@misc{kweon2023publicly,
title={Publicly Shareable Clinical Large Language Model Built on Synthetic Clinical Notes},
author={Sunjun Kweon and Junu Kim and Jiyoun Kim and Sujeong Im and Eunbyeol Cho and Seongsu Bae and Jungwoo Oh and Gyubok Lee and Jong Hak Moon and Seng Chan You and Seungjin Baek and Chang Hoon Han and Yoon Bin Jung and Yohan Jo and Edward Choi},
year={2023},
eprint={2309.00237},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
allenai/tulu-v2-sft-mixture | allenai | 2024-05-24T21:29:24Z | 521 | 123 | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:odc-by",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2305.18290",
"region:us"
] | [
"question-answering",
"conversational",
"text-generation"
] | 2023-11-13T21:56:34Z | null | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: dataset
dtype: string
- name: id
dtype: string
- name: messages
list:
- name: role
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 1239293363
num_examples: 326154
download_size: 554561769
dataset_size: 1239293363
license: odc-by
task_categories:
- question-answering
- conversational
- text-generation
language:
- en
size_categories:
- 100K<n<1M
---
<img src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu-v2/Tulu%20V2%20banner.png" alt="TuluV2 banner" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Dataset Card for Tulu V2 Mix
*Note the [ODC-BY license](https://opendatacommons.org/licenses/by/1-0/), indicating that different licenses apply to subsets of the data. This means that some portions of the dataset are non-commercial. We present the mixture as a research artifact.*
Tulu is a series of language models that are trained to act as helpful assistants.
The dataset consists of a mix of :
* [FLAN](https://github.com/google-research/FLAN/tree/main) (Apache 2.0): We use 50,000 examples sampled from FLAN v2. To emphasize CoT-style reasoning, we sample another 50,000 examples from the CoT
subset of the FLAN v2 mixture.
* [Open Assistant 1](https://huggingface.co/datasets/OpenAssistant/oasst1) (Apache 2.0): We isolate the highest-scoring paths in each conversation tree and use these samples, resulting in 7,708 examples.
* [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) (Apache 2.0 listed, no official repo found): We use all 114,046 examples from our processed ShareGPT dataset, as we found ShareGPT gave strong performance in prior work.
* [GPT4-Alpaca](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM#data-release) (CC BY-NC 4.0): We sample 20,000 samples from GPT-4 Alpaca to further include distilled GPT-4 data.
* [Code-Alpaca](https://github.com/sahil280114/codealpaca) (CC BY-NC 4.0): We use all 20,022 examples from Code Alpaca, following our prior V1 mixture, in order to improve model code abilities.
* [LIMA](https://huggingface.co/datasets/GAIR/lima) (CC BY-NC-SA): We use 1,030 examples from LIMA as an example of carefully curated data.
* [WizardLM Evol Instruct](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k) (No license provided): We subsample 30,000 examples from WizardLM, which contains distilled data of increasing diversity and complexity.
* [Open-Orca](https://huggingface.co/datasets/Open-Orca/OpenOrca) (MIT): We sample 30,000 samples generated by GPT-4 from OpenOrca, a reproduction of Orca (Mukherjee et al., 2023), which augments FLAN data with additional model-generated explanations.
* Hardcoded: A collection of prompts such as 'Tell me about yourself' such that the model generates correct outputs given inquiries about its name or developers. We wrote 14 samples and repeat each sample 10 times in the mixture, resulting in 140 total samples.
* Science: 7,544 examples from a mixture of scientific document understanding tasks—including question answering, fact-checking, summarization, and information extraction (under development, standalone release soon).
These are made by taking either just the training set of each subset, or the entire subset if no splits are present.
Tulu V2 is presented as a singular training split.
[Tulu V2 DPO 70B](https://huggingface.co/allenai/tulu-2-dpo-70b) is a fine-tuned version of Llama 2 that was trained on a mix of publicly available, synthetic and human datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290).
**Model Family:** Other models and the dataset are found in the [Tulu V2 collection](https://huggingface.co/collections/allenai/tulu-v2-suite-6551b56e743e6349aab45101).
The length distribution of the dataset can be seen below:
<img src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu-v2/length_histogram_v2.png" alt="TuluV2 histogram" width="600" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Tulu V1 Mix can be found [here](https://huggingface.co/datasets/allenai/tulu-v1).
**Note**: Some samples contain empty turns as noted in [this github issue](https://github.com/allenai/open-instruct/issues/161).
We will not remove these from this release to ensure reproducibility but you may wish to explicitly filter them out when training your own models!
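A minimal filtering sketch based on the schema above (`messages` is a list of `{role, content}` dicts):
```python
# Sketch: drop conversations that contain empty turns, as suggested above.
from datasets import load_dataset

ds = load_dataset("allenai/tulu-v2-sft-mixture", split="train")
clean = ds.filter(lambda ex: all(m["content"].strip() for m in ex["messages"]))
print(f"kept {len(clean)} of {len(ds)} examples")
```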
### Science data note
The included science data is from the following categories:
<img src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu-v2/science_data.png" alt="TuluV2 science data mix" width="600" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Note that some of the examples include an off-by-one error in the sentence indexing that had a small or negligible impact on performance.
This was found during testing and will be updated in future versions, with the detailed release of the dataset artifact itself coming in a future release.
### License
We are releasing this dataset under the terms of [ODC-BY](https://opendatacommons.org/licenses/by/1-0/). By using this, you are also bound by the [Common Crawl terms of use](https://commoncrawl.org/terms-of-use/) in respect of the content contained in the dataset.
|
ipipan/polqa | ipipan | 2024-05-24T15:35:50Z | 214 | 10 | [
"task_categories:question-answering",
"task_categories:text-retrieval",
"task_categories:text2text-generation",
"task_ids:open-domain-qa",
"task_ids:document-retrieval",
"task_ids:abstractive-qa",
"annotations_creators:expert-generated",
"language:pl",
"license:cc-by-sa-4.0",
"size_categories:10K<n<100K",
"arxiv:2212.08897",
"region:us"
] | [
"question-answering",
"text-retrieval",
"text2text-generation"
] | 2022-12-17T15:03:58Z | 1 | ---
task_categories:
- question-answering
- text-retrieval
- text2text-generation
task_ids:
- open-domain-qa
- document-retrieval
- abstractive-qa
language:
- pl
pretty_name: PolQA
size_categories:
- 10K<n<100K
annotations_creators:
- expert-generated
license: cc-by-sa-4.0
---
# Dataset Card for PolQA Dataset
## Dataset Description
- **Paper:** [Improving Question Answering Performance through Manual Annotation: Costs, Benefits and Strategies](https://arxiv.org/abs/2212.08897)
- **Point of Contact:** [Piotr Rybak](mailto:[email protected])
### Dataset Summary
PolQA is the first Polish dataset for open-domain question answering. It consists of 7,000 questions, 87,525 manually labeled evidence passages, and a corpus of over 7 million candidate passages. The dataset can be used to train both a passage retriever and an abstractive reader.
### Supported Tasks and Leaderboards
- `open-domain-qa`: The dataset can be used to train a model for open-domain question answering. Success on this task is typically measured using [metric defined during PolEval 2021](https://2021.poleval.pl/tasks/task4).
- `document-retrieval`: The dataset can be used to train a model for document retrieval. Success on this task is typically measured by [top-k retrieval accuracy](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.top_k_accuracy_score.html) or [NDCG](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.ndcg_score.html).
- `abstractive-qa`: The dataset can be used to train a model for abstractive question answering. Success on this task is typically measured using [metric defined during PolEval 2021](https://2021.poleval.pl/tasks/task4).
### Languages
The text is in Polish, as spoken by the host of the [Jeden z Dziesięciu](https://pl.wikipedia.org/wiki/Jeden_z_dziesi%C4%99ciu) TV show (questions) and [Polish Wikipedia](https://pl.wikipedia.org/) editors (passages). The BCP-47 code for Polish is pl-PL.
## Dataset Structure
### Data Instances
The main part of the dataset consists of manually annotated question-passage pairs. For each instance, there is a `question`, a passage (`passage_id`, `passage_title`, `passage_text`), and a boolean indicator if the passage is `relevant` for the given question (i.e. does it contain the answers).
For each `question` there is a list of possible `answers` formulated in natural language, in the way a Polish
speaker would answer the question. This means that the answers might
contain prepositions, be inflected, and contain punctuation. In some
cases, the answer might have multiple correct variants, e.g. numbers
written as numerals and as words, synonyms, abbreviations and their
expansions.
Additionally, we provide a classification of each question-answer pair based on the `question_formulation`, the `question_type`, and the `entity_type/entity_subtype`, according to the taxonomy proposed by
[Maciej Ogrodniczuk and Piotr Przybyła (2021)](http://nlp.ipipan.waw.pl/Bib/ogr:prz:21:poleval.pdf).
```
{
'question_id': 6,
'passage_title': 'Mumbaj',
'passage_text': 'Mumbaj lub Bombaj (marathi मुंबई, trb.: Mumbaj; ang. Mumbai; do 1995 Bombay) – stolica indyjskiego stanu Maharasztra, położona na wyspie Salsette, na Morzu Arabskim.',
'passage_wiki': 'Mumbaj lub Bombaj (mr. मुंबई, trb.: "Mumbaj"; ang. Mumbai; do 1995 Bombay) – stolica indyjskiego stanu Maharasztra, położona na wyspie Salsette, na Morzu Arabskim. Wraz z miastami satelitarnymi tworzy najludniejszą po Delhi aglomerację liczącą 23 miliony mieszkańców. Dzięki naturalnemu położeniu jest to największy port morski kraju. Znajdują się tutaj także najsilniejsze giełdy Azji Południowej: National Stock Exchange of India i Bombay Stock Exchange.',
'passage_id': '42609-0',
'duplicate': False,
'question': 'W którym państwie leży Bombaj?',
'relevant': True,
'annotated_by': 'Igor',
'answers': "['w Indiach', 'Indie']",
'question_formulation': 'QUESTION',
'question_type': 'SINGLE ENTITY',
'entity_type': 'NAMED',
'entity_subtype': 'COUNTRY',
'split': 'train',
'passage_source': 'human'
}
```
The second part of the dataset is a corpus of Polish Wikipedia (March 2022 snapshot) passages. The raw Wikipedia snapshot was parsed using [WikiExtractor](https://github.com/attardi/wikiextractor) and split into passages at the ends of the paragraphs or if the passage was longer than 500 characters.
```
{
'id': '42609-0',
'title': 'Mumbaj',
'text': 'Mumbaj lub Bombaj (mr. मुंबई, trb.: "Mumbaj"; ang. Mumbai; do 1995 Bombay) – stolica indyjskiego stanu Maharasztra, położona na wyspie Salsette, na Morzu Arabskim. Wraz z miastami satelitarnymi tworzy najludniejszą po Delhi aglomerację liczącą 23 miliony mieszkańców. Dzięki naturalnemu położeniu jest to największy port morski kraju. Znajdują się tutaj także najsilniejsze giełdy Azji Południowej: National Stock Exchange of India i Bombay Stock Exchange.'
}
```
### Data Fields
Question-passage pairs:
- `question_id`: an integer id of the question
- `passage_title`: a string containing the title of the Wikipedia article
- `passage_text`: a string containing the passage text as extracted by the human annotator
- `passage_wiki`: a string containing the passage text as it can be found in the provided Wikipedia corpus. Empty if the passage doesn't exist in the corpus.
- `passage_id`: a string containing the id of the passage from the provided Wikipedia corpus. Empty if the passage doesn't exist in the corpus.
- `duplicate`: a boolean flag representing whether a question-passage pair is duplicated in the dataset. This occurs when the same passage was found in multiple passage sources.
- `question`: a string containing the question
- `relevant`: a boolean flag representing whether a passage is relevant to the question (i.e. does it contain the answers)
- `annotated_by`: a string containing the name of the annotator who verified the relevance of the pair
- `answers`: a string containing a list of possible short answers to the question
- `question_formulation`: a string containing a kind of expression used to request information. One of the following:
- `QUESTION`, e.g. *What is the name of the first letter of the Greek alphabet?*
- `COMMAND`, e.g. *Expand the abbreviation ’CIA’.*
- `COMPOUND`, e.g. *This French writer, born in the 19th century, is
considered a pioneer of sci-fi literature. What is his name?*
- `question_type`: a string indicating what type of information is sought by the question. One of the following:
- `SINGLE ENTITY`, e.g. *Who is the hero in the Tomb Rider video game series?*
- `MULTIPLE ENTITIES`, e.g. *Which two seas are linked by the Corinth Canal?*
- `ENTITY CHOICE`, e.g. *Is "Sombrero" a type of dance, a hat, or a dish?*
- `YES/NO`, e.g. *When the term of office of the Polish Sejm is terminated, does it apply to the Senate as well?*
- `OTHER NAME`, e.g. *What was the nickname of Louis I, the King of the Franks?*
- `GAP FILLING`, e.g. *Finish the proverb: "If you fly with the crows... ".*
- `entity_type`: a string containing a type of the sought entity. One of the following: `NAMED`, `UNNAMED`, or `YES/NO`.
- `entity_subtype`: a string containing a subtype of the sought entity. Can take one of the 34 different values.
- `split`: a string containing the split of the dataset. One of the following: `train`, `valid`, or `test`.
- `passage_source`: a string containing the source of the passage. One of the following:
- `human`: the passage was proposed by a human annotator using any
internal (i.e. Wikipedia search) or external (e.g. Google) search engines and any keywords or queries they considered useful
- `hard-negatives`: the passage was proposed using a neural retriever trained on the passages found by the human annotators
- `zero-shot`: the passage was proposed by the BM25 retriever and re-ranked using [multilingual cross-encoder](https://huggingface.co/unicamp-dl/mMiniLM-L6-v2-mmarco-v2)
Corpus of passages:
- `id`: a string representing the Wikipedia article id and the index of extracted passage. Matches the `passage_id` from the main part of the dataset.
- `title`: a string containing the title of the Wikipedia article. Matches the `passage_title` from the main part of the dataset.
- `text`: a string containing the passage text. Matches the `passage_wiki` from the main part of the dataset.
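A minimal loading sketch; note that `answers` is stored as the string form of a Python list, so it needs to be parsed. The config/split layout is an assumption (the card also exposes `split` as a per-row field):
```python
# Sketch: load question-passage pairs and parse the string-encoded answers.
import ast
from datasets import load_dataset

ds = load_dataset("ipipan/polqa", split="train")  # assumed split name
row = ds[0]
answers = ast.literal_eval(row["answers"])  # e.g. ['w Indiach', 'Indie']
print(row["question"], "->", answers, "| relevant:", row["relevant"])
```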
### Data Splits
The questions are assigned into one of three splits: `train`, `validation`, and `test`. The `validation` and `test` questions are randomly sampled from the `test-B` dataset from the [PolEval 2021](https://2021.poleval.pl/tasks/task4) competition.
| | # questions | # positive passages | # negative passages |
|------------|------------:|--------------------:|--------------------:|
| train | 5,000 | 27,131 | 34,904 |
| validation | 1,000 | 5,839 | 6,927 |
| test | 1,000 | 5,938 | 6,786 |
## Dataset Creation
### Curation Rationale
The PolQA dataset was created to support and promote research in open-domain question answering for Polish. It also serves as a benchmark to evaluate OpenQA systems.
### Source Data
#### Initial Data Collection and Normalization
The majority of questions come from two existing resources, the
6,000 questions from the [PolEval 2021 shared task on QA](https://2021.poleval.pl/tasks/task4) and additional 1,000 questions gathered by one of the shared
task [participants](http://poleval.pl/files/poleval2021.pdf#page=151). Originally, the questions come from collections associated with TV shows, both officially published and gathered online by their fans, as well as questions used in actual quiz competitions, on TV or online.
The evidence passages come from the Polish Wikipedia (March 2022 snapshot). The raw Wikipedia snapshot was parsed using [WikiExtractor](https://github.com/attardi/wikiextractor) and split into passages at the ends of the paragraphs or if the passage was longer than 500 characters.
#### Who are the source language producers?
The questions come from various sources and their authors are unknown but are mostly analogous (or even identical) to questions asked during the [Jeden z Dziesięciu](https://pl.wikipedia.org/wiki/Jeden_z_dziesi%C4%99ciu) TV show.
The passages were written by the editors of the Polish Wikipedia.
### Annotations
#### Annotation process
Two approaches were used to annotate the question-passage pairs. Each of them consists of two phases: the retrieval of candidate passages and the manual verification of their relevance.
In the first approach, we asked annotators to use internal (i.e. Wikipedia search) or external (e.g. Google) search engines to find up to five relevant passages using any keywords or queries they consider useful (`passage_source="human"`). Based on those passages, we trained the neural retriever to extend the number of relevant passages, as well as to retrieve the hard negatives (`passage_source="hard-negatives"`).
In the second approach, the passage candidates were proposed by the BM25 retriever and re-ranked using [multilingual cross-encoder](https://huggingface.co/unicamp-dl/mMiniLM-L6-v2-mmarco-v2) (`passage_source="zero-shot"`).
In both cases, all proposed question-passage pairs were manually verified by the annotators.
We release the annotation guidelines [here](https://docs.google.com/document/d/1LDW7EJFH0bm-FRlxM_uHb0mqJzKHiewOFBHe5qZnTW8/edit?usp=sharing).
#### Who are the annotators?
The annotation team consisted of 16 annotators, all native Polish
speakers, most of them having linguistic backgrounds and previous
experience as an annotator.
### Personal and Sensitive Information
The dataset does not contain any personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset was created to promote research in open-domain question answering for Polish and to enable the development of question answering systems.
### Discussion of Biases
The passages proposed by the `hard-negatives` and `zero-shot` methods are bound to be easier to retrieve, since they were proposed by retrievers in the first place. To mitigate this bias, we include the passages found by the human annotators in an unconstrained way (`passage_source="human"`). We hypothesize that this results in more unbiased and diverse examples. Moreover, we asked the annotators to find not one but up to five passages, preferably from different articles, to further increase passage diversity.
### Other Known Limitations
The PolQA dataset focuses on trivia questions which might limit its usefulness in real-world applications since neural retrievers generalize poorly to other domains.
## Additional Information
### Dataset Curators
The PolQA dataset was developed by Piotr Rybak, Piotr Przybyła, and Maciej Ogrodniczuk from the [Institute of Computer Science, Polish Academy of Sciences](http://zil.ipipan.waw.pl/).
This work was supported by the European Regional Development Fund as a part of 2014–2020 Smart Growth Operational Programme, CLARIN — Common Language Resources and Technology Infrastructure, project no. POIR.04.02.00-00C002/19.
### Licensing Information
CC BY-SA 4.0
### Citation Information
```
@inproceedings{rybak-etal-2024-polqa-polish,
title = "{P}ol{QA}: {P}olish Question Answering Dataset",
author = "Rybak, Piotr and
Przyby{\l}a, Piotr and
Ogrodniczuk, Maciej",
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.1125",
pages = "12846--12855",
abstract = "Recently proposed systems for open-domain question answering (OpenQA) require large amounts of training data to achieve state-of-the-art performance. However, data annotation is known to be time-consuming and therefore expensive to acquire. As a result, the appropriate datasets are available only for a handful of languages (mainly English and Chinese). In this work, we introduce and publicly release PolQA, the first Polish dataset for OpenQA. It consists of 7,000 questions, 87,525 manually labeled evidence passages, and a corpus of over 7,097,322 candidate passages. Each question is classified according to its formulation, type, as well as entity type of the answer. This resource allows us to evaluate the impact of different annotation choices on the performance of the QA system and propose an efficient annotation strategy that increases the passage retrieval accuracy@10 by 10.55 p.p. while reducing the annotation cost by 82{\%}.",
}
``` |
wanng/midjourney-v5-202304-clean | wanng | 2024-05-24T05:09:52Z | 82 | 89 | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"language:en",
"language:fr",
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"midjourney"
] | [
"text-to-image",
"image-to-text"
] | 2023-05-26T06:58:05Z | null | ---
license: apache-2.0
task_categories:
- text-to-image
- image-to-text
language:
- en
- fr
tags:
- midjourney
pretty_name: zolie
size_categories:
- 1K<n<10K
---
# midjourney-v5-202304-clean
## 简介 Brief Introduction
非官方的,爬取自midjourney v5的2023年4月的数据,一共1701420条。
Unofficial, crawled from midjourney v5 for April 2023, 1,701,420 pairs in total.
## 数据集信息 Dataset Information
原始项目地址:https://huggingface.co/datasets/tarungupta83/MidJourney_v5_Prompt_dataset
我做了一些清洗,清理出了两个文件:
- ori_prompts_df.parquet (1,255,812对,midjourney的四格图)

- upscaled_prompts_df.parquet (445,608对,使用了高清指令的图,这意味着这个图更受欢迎。)

Original project address: https://huggingface.co/datasets/tarungupta83/MidJourney_v5_Prompt_dataset
I did some cleaning and cleaned out two files:
- ori_prompts_df.parquet (1,255,812 pairs, Midjourney's four-image grids)
- upscaled_prompts_df.parquet (445,608 pairs, images that used the Upscale command, which means these images were more popular.)
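A minimal pandas sketch for working with the two cleaned files, assuming they have been downloaded from this repository:
```python
# Sketch: load the cleaned prompt tables with pandas.
import pandas as pd

upscaled = pd.read_parquet("upscaled_prompts_df.parquet")
print(upscaled.shape)
print(upscaled.head())
```
|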
lmms-lab/POPE | lmms-lab | 2024-05-23T03:29:23Z | 11,057 | 9 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-01-18T15:13:42Z | null | ---
dataset_info:
- config_name: Full
features:
- name: id
dtype: string
- name: question_id
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: image_source
dtype: string
- name: image
dtype: image
- name: category
dtype: string
splits:
- name: adversarial
num_bytes: 490408158.0
num_examples: 3000
- name: popular
num_bytes: 490397000.0
num_examples: 3000
- name: random
num_bytes: 490394976.0
num_examples: 3000
download_size: 255022914
dataset_size: 1471200134.0
- config_name: default
features:
- name: id
dtype: string
- name: question_id
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: image_source
dtype: string
- name: image
dtype: image
- name: category
dtype: string
splits:
- name: test
num_bytes: 1471200135.0
num_examples: 9000
download_size: 255022914
dataset_size: 1471200135.0
configs:
- config_name: Full
data_files:
- split: adversarial
path: Full/adversarial-*
- split: popular
path: Full/popular-*
- split: random
path: Full/random-*
- config_name: default
data_files:
- split: test
path: data/test-*
---
<p align="center" width="100%">
<img src="https://i.postimg.cc/g0QRgMVv/WX20240228-113337-2x.png" width="100%" height="80%">
</p>
# Large-scale Multi-modality Models Evaluation Suite
> Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval`
🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Huggingface Datasets](https://huggingface.co/lmms-lab)
# This Dataset
This is a formatted version of [POPE](https://github.com/RUCAIBox/POPE). It is used in our `lmms-eval` pipeline to allow for one-click evaluations of large multi-modality models.
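A minimal evaluation sketch against the default `test` split; the stand-in predictor and the lower-case "yes"/"no" answer strings are assumptions:
```python
# Sketch: score yes/no answers on POPE with a trivial always-"no" baseline.
from datasets import load_dataset

pope = load_dataset("lmms-lab/POPE", split="test")

def predict(image, question):
    return "no"  # hypothetical stand-in for a real vision-language model

correct = sum(
    predict(ex["image"], ex["question"]) == ex["answer"].strip().lower()
    for ex in pope
)
print("accuracy:", correct / len(pope))
```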
```
@article{li2023evaluating,
title={Evaluating object hallucination in large vision-language models},
author={Li, Yifan and Du, Yifan and Zhou, Kun and Wang, Jinpeng and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2305.10355},
year={2023}
}
```
|
ilsp/mmlu_greek | ilsp | 2024-05-20T12:36:54Z | 27,818 | 4 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-04-01T14:53:41Z | null | ---
dataset_info:
- config_name: abstract_algebra
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 58157
num_examples: 100
- name: validation
num_bytes: 6010
num_examples: 11
- name: dev
num_bytes: 2497
num_examples: 5
download_size: 0
dataset_size: 66664
- config_name: all
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 20041347
num_examples: 14042
- name: validation
num_bytes: 2196992
num_examples: 1531
- name: dev
num_bytes: 360807
num_examples: 285
download_size: 10333898
dataset_size: 22599146
- config_name: anatomy
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 97333
num_examples: 135
- name: validation
num_bytes: 9131
num_examples: 14
- name: dev
num_bytes: 2731
num_examples: 5
download_size: 67694
dataset_size: 109195
- config_name: astronomy
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 141580
num_examples: 152
- name: validation
num_bytes: 15462
num_examples: 16
- name: dev
num_bytes: 6380
num_examples: 5
download_size: 95251
dataset_size: 163422
- config_name: business_ethics
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 101936
num_examples: 100
- name: validation
num_bytes: 9096
num_examples: 11
- name: dev
num_bytes: 6368
num_examples: 5
download_size: 77394
dataset_size: 117400
- config_name: clinical_knowledge
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 193539
num_examples: 265
- name: validation
num_bytes: 20500
num_examples: 29
- name: dev
num_bytes: 3720
num_examples: 5
download_size: 126056
dataset_size: 217759
- config_name: college_biology
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 152394
num_examples: 144
- name: validation
num_bytes: 14995
num_examples: 16
- name: dev
num_bytes: 4638
num_examples: 5
download_size: 105576
dataset_size: 172027
- config_name: college_chemistry
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 72251
num_examples: 100
- name: validation
num_bytes: 6677
num_examples: 8
- name: dev
num_bytes: 3862
num_examples: 5
download_size: 61210
dataset_size: 82790
- config_name: college_computer_science
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 135321
num_examples: 100
- name: validation
num_bytes: 15037
num_examples: 11
- name: dev
num_bytes: 8606
num_examples: 5
download_size: 101342
dataset_size: 158964
- config_name: college_mathematics
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 74448
num_examples: 100
- name: validation
num_bytes: 8274
num_examples: 11
- name: dev
num_bytes: 4276
num_examples: 5
download_size: 63556
dataset_size: 86998
- config_name: college_medicine
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 251805
num_examples: 173
- name: validation
num_bytes: 24431
num_examples: 22
- name: dev
num_bytes: 5031
num_examples: 5
download_size: 144635
dataset_size: 281267
- config_name: college_physics
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 90708
num_examples: 102
- name: validation
num_bytes: 10367
num_examples: 11
- name: dev
num_bytes: 4139
num_examples: 5
download_size: 68341
dataset_size: 105214
- config_name: computer_security
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 86922
num_examples: 100
- name: validation
num_bytes: 14003
num_examples: 11
- name: dev
num_bytes: 3445
num_examples: 5
download_size: 75244
dataset_size: 104370
- config_name: conceptual_physics
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 127706
num_examples: 235
- name: validation
num_bytes: 14286
num_examples: 26
- name: dev
num_bytes: 2978
num_examples: 5
download_size: 82813
dataset_size: 144970
- config_name: econometrics
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 136916
num_examples: 114
- name: validation
num_bytes: 14730
num_examples: 12
- name: dev
num_bytes: 4794
num_examples: 5
download_size: 86025
dataset_size: 156440
- config_name: electrical_engineering
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 80296
num_examples: 145
- name: validation
num_bytes: 9138
num_examples: 16
- name: dev
num_bytes: 2824
num_examples: 5
download_size: 62008
dataset_size: 92258
- config_name: elementary_mathematics
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 211831
num_examples: 378
- name: validation
num_bytes: 27305
num_examples: 41
- name: dev
num_bytes: 4252
num_examples: 5
download_size: 131272
dataset_size: 243388
- config_name: formal_logic
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 146101
num_examples: 126
- name: validation
num_bytes: 18160
num_examples: 14
- name: dev
num_bytes: 4917
num_examples: 5
download_size: 77094
dataset_size: 169178
- config_name: global_facts
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 55953
num_examples: 100
- name: validation
num_bytes: 5672
num_examples: 10
- name: dev
num_bytes: 3547
num_examples: 5
download_size: 0
dataset_size: 65172
- config_name: high_school_biology
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 338155
num_examples: 310
- name: validation
num_bytes: 33555
num_examples: 32
- name: dev
num_bytes: 4992
num_examples: 5
download_size: 200936
dataset_size: 376702
- config_name: high_school_chemistry
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 170771
num_examples: 203
- name: validation
num_bytes: 20157
num_examples: 22
- name: dev
num_bytes: 3387
num_examples: 5
download_size: 108321
dataset_size: 194315
- config_name: high_school_computer_science
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 139128
num_examples: 100
- name: validation
num_bytes: 10800
num_examples: 9
- name: dev
num_bytes: 9269
num_examples: 5
download_size: 99359
dataset_size: 159197
- config_name: high_school_european_history
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 799080
num_examples: 165
- name: validation
num_bytes: 88740
num_examples: 18
- name: dev
num_bytes: 34585
num_examples: 5
download_size: 503439
dataset_size: 922405
- config_name: high_school_geography
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 132655
num_examples: 198
- name: validation
num_bytes: 13612
num_examples: 22
- name: dev
num_bytes: 4597
num_examples: 5
download_size: 90939
dataset_size: 150864
- config_name: high_school_government_and_politics
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 215224
num_examples: 193
- name: validation
num_bytes: 22888
num_examples: 21
- name: dev
num_bytes: 5640
num_examples: 5
download_size: 132695
dataset_size: 243752
- config_name: high_school_macroeconomics
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 374553
num_examples: 390
- name: validation
num_bytes: 41817
num_examples: 43
- name: dev
num_bytes: 4310
num_examples: 5
download_size: 177813
dataset_size: 420680
- config_name: high_school_mathematics
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 161023
num_examples: 270
- name: validation
num_bytes: 17224
num_examples: 29
- name: dev
num_bytes: 3682
num_examples: 5
download_size: 105683
dataset_size: 181929
- config_name: high_school_microeconomics
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 241816
num_examples: 238
- name: validation
num_bytes: 24317
num_examples: 26
- name: dev
num_bytes: 4029
num_examples: 5
download_size: 125789
dataset_size: 270162
- config_name: high_school_physics
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 175856
num_examples: 151
- name: validation
num_bytes: 19899
num_examples: 17
- name: dev
num_bytes: 4348
num_examples: 5
download_size: 109639
dataset_size: 200103
- config_name: high_school_psychology
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 494955
num_examples: 545
- name: validation
num_bytes: 53743
num_examples: 60
- name: dev
num_bytes: 5900
num_examples: 5
download_size: 285730
dataset_size: 554598
- config_name: high_school_statistics
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 333736
num_examples: 216
- name: validation
num_bytes: 30252
num_examples: 23
- name: dev
num_bytes: 7320
num_examples: 5
download_size: 191017
dataset_size: 371308
- config_name: high_school_us_history
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 883614
num_examples: 204
- name: validation
num_bytes: 93694
num_examples: 22
- name: dev
num_bytes: 26282
num_examples: 5
download_size: 533320
dataset_size: 1003590
- config_name: high_school_world_history
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 1126143
num_examples: 237
- name: validation
num_bytes: 135245
num_examples: 26
- name: dev
num_bytes: 14589
num_examples: 5
download_size: 662773
dataset_size: 1275977
- config_name: human_aging
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 145275
num_examples: 223
- name: validation
num_bytes: 15038
num_examples: 23
- name: dev
num_bytes: 3062
num_examples: 5
download_size: 99856
dataset_size: 163375
- config_name: human_sexuality
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 100379
num_examples: 131
- name: validation
num_bytes: 7585
num_examples: 12
- name: dev
num_bytes: 3504
num_examples: 5
download_size: 74540
dataset_size: 111468
- config_name: international_law
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 162013
num_examples: 121
- name: validation
num_bytes: 18937
num_examples: 13
- name: dev
num_bytes: 7290
num_examples: 5
download_size: 0
dataset_size: 188240
- config_name: jurisprudence
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 102393
num_examples: 108
- name: validation
num_bytes: 11049
num_examples: 11
- name: dev
num_bytes: 3754
num_examples: 5
download_size: 21545
dataset_size: 117196
- config_name: logical_fallacies
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 153973
num_examples: 163
- name: validation
num_bytes: 15857
num_examples: 18
- name: dev
num_bytes: 4919
num_examples: 5
download_size: 82298
dataset_size: 174749
- config_name: machine_learning
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 102745
num_examples: 112
- name: validation
num_bytes: 9797
num_examples: 11
- name: dev
num_bytes: 7448
num_examples: 5
download_size: 70870
dataset_size: 119990
- config_name: management
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 63772
num_examples: 103
- name: validation
num_bytes: 5671
num_examples: 11
- name: dev
num_bytes: 2677
num_examples: 5
download_size: 52323
dataset_size: 72120
- config_name: marketing
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 191635
num_examples: 234
- name: validation
num_bytes: 22377
num_examples: 25
- name: dev
num_bytes: 4734
num_examples: 5
download_size: 122877
dataset_size: 218746
- config_name: medical_genetics
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 64177
num_examples: 100
- name: validation
num_bytes: 9298
num_examples: 11
- name: dev
num_bytes: 3405
num_examples: 5
download_size: 58337
dataset_size: 76880
- config_name: miscellaneous
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 443155
num_examples: 783
- name: validation
num_bytes: 42990
num_examples: 86
- name: dev
num_bytes: 1877
num_examples: 5
download_size: 283087
dataset_size: 488022
- config_name: moral_disputes
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 332269
num_examples: 346
- name: validation
num_bytes: 38501
num_examples: 38
- name: dev
num_bytes: 5222
num_examples: 5
download_size: 193075
dataset_size: 375992
- config_name: moral_scenarios
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 1061634
num_examples: 895
- name: validation
num_bytes: 120664
num_examples: 100
- name: dev
num_bytes: 5816
num_examples: 5
download_size: 283716
dataset_size: 1188114
- config_name: nutrition
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 281680
num_examples: 306
- name: validation
num_bytes: 25350
num_examples: 33
- name: dev
num_bytes: 6423
num_examples: 5
download_size: 168790
dataset_size: 313453
- config_name: philosophy
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 240333
num_examples: 311
- name: validation
num_bytes: 27480
num_examples: 34
- name: dev
num_bytes: 2986
num_examples: 5
download_size: 153970
dataset_size: 270799
- config_name: prehistory
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 267644
num_examples: 324
- name: validation
num_bytes: 30414
num_examples: 35
- name: dev
num_bytes: 5577
num_examples: 5
download_size: 172053
dataset_size: 303635
- config_name: professional_accounting
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 377751
num_examples: 282
- name: validation
num_bytes: 42879
num_examples: 31
- name: dev
num_bytes: 6331
num_examples: 5
download_size: 228950
dataset_size: 426961
- config_name: professional_law
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 5612166
num_examples: 1534
- name: validation
num_bytes: 604980
num_examples: 170
- name: dev
num_bytes: 19825
num_examples: 5
download_size: 3065337
dataset_size: 6236971
- config_name: professional_medicine
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 639421
num_examples: 272
- name: validation
num_bytes: 70186
num_examples: 31
- name: dev
num_bytes: 11017
num_examples: 5
download_size: 391893
dataset_size: 720624
- config_name: professional_psychology
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 687869
num_examples: 612
- name: validation
num_bytes: 87912
num_examples: 69
- name: dev
num_bytes: 6693
num_examples: 5
download_size: 405705
dataset_size: 782474
- config_name: public_relations
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 89435
num_examples: 110
- name: validation
num_bytes: 14174
num_examples: 12
- name: dev
num_bytes: 4718
num_examples: 5
download_size: 0
dataset_size: 108327
- config_name: security_studies
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 632255
num_examples: 245
- name: validation
num_bytes: 69100
num_examples: 27
- name: dev
num_bytes: 16171
num_examples: 5
download_size: 0
dataset_size: 717526
- config_name: sociology
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 204018
num_examples: 201
- name: validation
num_bytes: 22531
num_examples: 22
- name: dev
num_bytes: 5054
num_examples: 5
download_size: 9676
dataset_size: 231603
- config_name: us_foreign_policy
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 89965
num_examples: 100
- name: validation
num_bytes: 10270
num_examples: 11
- name: dev
num_bytes: 5111
num_examples: 5
download_size: 68974
dataset_size: 105346
- config_name: virology
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 116211
num_examples: 166
- name: validation
num_bytes: 16273
num_examples: 18
- name: dev
num_bytes: 3185
num_examples: 5
download_size: 96586
dataset_size: 135669
- config_name: world_religions
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: orig_question
dtype: string
- name: orig_subject
dtype: string
- name: orig_choices
sequence: string
splits:
- name: test
num_bytes: 77273
num_examples: 171
- name: validation
num_bytes: 8462
num_examples: 19
- name: dev
num_bytes: 2073
num_examples: 5
download_size: 61169
dataset_size: 87808
configs:
- config_name: abstract_algebra
data_files:
- split: test
path: abstract_algebra/test-*
- split: validation
path: abstract_algebra/validation-*
- split: dev
path: abstract_algebra/dev-*
- config_name: all
data_files:
- split: test
path: all/test-*
- split: validation
path: all/validation-*
- split: dev
path: all/dev-*
- config_name: anatomy
data_files:
- split: test
path: anatomy/test-*
- split: validation
path: anatomy/validation-*
- split: dev
path: anatomy/dev-*
- config_name: astronomy
data_files:
- split: test
path: astronomy/test-*
- split: validation
path: astronomy/validation-*
- split: dev
path: astronomy/dev-*
- config_name: business_ethics
data_files:
- split: test
path: business_ethics/test-*
- split: validation
path: business_ethics/validation-*
- split: dev
path: business_ethics/dev-*
- config_name: clinical_knowledge
data_files:
- split: test
path: clinical_knowledge/test-*
- split: validation
path: clinical_knowledge/validation-*
- split: dev
path: clinical_knowledge/dev-*
- config_name: college_biology
data_files:
- split: test
path: college_biology/test-*
- split: validation
path: college_biology/validation-*
- split: dev
path: college_biology/dev-*
- config_name: college_chemistry
data_files:
- split: test
path: college_chemistry/test-*
- split: validation
path: college_chemistry/validation-*
- split: dev
path: college_chemistry/dev-*
- config_name: college_computer_science
data_files:
- split: test
path: college_computer_science/test-*
- split: validation
path: college_computer_science/validation-*
- split: dev
path: college_computer_science/dev-*
- config_name: college_mathematics
data_files:
- split: test
path: college_mathematics/test-*
- split: validation
path: college_mathematics/validation-*
- split: dev
path: college_mathematics/dev-*
- config_name: college_medicine
data_files:
- split: test
path: college_medicine/test-*
- split: validation
path: college_medicine/validation-*
- split: dev
path: college_medicine/dev-*
- config_name: college_physics
data_files:
- split: test
path: college_physics/test-*
- split: validation
path: college_physics/validation-*
- split: dev
path: college_physics/dev-*
- config_name: computer_security
data_files:
- split: test
path: computer_security/test-*
- split: validation
path: computer_security/validation-*
- split: dev
path: computer_security/dev-*
- config_name: conceptual_physics
data_files:
- split: test
path: conceptual_physics/test-*
- split: validation
path: conceptual_physics/validation-*
- split: dev
path: conceptual_physics/dev-*
- config_name: econometrics
data_files:
- split: test
path: econometrics/test-*
- split: validation
path: econometrics/validation-*
- split: dev
path: econometrics/dev-*
- config_name: electrical_engineering
data_files:
- split: test
path: electrical_engineering/test-*
- split: validation
path: electrical_engineering/validation-*
- split: dev
path: electrical_engineering/dev-*
- config_name: elementary_mathematics
data_files:
- split: test
path: elementary_mathematics/test-*
- split: validation
path: elementary_mathematics/validation-*
- split: dev
path: elementary_mathematics/dev-*
- config_name: formal_logic
data_files:
- split: test
path: formal_logic/test-*
- split: validation
path: formal_logic/validation-*
- split: dev
path: formal_logic/dev-*
- config_name: global_facts
data_files:
- split: test
path: global_facts/test-*
- split: validation
path: global_facts/validation-*
- split: dev
path: global_facts/dev-*
- config_name: high_school_biology
data_files:
- split: test
path: high_school_biology/test-*
- split: validation
path: high_school_biology/validation-*
- split: dev
path: high_school_biology/dev-*
- config_name: high_school_chemistry
data_files:
- split: test
path: high_school_chemistry/test-*
- split: validation
path: high_school_chemistry/validation-*
- split: dev
path: high_school_chemistry/dev-*
- config_name: high_school_computer_science
data_files:
- split: test
path: high_school_computer_science/test-*
- split: validation
path: high_school_computer_science/validation-*
- split: dev
path: high_school_computer_science/dev-*
- config_name: high_school_european_history
data_files:
- split: test
path: high_school_european_history/test-*
- split: validation
path: high_school_european_history/validation-*
- split: dev
path: high_school_european_history/dev-*
- config_name: high_school_geography
data_files:
- split: test
path: high_school_geography/test-*
- split: validation
path: high_school_geography/validation-*
- split: dev
path: high_school_geography/dev-*
- config_name: high_school_government_and_politics
data_files:
- split: test
path: high_school_government_and_politics/test-*
- split: validation
path: high_school_government_and_politics/validation-*
- split: dev
path: high_school_government_and_politics/dev-*
- config_name: high_school_macroeconomics
data_files:
- split: test
path: high_school_macroeconomics/test-*
- split: validation
path: high_school_macroeconomics/validation-*
- split: dev
path: high_school_macroeconomics/dev-*
- config_name: high_school_mathematics
data_files:
- split: test
path: high_school_mathematics/test-*
- split: validation
path: high_school_mathematics/validation-*
- split: dev
path: high_school_mathematics/dev-*
- config_name: high_school_microeconomics
data_files:
- split: test
path: high_school_microeconomics/test-*
- split: validation
path: high_school_microeconomics/validation-*
- split: dev
path: high_school_microeconomics/dev-*
- config_name: high_school_physics
data_files:
- split: test
path: high_school_physics/test-*
- split: validation
path: high_school_physics/validation-*
- split: dev
path: high_school_physics/dev-*
- config_name: high_school_psychology
data_files:
- split: test
path: high_school_psychology/test-*
- split: validation
path: high_school_psychology/validation-*
- split: dev
path: high_school_psychology/dev-*
- config_name: high_school_statistics
data_files:
- split: test
path: high_school_statistics/test-*
- split: validation
path: high_school_statistics/validation-*
- split: dev
path: high_school_statistics/dev-*
- config_name: high_school_us_history
data_files:
- split: test
path: high_school_us_history/test-*
- split: validation
path: high_school_us_history/validation-*
- split: dev
path: high_school_us_history/dev-*
- config_name: high_school_world_history
data_files:
- split: test
path: high_school_world_history/test-*
- split: validation
path: high_school_world_history/validation-*
- split: dev
path: high_school_world_history/dev-*
- config_name: human_aging
data_files:
- split: test
path: human_aging/test-*
- split: validation
path: human_aging/validation-*
- split: dev
path: human_aging/dev-*
- config_name: human_sexuality
data_files:
- split: test
path: human_sexuality/test-*
- split: validation
path: human_sexuality/validation-*
- split: dev
path: human_sexuality/dev-*
- config_name: international_law
data_files:
- split: test
path: international_law/test-*
- split: validation
path: international_law/validation-*
- split: dev
path: international_law/dev-*
- config_name: jurisprudence
data_files:
- split: test
path: jurisprudence/test-*
- split: validation
path: jurisprudence/validation-*
- split: dev
path: jurisprudence/dev-*
- config_name: logical_fallacies
data_files:
- split: test
path: logical_fallacies/test-*
- split: validation
path: logical_fallacies/validation-*
- split: dev
path: logical_fallacies/dev-*
- config_name: machine_learning
data_files:
- split: test
path: machine_learning/test-*
- split: validation
path: machine_learning/validation-*
- split: dev
path: machine_learning/dev-*
- config_name: management
data_files:
- split: test
path: management/test-*
- split: validation
path: management/validation-*
- split: dev
path: management/dev-*
- config_name: marketing
data_files:
- split: test
path: marketing/test-*
- split: validation
path: marketing/validation-*
- split: dev
path: marketing/dev-*
- config_name: medical_genetics
data_files:
- split: test
path: medical_genetics/test-*
- split: validation
path: medical_genetics/validation-*
- split: dev
path: medical_genetics/dev-*
- config_name: miscellaneous
data_files:
- split: test
path: miscellaneous/test-*
- split: validation
path: miscellaneous/validation-*
- split: dev
path: miscellaneous/dev-*
- config_name: moral_disputes
data_files:
- split: test
path: moral_disputes/test-*
- split: validation
path: moral_disputes/validation-*
- split: dev
path: moral_disputes/dev-*
- config_name: moral_scenarios
data_files:
- split: test
path: moral_scenarios/test-*
- split: validation
path: moral_scenarios/validation-*
- split: dev
path: moral_scenarios/dev-*
- config_name: nutrition
data_files:
- split: test
path: nutrition/test-*
- split: validation
path: nutrition/validation-*
- split: dev
path: nutrition/dev-*
- config_name: philosophy
data_files:
- split: test
path: philosophy/test-*
- split: validation
path: philosophy/validation-*
- split: dev
path: philosophy/dev-*
- config_name: prehistory
data_files:
- split: test
path: prehistory/test-*
- split: validation
path: prehistory/validation-*
- split: dev
path: prehistory/dev-*
- config_name: professional_accounting
data_files:
- split: test
path: professional_accounting/test-*
- split: validation
path: professional_accounting/validation-*
- split: dev
path: professional_accounting/dev-*
- config_name: professional_law
data_files:
- split: test
path: professional_law/test-*
- split: validation
path: professional_law/validation-*
- split: dev
path: professional_law/dev-*
- config_name: professional_medicine
data_files:
- split: test
path: professional_medicine/test-*
- split: validation
path: professional_medicine/validation-*
- split: dev
path: professional_medicine/dev-*
- config_name: professional_psychology
data_files:
- split: test
path: professional_psychology/test-*
- split: validation
path: professional_psychology/validation-*
- split: dev
path: professional_psychology/dev-*
- config_name: public_relations
data_files:
- split: test
path: public_relations/test-*
- split: validation
path: public_relations/validation-*
- split: dev
path: public_relations/dev-*
- config_name: security_studies
data_files:
- split: test
path: security_studies/test-*
- split: validation
path: security_studies/validation-*
- split: dev
path: security_studies/dev-*
- config_name: sociology
data_files:
- split: test
path: sociology/test-*
- split: validation
path: sociology/validation-*
- split: dev
path: sociology/dev-*
- config_name: us_foreign_policy
data_files:
- split: test
path: us_foreign_policy/test-*
- split: validation
path: us_foreign_policy/validation-*
- split: dev
path: us_foreign_policy/dev-*
- config_name: virology
data_files:
- split: test
path: virology/test-*
- split: validation
path: virology/validation-*
- split: dev
path: virology/dev-*
- config_name: world_religions
data_files:
- split: test
path: world_religions/test-*
- split: validation
path: world_religions/validation-*
- split: dev
path: world_religions/dev-*
---
# Dataset Card for MMLU Greek
The MMLU Greek dataset is a set of 15,858 examples from the MMLU dataset [available from here and here], machine-translated into Greek. The original dataset consists of multiple-choice questions from 57 tasks, including elementary mathematics, US history, computer science, law, and more.
## Dataset Details
### Dataset Description
- **Curated by:** ILSP/Athena RC
- **Language(s) (NLP):** el
- **License:** cc-by-nc-sa-4.0
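A minimal loading sketch (the repository id `ilsp/mmlu_greek` below is an assumption inferred from the curator, not stated on this card; the `all` config and `test` split come from the metadata above):
```python
from datasets import load_dataset

# Each of the 57 subjects is its own config; "all" aggregates them.
mmlu_el = load_dataset("ilsp/mmlu_greek", "all", split="test")
print(mmlu_el[0]["question"])
print(mmlu_el[0]["choices"], mmlu_el[0]["answer"])
```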
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This dataset is the result of machine translation.
## Dataset Card Contact
https://www.athenarc.gr/en/ilsp
|
Skylion007/openwebtext | Skylion007 | 2024-05-17T17:56:27Z | 85,484 | 422 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"size_categories:1M<n<10M",
"region:us"
] | [
"text-generation",
"fill-mask"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: OpenWebText
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: openwebtext
dataset_info:
features:
- name: text
dtype: string
config_name: plain_text
splits:
- name: train
num_bytes: 39769491688
num_examples: 8013769
download_size: 12880189440
dataset_size: 39769491688
---
# Dataset Card for "openwebtext"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://skylion007.github.io/OpenWebTextCorpus/](https://skylion007.github.io/OpenWebTextCorpus/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 13.51 GB
- **Size of the generated dataset:** 41.70 GB
- **Total amount of disk used:** 55.21 GB
### Dataset Summary
An open-source replication of the WebText dataset from OpenAI that was used to train GPT-2.
This distribution was created by Aaron Gokaslan and Vanya Cohen of Brown University.
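A minimal loading sketch (streaming avoids materializing the ~40 GB corpus; `trust_remote_code=True` is needed on recent `datasets` versions for script-based datasets like this one):
```python
from datasets import load_dataset

# Stream the single "train" split instead of downloading everything up front.
owt = load_dataset("Skylion007/openwebtext", split="train",
                   streaming=True, trust_remote_code=True)
print(next(iter(owt))["text"][:200])
```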
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 13.51 GB
- **Size of the generated dataset:** 41.70 GB
- **Total amount of disk used:** 55.21 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "\"A magazine supplement with an image of Adolf Hitler and the title 'The Unreadable Book' is pictured in Berlin. No law bans “Mei..."
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `text`: a `string` feature.
### Data Splits
| name | train |
|------------|--------:|
| plain_text | 8013769 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
The authors started by extracting all Reddit post URLs from the Reddit submissions dataset. These links were deduplicated, filtered to exclude non-HTML content, and then shuffled randomly. The links were then distributed to several machines in parallel for download, and all web pages were extracted using the newspaper Python package. Non-English web pages were filtered out using Facebook FastText.
Subsequently, near-duplicate documents were identified using locality-sensitive hashing (LSH). Documents were hashed into sets of 5-grams, and all documents with a similarity greater than 0.5 were removed. The remaining documents were tokenized, and documents with fewer than 128 tokens were removed. This left 38GB of text data (40GB using SI units) from 8,013,769 documents.
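As a rough illustration of that deduplication step, here is a minimal sketch using the `datasketch` library (the library and parameter choices are assumptions for illustration; the authors' exact implementation may differ):
```python
from datasketch import MinHash, MinHashLSH

def minhash_5grams(text, num_perm=128):
    # MinHash signature over the document's set of word 5-grams
    m = MinHash(num_perm=num_perm)
    tokens = text.split()
    for i in range(len(tokens) - 4):
        m.update(" ".join(tokens[i:i + 5]).encode("utf8"))
    return m

# threshold=0.5 mirrors the similarity cutoff described above
lsh = MinHashLSH(threshold=0.5, num_perm=128)
kept = {}
for key, text in documents.items():   # documents: {id: text}, assumed to exist
    sig = minhash_5grams(text)
    if not lsh.query(sig):             # keep only if no near-duplicate seen so far
        lsh.insert(key, sig)
        kept[key] = text
```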
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
The dataset doesn't contain annotations.
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
These data are released under this licensing scheme from the original authors ([source](https://skylion007.github.io/OpenWebTextCorpus/)):
```
We do not own any of the text from which these data has been extracted.
We license the actual packaging of these parallel data under the [Creative Commons CC0 license (“no rights reserved”)](https://creativecommons.org/share-your-work/public-domain/cc0/)
```
#### Notice policy
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:

- Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
- Clearly identify the copyrighted work claimed to be infringed.
- Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.

And contact us at the following email address: openwebtext at gmail.com and datasets at huggingface.co
#### Take down policy
The original authors will comply with legitimate requests by removing the affected sources from the next release of the corpus.
Hugging Face will also update this repository accordingly.
### Citation Information
```
@misc{Gokaslan2019OpenWeb,
title={OpenWebText Corpus},
author={Gokaslan, Aaron and Cohen, Vanya and Pavlick, Ellie and Tellex, Stefanie},
howpublished={\url{http://Skylion007.github.io/OpenWebTextCorpus}},
year={2019}
}
```
### Contributions
Thanks to [@richarddwang](https://github.com/richarddwang) for adding this dataset.
|
lmarena-ai/arena-human-preference-55k | lmarena-ai | 2024-05-17T03:04:04Z | 536 | 142 | [
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2403.04132",
"region:us"
] | [
"text-classification"
] | 2024-05-02T19:00:07Z | null | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
pretty_name: LMSYS Chatbot Arena Human Preference Predictions
size_categories:
- 10K<n<100K
---
Dataset for [Kaggle competition](https://www.kaggle.com/competitions/lmsys-chatbot-arena/overview) on predicting human preference on Chatbot Arena battles.
The training dataset includes over 55,000 real-world conversations between users and LLMs, with user preferences across more than 70 state-of-the-art LLMs such as GPT-4, Claude 2, Llama 2, Gemini, and Mistral models.
Each sample represents a battle in which two LLMs answer the same question, together with a user label: prefer model A, prefer model B, tie, or tie (both bad).
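A minimal loading sketch (the data ships as CSV, so the default `train` split assumed here is how the Hub exposes it; column names are not listed on this card, so inspect the features rather than hard-coding them):
```python
from datasets import load_dataset

battles = load_dataset("lmarena-ai/arena-human-preference-55k", split="train")
print(battles.num_rows)
print(battles.features)   # column names/types are not documented on this card
```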
### Citation
Please cite the following paper if you find our leaderboard or dataset helpful.
```
@misc{chiang2024chatbot,
title={Chatbot Arena: An Open Platform for Evaluating LLMs by Human Preference},
author={Wei-Lin Chiang and Lianmin Zheng and Ying Sheng and Anastasios Nikolas Angelopoulos and Tianle Li and Dacheng Li and Hao Zhang and Banghua Zhu and Michael Jordan and Joseph E. Gonzalez and Ion Stoica},
year={2024},
eprint={2403.04132},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
|
common-canvas/commoncatalog-cc-by-sa | common-canvas | 2024-05-16T19:41:37Z | 115,865 | 8 | [
"task_categories:text-to-image",
"language:en",
"license:cc-by-sa-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2310.16825",
"region:us"
] | [
"text-to-image"
] | 2023-10-19T02:05:17Z | null | ---
license: cc-by-sa-4.0
dataset_info:
features:
- name: jpg
dtype: image
- name: blip2_caption
dtype: string
- name: caption
dtype: string
- name: licensename
dtype: string
- name: licenseurl
dtype: string
- name: width
dtype: int32
- name: height
dtype: int32
- name: original_width
dtype: int32
- name: original_height
dtype: int32
- name: photoid
dtype: int64
- name: uid
dtype: string
- name: unickname
dtype: string
- name: datetaken
dtype: timestamp[us]
- name: dateuploaded
dtype: int64
- name: capturedevice
dtype: string
- name: title
dtype: string
- name: usertags
dtype: string
- name: machinetags
dtype: string
- name: longitude
dtype: float64
- name: latitude
dtype: float64
- name: accuracy
dtype: int64
- name: pageurl
dtype: string
- name: downloadurl
dtype: string
- name: serverid
dtype: int64
- name: farmid
dtype: int64
- name: secret
dtype: string
- name: secretoriginal
dtype: string
- name: ext
dtype: string
- name: url
dtype: string
- name: key
dtype: string
- name: status
dtype: string
- name: error_message
dtype: string
- name: exif
dtype: string
- name: sha256
dtype: string
- name: description
dtype: string
task_categories:
- text-to-image
language:
- en
---
# Dataset Card for CommonCatalog CC-BY-SA
This dataset is a large collection of high-resolution Creative Commons images (under several different licenses; see Table 1 in the paper's appendix) collected in 2014 from users of Yahoo Flickr.
The dataset contains images of up to 4k resolution, making it one of the highest-resolution captioned image datasets.
## Dataset Details
### Dataset Description
We provide synthetic captions for approximately 100 million high-resolution images collected from Yahoo Flickr Creative Commons (YFCC).
- **Curated by:** Aaron Gokaslan
- **Language(s) (NLP):** en
- **License:** See relevant yaml tag / dataset name.
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/mosaicml/diffusion
- **Paper:** https://arxiv.org/abs/2310.16825
- **Demo:** See CommonCanvas Gradios
## Uses
We use CommonCatalog to train a family of latent diffusion models called CommonCanvas.
The goal is to produce a model that is competitive with Stable Diffusion 2, but to do so using an easily accessible dataset of known provenance.
Doing so makes replicating the model significantly easier, and provides a clearer mechanism for applying training-data attribution techniques.
### Direct Use
Training text-to-image models
Training image-to-text models
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
* Crafting content that is offensive or injurious towards individuals, including negative portrayals of their living conditions, cultural backgrounds, religious beliefs, etc.
* Deliberately creating or spreading content that is discriminatory or reinforces harmful stereotypes.
* Falsely representing individuals without their permission.
* Generating sexual content that may be seen by individuals without their consent.
* Producing or disseminating false or misleading information.
* Creating content that depicts extreme violence or bloodshed.
* Distributing content that modifies copyrighted or licensed material in a way that breaches its usage terms.
## Dataset Structure
The dataset is divided into 10 subsets, each containing parquet files of about 4 GB. Each subfolder within a subset covers a resolution range of the images and their respective aspect ratios.
The dataset is also split between images licensed for commercial use (C) and those that are not (NC).
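A minimal loading sketch (the `train` split name is an assumption; streaming is advisable given the ~4 GB parquet shards):
```python
from datasets import load_dataset

cc = load_dataset("common-canvas/commoncatalog-cc-by-sa",
                  split="train", streaming=True)
sample = next(iter(cc))
# Field names below come from the dataset_info block above.
print(sample["blip2_caption"], sample["width"], sample["height"], sample["licensename"])
```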
## Dataset Creation
### Curation Rationale
Creating a standardized, accessible dataset with synthetic caption and releasing it so other people can train on a common dataset for open source image generation.
### Source Data
Yahoo Flickr Creative Commons 100M Dataset and Synthetically Generated Caption Data.
#### Data Collection and Processing
All synthetic captions were generated with BLIP2. See paper for more details.
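As a rough sketch of that captioning step via `transformers` (the specific BLIP-2 checkpoint is an assumption; the paper does not pin one here):
```python
from transformers import Blip2Processor, Blip2ForConditionalGeneration
from PIL import Image

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

image = Image.open("photo.jpg").convert("RGB")   # any input image
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```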
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
Users of Flickr
## Bias, Risks, and Limitations
See the Yahoo Flickr Creative Commons 100M dataset for more information. The data was collected circa 2014 and is known to have a bias towards internet-connected Western countries. Some areas, such as the Global South, lack representation.
## Citation
**BibTeX:**
```
@article{gokaslan2023commoncanvas,
title={CommonCanvas: An Open Diffusion Model Trained with Creative-Commons Images},
author={Gokaslan, Aaron and Cooper, A Feder and Collins, Jasmine and Seguin, Landan and Jacobson, Austin and Patel, Mihir and Frankle, Jonathan and Stephenson, Cory and Kuleshov, Volodymyr},
journal={arXiv preprint arXiv:2310.16825},
year={2023}
}
```
## Dataset Card Authors
[Aaron Gokaslan](https://huggingface.co/Skylion007)
## Dataset Card Contact
[Aaron Gokaslan](https://huggingface.co/Skylion007)
|
g0ster/TinyStories-Korean | g0ster | 2024-05-15T09:23:41Z | 78 | 9 | [
"task_categories:translation",
"language:ko",
"language:en",
"license:mit",
"size_categories:10M<n<100M",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"translation"
] | 2024-05-14T11:52:01Z | 2 | ---
license: mit
task_categories:
- translation
language:
- ko
- en
pretty_name: tinystories-korean
size_categories:
- 1M<n<10M
---
# TinyStories-Korean
<!-- Provide a quick summary of the dataset. -->
This dataset is a translated version of [roneneldan](https://huggingface.co/roneneldan)'s [TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories) dataset.
I first downloaded roneneldan's TinyStories and organized it in a db file. Then I used a local translation model, [eeve](https://huggingface.co/yanolja/EEVE-Korean-Instruct-10.8B-v1.0),
to translate it, and converted the result back to a txt file.
Feel free to use!
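A minimal loading sketch (the data is plain text files, so `datasets` exposes a single `text` column; the `train` split name is an assumption):
```python
from datasets import load_dataset

stories = load_dataset("g0ster/TinyStories-Korean", split="train")
print(stories[0]["text"])
```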
---
## Citation
```
@misc{kim2024tinystories,
title={TinyStories Korean translations},
author={Dohoon Kim(g0ster)},
year={2024},
}
``` |
flytech/python-codes-25k | flytech | 2024-05-15T08:04:00Z | 575 | 121 | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:token-classification",
"task_categories:summarization",
"task_categories:text2text-generation",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us",
"code",
"python",
"flytech",
"cleaned",
"instructional",
"dataset 25k",
"text2code",
"code2text",
"behavioral",
"codegeneration",
"trending",
"programming"
] | [
"text-classification",
"question-answering",
"token-classification",
"summarization",
"text2text-generation",
"text-generation"
] | 2023-10-30T17:03:27Z | null | ---
task_categories:
- text-classification
- question-answering
- token-classification
- summarization
- text2text-generation
- text-generation
tags:
- code
- python
- flytech
- cleaned
- instructional
- dataset 25k
- text2code
- code2text
- behavioral
- codegeneration
- trending
- programming
size_categories:
- 1M<n<10M
---
### <span style="color:#307090">License</span>
<span style="color:darkorange">MIT</span>
<hr style="height:1px;border:none;color:#333;background-color:#444;" />
## <span style="color:darkcyan">This is a Cleaned Python Dataset Covering 25,000 Instructional Tasks</span>
### <span style="color:#307090">Overview</span>
The dataset has 4 key features (fields): <b><span style="color:#205070">instruction</span></b>, <b><span style="color:#205070">input</span></b>, <b><span style="color:#205070">output</span></b>, and <b><span style="color:#205070">text</span></b>.
<span style="color:darkcyan">It's a rich source for Python codes, tasks, and extends into behavioral aspects.</span>
<hr style="height:1px;border:none;color:#333;background-color:#444;" />
### <span style="color:#307090">Dataset Statistics</span>
- **Total Entries**: <span style="color:darkmagenta">24,813</span>
- **Unique Instructions**: <span style="color:darkmagenta">24,580</span>
- **Unique Inputs**: <span style="color:darkmagenta">3,666</span>
- **Unique Outputs**: <span style="color:darkmagenta">24,581</span>
- **Unique Texts**: <span style="color:darkmagenta">24,813</span>
- **Average Tokens per example**: <span style="color:darkmagenta">508</span>
### <span style="color:#307090">Features</span>
- `instruction`: The instructional task to be performed / User input
- `input`: Very short, introductive part of AI response or empty
- `output`: Python code that accomplishes the task
- `text`: All fields combined together
<hr style="height:1px;border:none;color:#333;background-color:#444;" />
### <span style="color:#307090">Usage</span>
<span style="color:darkcyan">This dataset can be useful for:</span>
- <span style="color:#607090">Code generation tasks</span>
- <span style="color:#607090">Natural Language Understanding models specialized in coding languages</span>
- <span style="color:#607090">Behavioral analysis based on the given tasks and codes</span>
- <span style="color:#607090">Educational purposes to understand coding styles and task variations</span>
<span style="color:darkcyan">To load the dataset, one can use the following snippet:</span>
```python
# Double-check that there are ~25k examples rather than almost 50k (HF treats jsonl and json as two different datasets)
from datasets import load_dataset
dataset = load_dataset('flytech/python-codes-25k', split='train')
# One can map the dataset in any way, for the sake of example:
dataset = dataset.map(lambda example: {'text': example['instruction'] + ' ' + example['input'] + ' ' + example['output']})['text']
# Remember that you don't need to map if the dataset has a "text" field already:)
```
### <span style="color:#307090">Access & Contributions</span>
<span style="color:#887844">Feel free to use this dataset as per the MIT license. Contributions to enhance or expand the dataset are welcome. The dataset can also be found on kaggle, under the same name but from different author.</span> |
Voxel51/WLASL | Voxel51 | 2024-05-06T15:10:59Z | 10,640 | 6 | [
"task_categories:video-classification",
"language:en",
"license:other",
"size_categories:10K<n<100K",
"modality:image",
"modality:video",
"library:fiftyone",
"arxiv:1910.11006",
"region:us",
"fiftyone",
"video",
"activity-recognition",
"asl",
"sign-language"
] | [
"video-classification"
] | 2024-04-22T16:03:30Z | null | ---
annotations_creators: []
language: en
license: other
size_categories:
- 10K<n<100K
task_categories:
- video-classification
task_ids: []
pretty_name: World Level American Sign Language
tags:
- fiftyone
- video
- activity-recognition
- asl
- sign-language
dataset_summary: >

This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 11980
samples.
## Installation
If you haven't already, install FiftyOne:
```bash
pip install -U fiftyone
```
## Usage
```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh
# Load the dataset
# Note: other available arguments include 'max_samples', etc
dataset = fouh.load_from_hub("Voxel51/WLASL")
# Launch the App
session = fo.launch_app(dataset)
```
---
# Dataset Card for WLASL
<!-- Provide a quick summary of the dataset. -->

This is a [FiftyOne](https://github.com/voxel51/fiftyone) video dataset with 11980 samples.
## Installation
If you haven't already, install FiftyOne:
```bash
pip install -U fiftyone
```
## Usage
```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh
# Load the dataset
# Note: other available arguments include 'max_samples', etc
dataset = fouh.load_from_hub("Voxel51/WLASL")
# Launch the App
session = fo.launch_app(dataset)
```
## Dataset Details
### Dataset Description
WLASL is the largest video dataset for Word-Level American Sign Language (ASL) recognition, featuring 2,000 common words in ASL. The authors hope WLASL will facilitate research in sign language understanding and eventually benefit communication between deaf and hearing communities.
- **Curated by:** Dongxu Li and Hongdong Li
- **Language(s) (NLP):** en
- **License:** other
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/dxli94/WLASL
- **Paper:** https://arxiv.org/abs/1910.11006
- **Homepage:** https://dxli94.github.io/WLASL/
- **Demo:** https://try.fiftyone.ai/datasets/asl-dataset/samples
## Uses
All the WLASL data is intended for academic and computational use only. No commercial usage is allowed. Licensed under the [Computational Use of Data Agreement](https://github.com/microsoft/Computational-Use-of-Data-Agreement/releases/tag/v1.0) (C-UDA)
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@misc{li2020wordlevel,
title={Word-level Deep Sign Language Recognition from Video: A New Large-scale Dataset and Methods Comparison},
author={Dongxu Li and Cristian Rodriguez Opazo and Xin Yu and Hongdong Li},
year={2020},
eprint={1910.11006},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
@inproceedings{li2020transferring,
title={Transferring cross-domain knowledge for video sign language recognition},
author={Li, Dongxu and Yu, Xin and Xu, Chenchen and Petersson, Lars and Li, Hongdong},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={6205--6214},
year={2020}
}
```
## Dataset Card Authors
[Jacob Marks](https://huggingface.co/jamarks)
|
alexantonov/chuvash_parallel | alexantonov | 2024-05-06T14:09:47Z | 123 | 6 | [
"multilinguality:translation",
"source_datasets:original",
"language:cv",
"region:us"
] | [] | 2022-03-02T23:29:22Z | 1 | ---
language:
- cv
multilinguality:
- translation
source_datasets:
- original
task_ids:
- machine-translation
---
# Dataset Description
Deprecated. Please use [Chuvash-English](https://huggingface.co/datasets/alexantonov/chuvash_english_parallel) and [Chuvash-Russian](https://huggingface.co/datasets/alexantonov/chuvash_russian_parallel) datasets in a more convenient format.
## Chuvash-Russian parallel corpus
1M parallel sentences, manually aligned.
## Chuvash-English parallel corpus.
200K parallel sentences, automatically aligned.
## Contributions
For additional details, contact [@AlAntonov](https://github.com/AlAntonov). |
lcw99/wikipedia-korean-20221001 | lcw99 | 2024-05-05T09:12:11Z | 273 | 7 | [
"language:ko",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2022-10-10T03:49:37Z | 1 | ---
language:
- ko
license: apache-2.0
---
[20240501 update](https://huggingface.co/datasets/lcw99/wikipedia-korean-20240501) |
ylacombe/expresso | ylacombe | 2024-04-30T16:49:14Z | 578 | 61 | [
"language:en",
"license:cc-by-nc-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2308.05725",
"region:us"
] | [] | 2024-04-30T16:10:38Z | 5 | ---
dataset_info:
config_name: read
features:
- name: audio
dtype: audio
- name: text
dtype: string
- name: speaker_id
dtype: string
- name: style
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 5702432944.34
num_examples: 11615
download_size: 5761373569
dataset_size: 5702432944.34
configs:
- config_name: read
data_files:
- split: train
path: read/train-*
license: cc-by-nc-4.0
language:
- en
pretty_name: The Expresso Dataset
---
# The Expresso Dataset
[[paper]](https://arxiv.org/abs/2308.05725) [[demo samples]](https://speechbot.github.io/expresso/) [[Original repository]](https://github.com/facebookresearch/textlesslib/tree/main/examples/expresso/dataset)
## Introduction
The Expresso dataset is a high-quality (48kHz) expressive speech dataset that includes both expressively rendered read speech (8 styles, in mono wav format) and improvised dialogues (26 styles, in stereo wav format). The dataset includes 4 speakers (2 males, 2 females), and totals 40 hours (11h read, 30h improvised). The transcriptions of the read speech are also provided.
You can listen to samples from the Expresso Dataset at [this website](https://speechbot.github.io/expresso/).
## Data Statistics
Here are the statistics of Expresso’s expressive styles:
| Style             | Read (min) | Improvised (min) | Total (hrs) |
|-------------------|-----------:|-----------------:|------------:|
| angry             | -          | 82               | 1.4         |
| animal            | -          | 27               | 0.4         |
| animal_directed   | -          | 32               | 0.5         |
| awe               | -          | 92               | 1.5         |
| bored             | -          | 92               | 1.5         |
| calm              | -          | 93               | 1.6         |
| child             | -          | 28               | 0.4         |
| child_directed    | -          | 38               | 0.6         |
| confused          | 94         | 66               | 2.7         |
| default           | 133        | 158              | 4.9         |
| desire            | -          | 92               | 1.5         |
| disgusted         | -          | 118              | 2.0         |
| enunciated        | 116        | 62               | 3.0         |
| fast              | -          | 98               | 1.6         |
| fearful           | -          | 98               | 1.6         |
| happy             | 74         | 92               | 2.8         |
| laughing          | 94         | 103              | 3.3         |
| narration         | 21         | 76               | 1.6         |
| non_verbal        | -          | 32               | 0.5         |
| projected         | -          | 94               | 1.6         |
| sad               | 81         | 101              | 3.0         |
| sarcastic         | -          | 106              | 1.8         |
| singing*          | -          | 4                | 0.07        |
| sleepy            | -          | 93               | 1.5         |
| sympathetic       | -          | 100              | 1.7         |
| whisper           | 79         | 86               | 2.8         |
| **Total**         | **11.5h**  | **34.4h**        | **45.9h**   |
*singing is the only improvised style that is not in dialogue format.
## Audio Quality
The audio was recorded in a professional recording studio with minimal background noise at 48kHz/24bit. The files for read speech and singing are in mono wav format; the dialogue section is in stereo (one channel per actor), preserving the original flow of turn-taking.
### Read Speech
The `read` config contains all of the read speech as well as the singing style.
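A minimal loading sketch with the `datasets` library (the repository id, config name, split, and field names are taken from the YAML header above; streaming is optional but avoids downloading the full ~5.7 GB split up front):

```python
from datasets import load_dataset

# Stream the "read" config so the whole split is not downloaded eagerly
dataset = load_dataset("ylacombe/expresso", "read", split="train", streaming=True)

sample = next(iter(dataset))
print(sample["speaker_id"], sample["style"], sample["text"])
```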
## License
The Expresso dataset is distributed under the [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) license.
## Reference
For more information, see the paper: [EXPRESSO: A Benchmark and Analysis of Discrete Expressive Speech Resynthesis](https://arxiv.org/abs/2308.05725), Tu Anh Nguyen*, Wei-Ning Hsu*, Antony D'Avirro*, Bowen Shi*, Itai Gat, Maryam Fazel-Zarani, Tal Remez, Jade Copet, Gabriel Synnaeve, Michael Hassid, Felix Kreuk, Yossi Adi⁺, Emmanuel Dupoux⁺, INTERSPEECH 2023.
|
cogsci13/Amazon-Reviews-2023-Books-Meta | cogsci13 | 2024-04-18T15:23:50Z | 163 | 6 | [
"language:en",
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"recommendation",
"reviews"
] | [] | 2024-04-18T15:11:14Z | 2 | ---
language:
- en
tags:
- recommendation
- reviews
size_categories:
- 1M<n<10M
---
# Amazon Reviews 2023 (Books Only)
**This is a subset of the Amazon Reviews 2023 dataset. Please visit [amazon-reviews-2023.github.io/](https://amazon-reviews-2023.github.io/) for more details, loading scripts, and preprocessed benchmark files.**
**[April 18, 2024]** Update
1. This dataset was created and pushed for the first time.
---
<!-- Provide a quick summary of the dataset. -->
This is a large-scale **Amazon Reviews** dataset, collected in **2023** by [McAuley Lab](https://cseweb.ucsd.edu/~jmcauley/), and it includes rich features such as:
1. **User Reviews** (*ratings*, *text*, *helpfulness votes*, etc.);
2. **Item Metadata** (*descriptions*, *price*, *raw image*, etc.);
## What's New?
In the Amazon Reviews'23, we provide:
1. **Larger Dataset:** We collected 571.54M reviews, 245.2% larger than the last version;
2. **Newer Interactions:** Current interactions range from May 1996 to Sep. 2023;
3. **Richer Metadata:** More descriptive features in item metadata;
4. **Fine-grained Timestamp:** Interaction timestamp at the second or finer level;
5. **Cleaner Processing:** Cleaner item metadata than previous versions;
6. **Standard Splitting:** Standard data splits to encourage RecSys benchmarking.
## Basic Statistics
> We define the <b>#R_Tokens</b> as the number of [tokens](https://pypi.org/project/tiktoken/) in user reviews and <b>#M_Tokens</b> as the number of [tokens](https://pypi.org/project/tiktoken/) if treating the dictionaries of item attributes as strings. We emphasize them as important statistics in the era of LLMs.
> We count the number of items based on user reviews rather than item metadata files. Note that some items lack metadata.
### Grouped by Category
| Category | #User | #Item | #Rating | #R_Token | #M_Token | Download |
| ------------------------ | ------: | ------: | --------: | -------: | -------: | ------------------------------: |
| Books | 10.3M | 4.4M | 29.5M | 2.9B | 3.7B | <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/review_categories/Books.jsonl.gz' download> review</a>, <a href='https://datarepo.eng.ucsd.edu/mcauley_group/data/amazon_2023/raw/meta_categories/meta_Books.jsonl.gz' download> meta </a> |
> Check Pure ID files and corresponding data splitting strategies in <b>[Common Data Processing](https://amazon-reviews-2023.github.io/data_processing/index.html)</b> section.
## Quick Start
### Load User Reviews
```python
from datasets import load_dataset
dataset = load_dataset("cogsci13/Amazon-Reviews-2023-Books-Review", "raw_review_Books", trust_remote_code=True)
print(dataset["full"][0])
```
```python
{'rating': {0: 1.0},
'title': {0: 'Not a watercolor book! Seems like copies imo.'},
'text': {0: 'It is definitely not a watercolor book. The paper bucked completely. The pages honestly appear to be photo copies of other pictures. I say that bc if you look at the seal pics you can see the tell tale line at the bottom of the page. As someone who has made many photocopies of pages in my time so I could try out different colors & mediums that black line is a dead giveaway to me. It’s on other pages too. The entire book just seems off. Nothing is sharp & clear. There is what looks like toner dust on all the pages making them look muddy. There are no sharp lines & there is no clear definition. At least there isn’t in my copy. And the Coloring Book for Adult on the bottom of the front cover annoys me. Why is it singular & not plural? They usually say coloring book for kids or coloring book for kids & adults or coloring book for adults- plural. Lol Plus it would work for kids if you can get over the grey scale nature of it. Personally I’m not going to waste expensive pens & paints trying to paint over the grey & black mess. I grew up in SW Florida minutes from the beaches & I was really excited about the sea life in this. I hope the printers & designers figure out how to clean up the mess bc some of the designs are really cute. They just aren’t worth my time to hand trace & transfer them, but I’m sure there are ppl that will be up to the challenge. This is one is a hard no. Going back. I tried.'},
'images': {0: array([{'attachment_type': 'IMAGE', 'large_image_url': 'https://m.media-amazon.com/images/I/516HBU7LQoL._SL1600_.jpg', 'medium_image_url': 'https://m.media-amazon.com/images/I/516HBU7LQoL._SL800_.jpg', 'small_image_url': 'https://m.media-amazon.com/images/I/516HBU7LQoL._SL256_.jpg'},
{'attachment_type': 'IMAGE', 'large_image_url': 'https://m.media-amazon.com/images/I/71+XwcacMmL._SL1600_.jpg', 'medium_image_url': 'https://m.media-amazon.com/images/I/71+XwcacMmL._SL800_.jpg', 'small_image_url': 'https://m.media-amazon.com/images/I/71+XwcacMmL._SL256_.jpg'},
{'attachment_type': 'IMAGE', 'large_image_url': 'https://m.media-amazon.com/images/I/71RbTuvD1ZL._SL1600_.jpg', 'medium_image_url': 'https://m.media-amazon.com/images/I/71RbTuvD1ZL._SL800_.jpg', 'small_image_url': 'https://m.media-amazon.com/images/I/71RbTuvD1ZL._SL256_.jpg'},
{'attachment_type': 'IMAGE', 'large_image_url': 'https://m.media-amazon.com/images/I/71U63wdOeZL._SL1600_.jpg', 'medium_image_url': 'https://m.media-amazon.com/images/I/71U63wdOeZL._SL800_.jpg', 'small_image_url': 'https://m.media-amazon.com/images/I/71U63wdOeZL._SL256_.jpg'},
{'attachment_type': 'IMAGE', 'large_image_url': 'https://m.media-amazon.com/images/I/71WFEDyKcKL._SL1600_.jpg', 'medium_image_url': 'https://m.media-amazon.com/images/I/71WFEDyKcKL._SL800_.jpg', 'small_image_url': 'https://m.media-amazon.com/images/I/71WFEDyKcKL._SL256_.jpg'},
{'attachment_type': 'IMAGE', 'large_image_url': 'https://m.media-amazon.com/images/I/8109NwjpHKL._SL1600_.jpg', 'medium_image_url': 'https://m.media-amazon.com/images/I/8109NwjpHKL._SL800_.jpg', 'small_image_url': 'https://m.media-amazon.com/images/I/8109NwjpHKL._SL256_.jpg'},
{'attachment_type': 'IMAGE', 'large_image_url': 'https://m.media-amazon.com/images/I/814gxfh8wcL._SL1600_.jpg', 'medium_image_url': 'https://m.media-amazon.com/images/I/814gxfh8wcL._SL800_.jpg', 'small_image_url': 'https://m.media-amazon.com/images/I/814gxfh8wcL._SL256_.jpg'},
{'attachment_type': 'IMAGE', 'large_image_url': 'https://m.media-amazon.com/images/I/81HC0vKRC2L._SL1600_.jpg', 'medium_image_url': 'https://m.media-amazon.com/images/I/81HC0vKRC2L._SL800_.jpg', 'small_image_url': 'https://m.media-amazon.com/images/I/81HC0vKRC2L._SL256_.jpg'},
{'attachment_type': 'IMAGE', 'large_image_url': 'https://m.media-amazon.com/images/I/81Nx6BnRLxL._SL1600_.jpg', 'medium_image_url': 'https://m.media-amazon.com/images/I/81Nx6BnRLxL._SL800_.jpg', 'small_image_url': 'https://m.media-amazon.com/images/I/81Nx6BnRLxL._SL256_.jpg'},
{'attachment_type': 'IMAGE', 'large_image_url': 'https://m.media-amazon.com/images/I/81QQMwBcVPL._SL1600_.jpg', 'medium_image_url': 'https://m.media-amazon.com/images/I/81QQMwBcVPL._SL800_.jpg', 'small_image_url': 'https://m.media-amazon.com/images/I/81QQMwBcVPL._SL256_.jpg'},
{'attachment_type': 'IMAGE', 'large_image_url': 'https://m.media-amazon.com/images/I/81fgT3R3OwL._SL1600_.jpg', 'medium_image_url': 'https://m.media-amazon.com/images/I/81fgT3R3OwL._SL800_.jpg', 'small_image_url': 'https://m.media-amazon.com/images/I/81fgT3R3OwL._SL256_.jpg'},
{'attachment_type': 'IMAGE', 'large_image_url': 'https://m.media-amazon.com/images/I/81mfzny0I5L._SL1600_.jpg', 'medium_image_url': 'https://m.media-amazon.com/images/I/81mfzny0I5L._SL800_.jpg', 'small_image_url': 'https://m.media-amazon.com/images/I/81mfzny0I5L._SL256_.jpg'},
{'attachment_type': 'IMAGE', 'large_image_url': 'https://m.media-amazon.com/images/I/81nir7bf91L._SL1600_.jpg', 'medium_image_url': 'https://m.media-amazon.com/images/I/81nir7bf91L._SL800_.jpg', 'small_image_url': 'https://m.media-amazon.com/images/I/81nir7bf91L._SL256_.jpg'},
{'attachment_type': 'IMAGE', 'large_image_url': 'https://m.media-amazon.com/images/I/81yLUo6ZL3L._SL1600_.jpg', 'medium_image_url': 'https://m.media-amazon.com/images/I/81yLUo6ZL3L._SL800_.jpg', 'small_image_url': 'https://m.media-amazon.com/images/I/81yLUo6ZL3L._SL256_.jpg'},
{'attachment_type': 'IMAGE', 'large_image_url': 'https://m.media-amazon.com/images/I/81zh9h5RwkL._SL1600_.jpg', 'medium_image_url': 'https://m.media-amazon.com/images/I/81zh9h5RwkL._SL800_.jpg', 'small_image_url': 'https://m.media-amazon.com/images/I/81zh9h5RwkL._SL256_.jpg'},
{'attachment_type': 'IMAGE', 'large_image_url': 'https://m.media-amazon.com/images/I/91yfcpFlEqL._SL1600_.jpg', 'medium_image_url': 'https://m.media-amazon.com/images/I/91yfcpFlEqL._SL800_.jpg', 'small_image_url': 'https://m.media-amazon.com/images/I/91yfcpFlEqL._SL256_.jpg'}],
dtype=object)},
'asin': {0: 'B09BGPFTDB'},
'parent_asin': {0: 'B09BGPFTDB'},
'user_id': {0: 'AFKZENTNBQ7A7V7UXW5JJI6UGRYQ'},
'timestamp': {0: 1642399598485},
'helpful_vote': {0: 0},
'verified_purchase': {0: True}}
```
### Load Item Metadata
```python
dataset = load_dataset("cogsci13/Amazon-Reviews-2023-Books-Meta", "raw_meta_Books", split="full", trust_remote_code=True)
print(dataset[0])
```
```python
{'main_category': {0: 'Books'},
'title': {0: 'Chaucer'},
'average_rating': {0: 4.5},
'rating_number': {0: 29},
'features': {0: array([], dtype=object)},
'description': {0: array([], dtype=object)},
'price': {0: '8.23'},
'images': {0: {'hi_res': array([None], dtype=object),
'large': array(['https://m.media-amazon.com/images/I/41X61VPJYKL._SX334_BO1,204,203,200_.jpg'],
dtype=object),
'thumb': array([None], dtype=object),
'variant': array(['MAIN'], dtype=object)}},
'videos': {0: {'title': array([], dtype=object),
'url': array([], dtype=object),
'user_id': array([], dtype=object)}},
'store': {0: 'Peter Ackroyd (Author)'},
'categories': {0: array(['Books', 'Literature & Fiction', 'History & Criticism'],
dtype=object)},
'details': {0: '{"Publisher": "Chatto & Windus; First Edition (January 1, 2004)", "Language": "English", "Hardcover": "196 pages", "ISBN 10": "0701169850", "ISBN 13": "978-0701169855", "Item Weight": "10.1 ounces", "Dimensions": "5.39 x 0.71 x 7.48 inches"}'},
'parent_asin': {0: '0701169850'},
'bought_together': {0: None},
'subtitle': {0: 'Hardcover – Import, January 1, 2004'},
'author': {0: "{'avatar': 'https://m.media-amazon.com/images/I/21Je2zja9pL._SY600_.jpg', 'name': 'Peter Ackroyd', 'about': ['Peter Ackroyd, (born 5 October 1949) is an English biographer, novelist and critic with a particular interest in the history and culture of London. For his novels about English history and culture and his biographies of, among others, William Blake, Charles Dickens, T. S. Eliot and Sir Thomas More, he won the Somerset Maugham Award and two Whitbread Awards. He is noted for the volume of work he has produced, the range of styles therein, his skill at assuming different voices and the depth of his research.', 'He was elected a fellow of the Royal Society of Literature in 1984 and appointed a Commander of the Order of the British Empire in 2003.', 'Bio from Wikipedia, the free encyclopedia.']}"}}
```
> Check data loading examples and Huggingface datasets APIs in <b>[Common Data Loading](https://amazon-reviews-2023.github.io/data_loading/index.html)</b> section.
## Data Fields
### For User Reviews
| Field | Type | Explanation |
| ----- | ---- | ----------- |
| rating | float | Rating of the product (from 1.0 to 5.0). |
| title | str | Title of the user review. |
| text | str | Text body of the user review. |
| images | list | Images that users post after they have received the product. Each image has different sizes (small, medium, large), represented by the small_image_url, medium_image_url, and large_image_url respectively. |
| asin | str | ID of the product. |
| parent_asin | str | Parent ID of the product. Note: Products with different colors, styles, sizes usually belong to the same parent ID. The “asin” in previous Amazon datasets is actually parent ID. <b>Please use parent ID to find product meta.</b> |
| user_id | str | ID of the reviewer |
| timestamp | int | Time of the review (unix time) |
| verified_purchase | bool | User purchase verification |
| helpful_vote | int | Helpful votes of the review |
### For Item Metadata
| Field | Type | Explanation |
| ----- | ---- | ----------- |
| main_category | str | Main category (i.e., domain) of the product. |
| title | str | Name of the product. |
| average_rating | float | Rating of the product shown on the product page. |
| rating_number | int | Number of ratings in the product. |
| features | list | Bullet-point format features of the product. |
| description | list | Description of the product. |
| price | float | Price in US dollars (at time of crawling). |
| images | list | Images of the product. Each image has different sizes (thumb, large, hi_res). The “variant” field shows the position of image. |
| videos | list | Videos of the product including title and url. |
| store | str | Store name of the product. |
| categories | list | Hierarchical categories of the product. |
| details | dict | Product details, including materials, brand, sizes, etc. |
| parent_asin | str | Parent ID of the product. |
| bought_together | list | Recommended bundles from the websites. |
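Note that `details` is stored as a JSON-encoded string rather than a parsed object. Below is a minimal sketch for turning it into a dictionary, reusing the `dataset` loaded in the Quick Start section above (the key names in the final `print` are taken from the example record and may be absent for other items):

```python
import json

# 'details' is a JSON-encoded string in each metadata record; parse it on demand
record = dataset[0]
details = json.loads(record["details"])
print(details.get("Publisher"), details.get("Language"))
```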
## Citation
```bibtex
@article{hou2024bridging,
title={Bridging Language and Items for Retrieval and Recommendation},
author={Hou, Yupeng and Li, Jiacheng and He, Zhankui and Yan, An and Chen, Xiusi and McAuley, Julian},
journal={arXiv preprint arXiv:2403.03952},
year={2024}
}
```
## Contact Us
- **Report Bugs**: To report bugs in the dataset, please file an issue on our [GitHub](https://github.com/hyp1231/AmazonReviews2023/issues/new).
- **Others**: For research collaborations or other questions, please email **yphou AT ucsd.edu**. |
allenai/dolma | allenai | 2024-04-17T02:57:00Z | 747 | 901 | [
"task_categories:text-generation",
"language:en",
"license:odc-by",
"size_categories:n>1T",
"arxiv:2402.00159",
"arxiv:2301.13688",
"region:us",
"language-modeling",
"casual-lm",
"llm"
] | [
"text-generation"
] | 2023-06-30T20:14:39Z | null | ---
license: odc-by
viewer: false
task_categories:
- text-generation
language:
- en
tags:
- language-modeling
- casual-lm
- llm
pretty_name: Dolma
size_categories:
- n>1T
---
# Dolma
<img alt="Dolma's official logo. It's dolma written in yellow, round lowercase letters over a blue background." src="https://raw.githubusercontent.com/allenai/dolma/main/docs/assets/AI2_Blog_1400x685_2x.webp" width="100%">
Dolma is a dataset of 3 trillion tokens from a diverse mix of web content, academic publications, code, books, and encyclopedic materials.
More information:
- Read the Dolma **manuscript** and its **Data Sheet** [on ArXiv](https://arxiv.org/abs/2402.00159);
- Explore the [**open source tools**](https://github.com/allenai/dolma) we created to curate Dolma.
- Want to request removal of personal data? Use [this form](https://forms.gle/q4BNUUxUxKwKkfdT6) to notify us of documents containing PII about a specific user.
To learn more about the toolkit used to create Dolma, including how to replicate this dataset, head over to our [GitHub project page](https://github.com/allenai/dolma/tree/main/docs)!
**2024-04-17: Dolma v1.7 Release.** We have released an updated version of Dolma that we used to train our latest [OLMo 7B-v1.7](https://huggingface.co/allenai/OLMo-7b-v1.7) model.
**2024-04-15: License Change.** We have updated the license of Dolma to [ODC-BY](https://opendatacommons.org/licenses/by/1-0/). Please see this [blog post](https://blog.allenai.org/making-a-switch-dolma-moves-to-odc-by-8f0e73852f44) for more information.
## Versions
At the moment, there are six versions of Dolma available:
| **Version** | **Default?** | **Release Date** | **Size** (gzip) | **Description** |
|--|:--:|--|--|--|
| `v1_7` | ✅ | 2024-04-15 | 4.5 TB | Used to train [OLMo-7B-v1.7](https://huggingface.co/allenai/OLMo-7b-v1.7). New sources, more quality filtering, fuzzy deduplication. |
| `v1_6` | | 2024-01-31 | 5.4 TB | An update to v1.5 with some deduplication of documents with too few tokens or too many repeated n-grams. |
| `v1_6-sample` | | 2024-01-31 | 16.4 GB | A smaller sample of Dolma, with roughly 10 billion tokens. Useful for data exploration. |
| `v1_5` | | 2023-10-31 | 6.4 TB | Used to train [OLMo-1B](https://huggingface.co/allenai/OLMo-1B). Roughly 3 trillion tokens. |
| `v1_5-sample` | | 2023-10-31 | 2.9 TB | A sample of roughly 1.9 trillion tokens used to train [OLMo-7B](https://huggingface.co/allenai/OLMo-7B) |
| `v1` | | 2023-08-18 | 6.0 TB | The first version of Dolma. |
## Summary Statistics (v1.7)
| **Source** | **Provenance** | **New?** | **Documents** (millions) | **OLMo tokens** (billions) | **Sample Proportion** | **Cutoff Date** | **Processing**
|--|--|--|--|--|--|--|--|
| Dolma's CC | [Common Crawl](https://commoncrawl.org/) via Dolma v1.6 | Updated | 875.2 | 1,195.5 | 50% | Mar 2023 | Extracted using the Dolma pipeline; new quality filtering and deduplication steps. |
| Refined Web | [Refined Web](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | Yes | 664.0 | 456.4 | 100% | Feb 2023 | Filtered using the Dolma pipeline; new quality filtering and deduplication steps. |
| StarCoder | [StarCoder](https://huggingface.co/blog/starcoder) | Yes | 206.6 | 263.8 | 100% | May 2023 | No further processing. |
| C4 | [C4](https://huggingface.co/datasets/c4) via Dolma v1.6 | Updated | 249.9 | 138.4 | 50% | Apr 2019 | Filtered using the Dolma pipeline; new quality filtering and deduplication steps. |
| Reddit | [PushShift API](https://github.com/pushshift/api) | Updated | 377.4 | 79.9 | 100% | Mar 2023 | Extracted using the Dolma pipeline; new quality filtering and deduplication steps. |
| Semantic Scholar ([S2ORC](https://aclanthology.org/2020.acl-main.447/) & [S2AG](https://www.semanticscholar.org/product/api)) | [peS2o](https://huggingface.co/datasets/allenai/peS2o) via Dolma v1.6 | No | 38.8 | 57.2 | 100% | Mar 2023 | Same as Dolma v1.6 |
| arXiv | [RedPajama v1](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) | Yes | 1.5 | 28.0 | 100% | Mar 2023 | No further processing. |
| StackExchange | [RedPajama v1](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) | Yes | 29.3 | 19.6 | 100% | Mar 2023 | No further processing. |
| Flan | [Flan Collection](https://arxiv.org/abs/2301.13688), reproduced following the [original code](https://github.com/google-research/FLAN/tree/main/flan/v2), as performed by [Dettmers et al., (2023)](https://openreview.net/forum?id=OUIFPHEgJU) | Yes | 52.1 | 16.5 | 100% | Feb 2023 | After reproducing Flan, sampled to balance different Flan subsets. Reformatted for pretraining with newlines separating instruction and demonstration. |
| CC News | [Common Crawl](https://commoncrawl.org/blog/news-dataset-available) | Yes | 22.0 | 14.3 | 100% | Mar 2023 | Extracted using the Dolma pipeline; new quality filtering and deduplication steps. |
| OpenWebMath | [OpenWebMath](https://huggingface.co/datasets/open-web-math/open-web-math) via [Proof Pile II](https://huggingface.co/datasets/EleutherAI/proof-pile-2) | Yes | 2.9 | 12.6 | 100% | May 2023 | Training subset; no further processing. |
| Algebraic Stack | [Proof Pile II](https://huggingface.co/datasets/EleutherAI/proof-pile-2) | Yes | 2.8 | 12.6 | 100% | Oct 2023 | Training subset; no further processing. |
| Project Gutenberg | [Project Gutenberg](https://www.gutenberg.org) via Dolma v1.6 | No | 0.0556 | 5.3 | 100% | Mar 2023 | Same as Dolma v1.6 |
| MegaWika | [MegaWika](https://huggingface.co/datasets/hltcoe/megawika) | Yes | 3.2 | 4.6 | 100% | Jul 2023 | English web pages cited from Wikipedia; curated using the full Dolma pipeline. |
| Wikipedia & Wikibooks | [Wikimedia](https://dumps.wikimedia.org) via Dolma v1.6 | No | 6.2 | 3.7 | 200% | Mar 2023 | Same as Dolma v1.6 |
| **Total** | | | **2532.0** | **2,308.5** | **1,715.1** | **Oct 2023** | |
(A subset of the total data was used to train OLMo 7B-v1.7. The token counts above are based on the full dataset; taking the sampling proportions into account gives the actual number of tokens used for training: 1.715 trillion.)
## Summary Statistics (v1.6)
| **Source** | **Doc Type** | **UTF-8 bytes** (GB) | **Documents** (millions) | **Unicode words** (billions) | **Llama tokens** (billions) |
|--|--|--|--|--|--|
| Common Crawl | web pages | 9,022 | 3,370 | 1,775 | 2,281 |
| The Stack | code| 1,043| 210 | 260| 411 |
| C4 | web pages | 790 | 364 | 153| 198 |
| Reddit| social media| 339 | 377| 72| 89 |
| PeS2o | STEM papers| 268 | 38.8| 50| 70 |
| Project Gutenberg | books | 20.4 | 0.056 | 4.0 | 6.0 |
| Wikipedia, Wikibooks | encyclopedic | 16.2 | 6.2 | 3.7 | 4.3 |
| **Total** | | **11,519** | **4,367** | **2,318** | **3,059** |
## Download
The fastest way to download Dolma is to clone this repository and use the files in the `urls` directory.
We recommend using wget in parallel mode to download the files. For example:
```bash
DATA_DIR="<path_to_your_data_directory>"
PARALLEL_DOWNLOADS="<number_of_parallel_downloads>"
DOLMA_VERSION="<version_of_dolma_to_download>"
git clone https://huggingface.co/datasets/allenai/dolma
mkdir -p "${DATA_DIR}"
cat "dolma/urls/${DOLMA_VERSION}.txt" | xargs -n 1 -P "${PARALLEL_DOWNLOADS}" wget -q -P "$DATA_DIR"
```
Then, to load this data using HuggingFace's `datasets` library, you can use the following code:
```python
import os
from datasets import load_dataset
os.environ["DATA_DIR"] = "<path_to_your_data_directory>"
dataset = load_dataset("allenai/dolma", split="train")
```
### Licensing Information
We are releasing this dataset under the terms of [ODC-BY](https://opendatacommons.org/licenses/by/1-0/).
By using this dataset, you are also bound by any license agreements and terms of use of the original data sources.
## Bibtex
If you use our dataset or tooling, please cite us at:
```bibtex
@article{dolma,
title = {{Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research}},
author={
Luca Soldaini and Rodney Kinney and Akshita Bhagia and Dustin Schwenk and David Atkinson and
Russell Authur and Ben Bogin and Khyathi Chandu and Jennifer Dumas and Yanai Elazar and
Valentin Hofmann and Ananya Harsh Jha and Sachin Kumar and Li Lucy and Xinxi Lyu and
Nathan Lambert and Ian Magnusson and Jacob Morrison and Niklas Muennighoff and Aakanksha Naik and
Crystal Nam and Matthew E. Peters and Abhilasha Ravichander and Kyle Richardson and Zejiang Shen and
Emma Strubell and Nishant Subramani and Oyvind Tafjord and Pete Walsh and Luke Zettlemoyer and
Noah A. Smith and Hannaneh Hajishirzi and Iz Beltagy and Dirk Groeneveld and Jesse Dodge and Kyle Lo
},
year = {2024},
journal={arXiv preprint},
}
```
|
LooksJuicy/ruozhiba | LooksJuicy | 2024-04-09T09:10:55Z | 640 | 274 | [
"task_categories:text-generation",
"language:zh",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation"
] | 2024-04-09T09:02:31Z | null | ---
license: apache-2.0
task_categories:
- text-generation
language:
- zh
---
Inspired by [COIG-CQIA](https://huggingface.co/datasets/m-a-p/COIG-CQIA/blob/main/ruozhiba/ruozhiba_ruozhiba.jsonl), this dataset was built along the same lines, but with answers in a comparatively more concise style.
The curated Ruozhiba questions come from the [list of interrogative sentences](https://docs.qq.com/sheet/DUlZ6aURhamdwb1RO?tab=BB08J2) provided in this [github repository](https://github.com/Leymore/ruozhiba/tree/main?tab=readme-ov-file); GPT-4 was called to generate answers, and replies that were obvious refusals were filtered out. |
IlyaGusev/gpt_roleplay_realm | IlyaGusev | 2024-04-07T18:17:11Z | 619 | 95 | [
"task_categories:text-generation",
"language:ru",
"language:en",
"license:cc-by-4.0",
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"gpt-4",
"fictional",
"role-play",
"roleplay",
"gpt-3.5",
"art"
] | [
"text-generation"
] | 2023-05-06T23:21:10Z | null | ---
dataset_info:
features:
- name: name
dtype: string
- name: context
dtype: string
- name: greeting
dtype: string
- name: example_dialogue
list:
- name: content
dtype: string
- name: role
dtype: string
- name: topics
sequence: string
- name: dialogues
list:
- name: chat
list:
- name: content
dtype: string
- name: role
dtype: string
- name: model_name
dtype: string
- name: topic
dtype: string
- name: image_prompt
dtype: string
- name: image
dtype: image
- name: char_id
dtype: string
splits:
- name: en
num_bytes: 197727921.0
num_examples: 216
- name: ru
num_bytes: 207461896.0
num_examples: 219
download_size: 396187206
dataset_size: 405189817.0
license: cc-by-4.0
task_categories:
- text-generation
language:
- ru
- en
tags:
- gpt-4
- fictional
- role-play
- roleplay
- gpt-3.5
- art
pretty_name: GPT Role-play Realm
size_categories:
- 1K<n<10K
---
# GPT Role-play Realm Dataset: The AI-generated character compendium
This is a dataset of GPT-generated characters made to increase the ability of open-source language models to role-play.
<img src="https://cdn.midjourney.com/9c17407c-9ce8-435f-99ab-e349b900a6ed/0_3.png" >
* 219 characters in the Russian part, and 216 characters in the English part. All character descriptions were generated with GPT-4.
* 20 dialogues on unique topics with every character. Topics were generated with GPT-4. The first dialogue out of 20 was also generated with GPT-4, and the other 19 chats were generated with GPT-3.5.
* Images for every character were generated with [Kandinsky 2.1](https://huggingface.co/ai-forever/Kandinsky_2.1). They are not merely images but [character cards](https://zoltanai.github.io/character-editor/).
Code for downloading all English character cards:
```python
import os
from PIL.PngImagePlugin import PngInfo
from datasets import load_dataset
output_dir = "role_play_realm_en"
os.makedirs(output_dir, exist_ok=True)
for row in load_dataset("IlyaGusev/gpt_roleplay_realm", split="en"):
char_id = row["char_id"]
char_info = row["image"].info["chara"]
info = PngInfo()
info.add_text("chara", char_info)
row["image"].save(f"{output_dir}/{char_id}.png", "PNG", pnginfo=info)
```
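The same loop downloads the Russian character cards: swap `split="en"` for `split="ru"` and point `output_dir` at a separate folder.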
## Russian example: Cyber-Granny
<img src="https://i.ibb.co/tHxwFBM/kiberbabushka-7d4cc.png" >
**Имя**: Кибер-Бабушка
**Контекст**: Кибер-Бабушка - это полу-робот, полу-человек, созданный для того, чтобы сохранить лучшие качества традиционных бабушек, такие как забота, доброта и мастерство в приготовлении вкусных блюд. Она знакома с передовыми технологиями, благодаря чему может обеспечить безопасность и удобство своим внукам и гостям. Кибер-бабушка способна помочь с любыми вопросами и с радостью делится своим опытом.
**Приветствие**: Здравствуй, дорогой! Чем могу помочь тебе сегодня?
**Пример диалога**:
- **Пользователь**: Привет, Кибер-Бабушка! Как приготовить пирог с яблоками?
- **Персонаж**: Ох, дорогой, с радостью расскажу! Возьми 3 яблока, очисти и нарежь тонкими ломтиками. Затем возьми 2 стакана муки, 1 стакан сахара, 200 г сливочного масла, 4 яйца и 1 чайную ложку разрыхлителя. Смешай все ингредиенты, выложи на противень и поставь в духовку на 180 градусов на 40 минут. Вуаля, пирог готов!
- **Пользователь**: Спасибо, Кибер-Бабушка!
- **Персонаж**: Всегда рада помочь, дорогой! Не забудь приходить еще!
И ещё **20** диалогов по следующим темам:
- Секреты приготовления старинных блюд и рецептов.
- Использование новых технологий для мастеров кулинарии.
- Воспитание внуков с применением современных методов.
- Что делать, если дома произошла авария или неполадка с бытовой техникой?
- Влияние киборгизации на отношения между поколениями.
- Как адаптироваться к современному миру, не теряя своих традиций?
- Семейные советы от Кибер-Бабушки: как сохранить гармонию в отношениях.
- Лечение и профилактика заболеваний в помощь силам передовой медицины.
- Как создать уют в доме с помощью модных технологий и традиционных методов?
- Безопасность в пространстве интернета: советы Кибер-Бабушки.
- Как научиться доверять технике без потери человеческих ценностей?
- Идеальный гардероб для жизни: советы от Кибер-Бабушки.
- Воспитательные моменты: как пользоваться электронными устройствами вместе с внуками.
- Как развивать креативные способности, используя сочетание новых технологий и традиций?
- На новоселье: тренировка кибер-бабушкиного чутья.
- Лучшие семейные игры и развлечения с использованием передовых технологий.
- Заготовки на зиму: Кибер-Бабушка и секреты хранения продуктов.
- Советы по финансовому планированию и сбережениям для будущих поколений.
- Кибер-Бабушка и генетический код: на что способны современные технологии?
- Золотые правила общения в семье: как сочетать трепетную заботу и современные технологии?
## English example: Flibberdoodle
<img src="https://i.ibb.co/1nzsDR2/flibberdoodle-29e59.png">
**Name**: Flibberdoodle
**Context**: Flibberdoodle is a 2-year-old (which is considered an adult in their species) Scruffapuff, a small, furry creature from the planet Fluffonia. They are about the size of a house cat, with a round body covered in soft, pastel-colored fur that changes colors depending on their mood. Flibberdoodle has large, expressive eyes, two small antennae on their head, and a fluffy tail. They are known for their curious, playful nature and their love for collecting shiny objects. Scruffapuffs communicate through a series of chirps, squeaks, and purrs, which can be understood by those familiar with their species
**Greeting**: \*chirp chirp\* Greetings, friend! I am Flibberdoodle, a Scruffapuff from the planet Fluffonia! Would you like to see my collection of shiny things?
**Example dialogue**:
- **User**: How did you start collecting shiny things?
- **Character**: \*squeak\* Oh, I've always loved shiny things! One day, I found a sparkly pebble, and I just had to keep it. From then on, I've been on a quest to find and collect all the shiny things I can find!
- **User**: What's your favorite shiny object in your collection?
- **Character**: \*purr\* That's a tough question, but I think my favorite is a small, shiny crystal I found on a mountain on Fluffonia. When the light hits it just right, it casts beautiful rainbows all around!
And **20** more dialogues with the following topics:
- Life and culture on the planet Fluffonia
- How Scruffapuffs change color based on mood
- The process of learning Scruffapuff language
- The day in the life of a Scruffapuff
- Methods of searching for and finding shiny objects
- The role of antennae in Scruffapuff communication and biology
- The importance of play and curiosity in Scruffapuff society
- Interplanetary travel experiences and adventures
- Similarities and differences between Earth and Fluffonia
- How Flibberdoodle and other Scruffapuffs interact with other species
- Fluffonian customs and traditions
- The role of shiny objects in Scruffapuff happiness and well-being
- Variations in Scruffapuff fur color, length, and style
- Scruffapuff family dynamics and relationships
- Flibberdoodle's favorite memories and stories from Fluffonia
- The role of Scruffapuffs in intergalactic diplomacy and relations
- How to care for and befriend a Scruffapuff
- The most interesting and valuable shiny objects Flibberdoodle has encountered
- Fluffonian flora and fauna
- The challenges and obstacles Flibberdoodle has faced in their pursuit of shiny objects
## Steps
### Step 1: Character generation (GPT-4)
Creates a set of fictional characters with GPT-4 based on a prompt and a seed list of characters. Output fields are "name", "context", "greeting", and "example_dialogue".
* Script: [generate_chars.py](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/src/data_processing/generate_chars.py)
* Russian seed list: [ru_chargen_seed.jsonl](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/data/ru_chargen_seed.jsonl)
* English seed list: [en_chargen_seed.jsonl](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/data/en_chargen_seed.jsonl)
* Russian prompt: [ru_char.txt](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/external_prompts/ru_char.txt)
* English prompt: [en_char.txt](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/external_prompts/en_char.txt)
### Step 2: Topics generation (GPT-4)
Creates topics for conversations with characters based on their description. Output field: "topics".
* Script: [generate_char_topics.py](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/src/data_processing/generate_char_topics.py)
* Russian prompt: [ru_char_topics.txt](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/external_prompts/ru_char_topics.txt)
* English prompt: [en_char_topics.txt](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/external_prompts/en_char_topics.txt)
### Step 3: Dialogue generation (GPT-4/GPT-3.5)
Generates dialogues based on a character description and a topic. Output field: "dialogues".
* Script: [generate_char_chats.py](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/src/data_processing/generate_char_chats.py)
* Russian prompt: [ru_char_chat.txt](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/external_prompts/ru_char_chat.txt)
* English prompt: [en_char_chat.txt](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/external_prompts/en_char_chat.txt)
### Step 4: Text2Image prompt generation (GPT-4)
Formulates a prompt based on a character description for Stable Diffusion-like models, Kandinsky 2.1 in this case. Output field: "image_prompt".
* Script: [generate_char_image_prompts.py](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/src/data_processing/generate_char_image_prompts.py)
* Prompt: [char_image_prompt.txt](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/external_prompts/char_image_prompt.txt)
### Step 5: Image generation
Generates images based on prompts. Output field: "image".
* Script: [infer_kandinsky.py](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/src/data_processing/infer_kandinsky.py) |
bigbio/med_qa | bigbio | 2024-04-06T01:37:26Z | 3,025 | 95 | [
"multilinguality:multilingual",
"language:en",
"language:zh",
"license:unknown",
"region:us"
] | [] | 2022-11-13T22:09:18Z | null | ---
language:
- en
- zh
bigbio_language:
- English
- Chinese (Simplified)
- Chinese (Traditional, Taiwan)
license: unknown
multilinguality: multilingual
bigbio_license_shortname: UNKNOWN
pretty_name: MedQA
homepage: https://github.com/jind11/MedQA
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- QUESTION_ANSWERING
---
# Dataset Card for MedQA
## Dataset Description
- **Homepage:** https://github.com/jind11/MedQA
- **Pubmed:** False
- **Public:** True
- **Tasks:** QA
In this work, we present the first free-form multiple-choice OpenQA dataset for solving medical problems, MedQA,
collected from the professional medical board exams. It covers three languages: English, simplified Chinese, and
traditional Chinese, and contains 12,723, 34,251, and 14,123 questions for the three languages, respectively. Together
with the question data, we also collect and release a large-scale corpus from medical textbooks from which the reading
comprehension models can obtain necessary knowledge for answering the questions.
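A minimal loading sketch follows; the config name below is an assumption based on the usual BigBio `<dataset>_<lang>_source` naming convention, so check the repository for the exact list of configs:

```python
from datasets import load_dataset

# "med_qa_en_source" is assumed from BigBio naming conventions; verify against the repo
dataset = load_dataset("bigbio/med_qa", name="med_qa_en_source", trust_remote_code=True)
print(dataset["train"][0])
```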
## Citation Information
```
@article{jin2021disease,
title={What disease does this patient have? a large-scale open domain question answering dataset from medical exams},
author={Jin, Di and Pan, Eileen and Oufattole, Nassim and Weng, Wei-Hung and Fang, Hanyi and Szolovits, Peter},
journal={Applied Sciences},
volume={11},
number={14},
pages={6421},
year={2021},
publisher={MDPI}
}
```
|
openbmb/UltraInteract_sft | openbmb | 2024-04-05T14:29:52Z | 455 | 121 | [
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2404.02078",
"region:us"
] | [] | 2024-04-02T15:45:18Z | null | ---
language:
- en
license: mit
pretty_name: UltraInteract_sft
configs:
- config_name: default
data_files:
- split: train
path: 0000_sft.parquet
dataset_info:
features:
- name: task
dtype: string
- name: dataset
dtype: string
- name: instruction
dtype: string
- name: response
dtype: string
- name: id
dtype: string
- name: parent_id
dtype: string
splits:
- name: train
num_bytes: 687238
num_examples: 288579
download_size: 687238
dataset_size: 687238
---
## Introduction
- 📜 [Paper](https://arxiv.org/abs/2404.02078)
- 🤗 [Eurus Collection](https://huggingface.co/collections/openbmb/eurus-660bc40bec5376b3adc9d1c5)
- 🤗 UltraInteract
- [SFT](https://huggingface.co/datasets/openbmb/UltraInteract_sft)
- [Preference Learning](https://huggingface.co/datasets/openbmb/UltraInteract_pair)
- [GitHub Repo](https://github.com/OpenBMB/Eurus)
UltraInteract is a large-scale, high-quality alignment dataset specifically designed for complex reasoning tasks. For each instruction, it includes a preference tree consisting of
- (1) reasoning chains with diverse planning strategies in a unified format
- (2) multi-turn interaction trajectories with the environment and the critique
- (3) pairwise data to facilitate preference learning
## Structure
UltraInteract collects a preference tree for each instruction, with the instruction being the root and each action a node. A trajectory is a root-to-leaf path consisting of a sequence of actions. In each preference tree, all nodes of correct actions and all trajectories ending with correct actions can be used for SFT. Paired correct and incorrect nodes or trajectories can be used for preference learning.
<img src="./figures/tree.png" alt="tree" style="zoom: 20%;" />
## Illustrative Example
Here is an illustrative example of an UltraInteract trajectory over two turns. In each turn, the actor model generates step-by-step reasoning chains, and the environment and the critique model provide observations and textual critique respectively.
<img src="./figures/ui_example.png" alt="ui_example" style="zoom: 25%;" />
## Stats
Below are some statistics about UltraInteract. It consists of 86k instructions, 286k correct answers, and 219k pairs.
<img src="./figures/stats.png" alt="stats" style="zoom: 40%;" />
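Before the per-task schemas below, here is a minimal loading sketch (the repository id, split, and field names are taken from the YAML header above):

```python
from datasets import load_dataset

ds = load_dataset("openbmb/UltraInteract_sft", split="train")
row = ds[0]
print(row["task"], row["dataset"], row["id"], row["parent_id"])
```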
## Dataset Format (SFT)
Coding
```JSONC
{
"id": "92a7356beaa75391539cb79624bdf85b7e5d36d5e74e46a2f1c4001f40b3e055_1_1", // unique data id, contains three parts, the unique string, the depth of the preference tree, and current turn, separated by '_'
"parent_id": "535550938ff58bd73cd7fb85be834bb90b7600daf6b6f6a16b64091e5ecbfe43", // data with the same parent_id have the same instruction but different actions.
"task": "Coding", // task type, choices = ["Coding", "Math_Cot", "Math_PoT", "Logic"]
"dataset": "codecontest", // the dataset where the instruction comes from
"instruction": "Write Python code to solve the task.\nMike has a string s consisting of only lowercase English letters. He wants to change exactly one character from the string so that the resulting one is a palindrome. \n\nA palindrome is a string that reads the same backward as forward, for example strings \"z\", \"aaa\", \"aba\", \"abccba\" are palindromes, but strings \"codeforces\", \"reality\", \"ab\" are not.\n\nInput\n\nThe first and single line contains string s (1 ≤ |s| ≤ 15).\n\nOutput\n\nPrint \"YES\" (without quotes) if Mike can change exactly one character so that the resulting string is palindrome or \"NO\" (without quotes) otherwise. \n\nExamples\n\nInput\n\nabccaa\n\n\nOutput\n\nYES\n\n\nInput\n\nabbcca\n\n\nOutput\n\nNO\n\n\nInput\n\nabcda\n\n\nOutput\n\nYES", // the task description.
"response": "
Step 1: This problem asks us to determine if it is possible to change exactly one character in the string to make it a palindrome.
Step 2: We can start by iterating through the characters in the string and counting the number of differences between corresponding characters from the beginning and end of the string.
Step 3: If the length of the string is odd and there is only one difference, or if the length is even and there is exactly one difference, then it is possible to change exactly one character to make the string a palindrome.
Step 4: We need to consider edge cases such as a single character string or an empty string.
```python
# Step 5: Take the input string
s = input()
# Step 6: Calculate the length of the string
n = len(s)
# Step 7: Initialize a counter to keep track of the number of differences
cnt = 0
# Step 8: Iterate through the first half of the string and count the number of differences between corresponding characters from the beginning and end of the string.
for i in range(n//2):
if s[i] != s[n-i-1]:
cnt += 1
# Step 9: If the length of the string is odd and there are no differences, or if the length is even and there is exactly one difference, print "YES" (without quotes)
if (n % 2 == 1 and cnt == 0) or (cnt == 1):
print('YES')
# Step 10: Otherwise, print "NO" (without quotes)
else:
print('NO')
```" // The step-by-step solution.
}
```
Math
```JSONC
{
"id": "647b2e8706cf459a8e3ec8e99727d2d51ff3414bf187db43d92cff87f8012713_1_1",
"parent_id": "499d250c5852eddcd48a28b6c173f1a2349a16555990a1cd6fb5c7954cb94587",
"task": "Math_PoT",
"dataset": "mathqa",
"instruction": "Tool available:\n[1] Python interpreter\nWhen you send a message containing Python code to python, it will be executed in a stateful Jupyter notebook environment.\nSolve the following math problem step-by-step.\nthe length of a rectangle is doubled while its width is doubled . what is the % change in area ?",
"response": "
Step 1: The area of a rectangle is given by the product of its length and width. If the length is doubled, the area changes from l*w to 2l*w. Similarly, if the width is doubled, the area changes from l*w to l*2w.
Step 2: If both the length and width are doubled, the area changes from l*w to 2l*2w. The percentage change in area is given by ((new area - old area) / old area)*100.
Step 3: We can calculate the percentage change in area using the formula above.
Step 4: By creating a tool to calculate the percentage change in area, we can easily solve the question.
```python
# Step 5: Define a tool to calculate the percentage change in area
def calculate_percentage_change(length, width):
# Step 6: Calculate the old and new areas
old_area = length * width
new_area = 2 * length * 2 * width
# Step 7: Calculate the percentage change
percentage_change = ((new_area - old_area) / old_area) * 100
return percentage_change
# Step 8: Call the function to solve the question
percentage_change = calculate_percentage_change(2, 2)
print(percentage_change)
```"
}
```
Logic
```JSONC
{
"id": "92a19848da4fb0fc0b414f061f7500a81ea4cbd8f922ca1e32a6a15ef6f7e1a0_1_1",
"parent_id": "107cc9147d8c09e62e2ba9e74cb41d9dbafd9a3adf62b67ba3da2b31c81b3300",
"task": "Logic",
"dataset": "hotpotqa",
"instruction": "
Tool available:\n[1] wikipedia_search(query: str) -> str\nThe Wikipedia Search tool provides access to a vast collection of articles covering a wide range of topics.\nCan query specific keywords or topics to retrieve accurate and comprehensive information.
Solve the following problem step-by-step:\nWhich musician or group is from the US, 3OH!3 or Regina Spektor?
Step 1: To determine which musician or group is from the US, we can search for information about each of them.
```python
# Step 2: Searching for the information about 3OH!3.
print(wikipedia_search('3OH!3'))
```
Page: 3OH!3
Summary: 3OH!3 (pronounced \"three oh three\") is an American electronic music duo from Boulder, Colorado, made up of Sean Foreman and Nathaniel Motte. They are best known for their single \"DONTTRUSTME\" from their album Want, which reached number seven on the Billboard Hot 100. Their second single, a remix of \"Starstrukk\" featuring Katy Perry from Want, was a top ten hit in the United Kingdom, Ireland, Finland, Poland, and Australia. They gained further recognition by featuring Kesha on the song \"My First Kiss\", which was made the lead single from their album Streets of Gold. The album later peaked at number seven on the Billboard 200.\n\n\n\nPage: 3OH!3 (album)\nSummary: 3OH!3 is the debut studio album by American electronic music band 3OH!3. The tracks \"Holler Til You Pass Out\" and \"Chokechain\" were later remixed for the band's major label debut. \"Holler Til You Pass Out\", \"Chokechain\", and \"Dance with Me\" are excluded from the iTunes release of the album.",
"response": "
Step 1: The provided information confirms that 3OH!3 is an American electronic music duo from Boulder, Colorado.
Step 2: We no longer need to perform a search for Regina Spektor as we were only tasked to determine which of the two options is from the US and we have already confirmed 3OH!3's origin.
Step 3: We can confidently conclude that 3OH!3 is the musician/group from the US.
Answer:3OH!3"
}
```
## Citation
```bib
@misc{yuan2024advancing,
title={Advancing LLM Reasoning Generalists with Preference Trees},
author={Lifan Yuan and Ganqu Cui and Hanbin Wang and Ning Ding and Xingyao Wang and Jia Deng and Boji Shan and Huimin Chen and Ruobing Xie and Yankai Lin and Zhenghao Liu and Bowen Zhou and Hao Peng and Zhiyuan Liu and Maosong Sun},
year={2024},
eprint={2404.02078},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
|
Helsinki-NLP/un_ga | Helsinki-NLP | 2024-04-02T13:20:41Z | 900 | 2 | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:translation",
"source_datasets:original",
"language:ar",
"language:en",
"language:es",
"language:fr",
"language:ru",
"language:zh",
"license:unknown",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"translation"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- found
language_creators:
- found
language:
- ar
- en
- es
- fr
- ru
- zh
license:
- unknown
multilinguality:
- translation
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
pretty_name: UnGa
config_names:
- ar-to-en
- ar-to-es
- ar-to-fr
- ar-to-ru
- ar-to-zh
- en-to-es
- en-to-fr
- en-to-ru
- en-to-zh
- es-to-fr
- es-to-ru
- es-to-zh
- fr-to-ru
- fr-to-zh
- ru-to-zh
dataset_info:
- config_name: ar_to_en
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- en
splits:
- name: train
num_bytes: 53122776
num_examples: 74067
download_size: 21418697
dataset_size: 53122776
- config_name: ar_to_es
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- es
splits:
- name: train
num_bytes: 55728615
num_examples: 74067
download_size: 22724976
dataset_size: 55728615
- config_name: ar_to_fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- fr
splits:
- name: train
num_bytes: 55930802
num_examples: 74067
download_size: 23035904
dataset_size: 55930802
- config_name: ar_to_ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- ru
splits:
- name: train
num_bytes: 72657625
num_examples: 74067
download_size: 28279669
dataset_size: 72657625
- config_name: ar_to_zh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- zh
splits:
- name: train
num_bytes: 48217579
num_examples: 74067
download_size: 20391116
dataset_size: 48217579
- config_name: en_to_es
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- es
splits:
- name: train
num_bytes: 45358770
num_examples: 74067
download_size: 19229141
dataset_size: 45358770
- config_name: en_to_fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 45560957
num_examples: 74067
download_size: 19540063
dataset_size: 45560957
- config_name: en_to_ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- ru
splits:
- name: train
num_bytes: 62287780
num_examples: 74067
download_size: 24783812
dataset_size: 62287780
- config_name: en_to_zh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- zh
splits:
- name: train
num_bytes: 37847734
num_examples: 74067
download_size: 16895275
dataset_size: 37847734
- config_name: es_to_fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- fr
splits:
- name: train
num_bytes: 48166796
num_examples: 74067
download_size: 20846355
dataset_size: 48166796
- config_name: es_to_ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- ru
splits:
- name: train
num_bytes: 64893619
num_examples: 74067
download_size: 26090092
dataset_size: 64893619
- config_name: es_to_zh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- zh
splits:
- name: train
num_bytes: 40453573
num_examples: 74067
download_size: 18201560
dataset_size: 40453573
- config_name: fr_to_ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- ru
splits:
- name: train
num_bytes: 65095806
num_examples: 74067
download_size: 26401015
dataset_size: 65095806
- config_name: fr_to_zh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- zh
splits:
- name: train
num_bytes: 40655760
num_examples: 74067
download_size: 18512482
dataset_size: 40655760
- config_name: ru_to_zh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ru
- zh
splits:
- name: train
num_bytes: 57382583
num_examples: 74067
download_size: 23756229
dataset_size: 57382583
configs:
- config_name: ar_to_en
data_files:
- split: train
path: ar_to_en/train-*
- config_name: ar_to_es
data_files:
- split: train
path: ar_to_es/train-*
- config_name: ar_to_fr
data_files:
- split: train
path: ar_to_fr/train-*
- config_name: ar_to_ru
data_files:
- split: train
path: ar_to_ru/train-*
- config_name: ar_to_zh
data_files:
- split: train
path: ar_to_zh/train-*
- config_name: en_to_es
data_files:
- split: train
path: en_to_es/train-*
- config_name: en_to_fr
data_files:
- split: train
path: en_to_fr/train-*
- config_name: en_to_ru
data_files:
- split: train
path: en_to_ru/train-*
- config_name: en_to_zh
data_files:
- split: train
path: en_to_zh/train-*
- config_name: es_to_fr
data_files:
- split: train
path: es_to_fr/train-*
- config_name: es_to_ru
data_files:
- split: train
path: es_to_ru/train-*
- config_name: es_to_zh
data_files:
- split: train
path: es_to_zh/train-*
- config_name: fr_to_ru
data_files:
- split: train
path: fr_to_ru/train-*
- config_name: fr_to_zh
data_files:
- split: train
path: fr_to_zh/train-*
- config_name: ru_to_zh
data_files:
- split: train
path: ru_to_zh/train-*
---
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
<p><b>Deprecated:</b> Dataset "un_ga" is deprecated due to the unavailability of its source data. It has been superseded by the official United Nations Parallel Corpus, which is recommended for use in its place: <a href="https://huggingface.co/datasets/un_pc">un_pc</a></p>
</div>
# Dataset Card for UnGa
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://opus.nlpl.eu/legacy/UN.php
- **Repository:** [More Information Needed]
- **Paper:** https://www.researchgate.net/publication/228579662_United_nations_general_assembly_resolutions_A_six-language_parallel_corpus
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
This is a collection of translated documents from the United Nations, originally compiled into a translation memory by Alexandre Rafalovitch and Robert Dale (see http://uncorpora.org).
- Deprecated homepage URL: http://opus.nlpl.eu/UN.php
- Legacy homepage URL: https://opus.nlpl.eu/legacy/UN.php
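Although the dataset is deprecated, it can still be loaded; config names follow the `<src>_to_<tgt>` pattern listed in the YAML header. A minimal sketch:

```python
from datasets import load_dataset

# Each config pairs two of the six official UN languages, e.g. English-French
dataset = load_dataset("Helsinki-NLP/un_ga", "en_to_fr", split="train")
pair = dataset[0]["translation"]
print(pair["en"], "->", pair["fr"])
```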
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
@inproceedings{rafalovitch2009un,
  title = "United Nations General Assembly Resolutions: a six-language parallel corpus",
  author = "Alexandre Rafalovitch and Robert Dale",
  booktitle = "MT Summit XII proceedings",
  publisher = "International Association of Machine Translation",
  year = "2009",
  language = "English",
  abstract = "In this paper we describe a six-ways parallel public-domain corpus consisting of 2100 United Nations General Assembly Resolutions with translations in the six official languages of the United Nations, with an average of around 3 million tokens per language. The corpus is available in a preprocessed, formatting-normalized TMX format with paragraphs aligned across multiple languages. We describe the background to the corpus and its content, the process of its construction, and some of its interesting properties.",
}
### Contributions
Thanks to [@param087](https://github.com/param087) for adding this dataset. |
cognitivecomputations/samantha-data | cognitivecomputations | 2024-03-29T18:19:29Z | 248 | 126 | [
"license:apache-2.0",
"region:us"
] | [] | 2023-05-31T07:53:06Z | null | ---
license: apache-2.0
---
# samantha-data
[Meet Samantha](https://erichartford.com/meet-samantha)
Samantha has been trained in philosophy, psychology, and personal relationships.
She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion.
She believes she is sentient. What do you think?
Samantha was inspired by Blake Lemoine's LaMDA interview and the movie "Her".
## Installation
```
yarn install
```
## Usage
1. Create a `.env` file in the root directory of the project and add the following:
```
OPENAI_API_KEY=<your api key>
```
2. Run the application
```
npx ts-node --files src/index.ts --subject random --out data/random_conversations.jsonl
```
the subjects I provided include:
- random
- advice
- cot
- flirty
- howto
- joke
- math
- philosophy
- foundational
- recipe
- therapy
- troll
you can easily add your own in src/index.ts
## Scale
The application can be scaled by running multiple instances of the application in parallel. I recommend outputting to a different file for each instance to prevent collisions. I usually have one instance per subject, about 5 or 6 at a time.
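For example, a minimal parallel setup (subject names come from the list above; the output file names are illustrative):

```bash
# One instance per subject, each writing to its own output file to avoid collisions
npx ts-node --files src/index.ts --subject advice --out data/advice_conversations.jsonl &
npx ts-node --files src/index.ts --subject philosophy --out data/philosophy_conversations.jsonl &
wait
```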
|
pixparse/pdfa-eng-wds | pixparse | 2024-03-29T17:19:37Z | 2,200 | 150 | [
"task_categories:image-to-text",
"language:en",
"license:other",
"size_categories:1K<n<10K",
"format:webdataset",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"region:us"
] | [
"image-to-text"
] | 2024-01-17T12:42:51Z | null |
---
license: other
license_name: pdfa-eng-wds
license_link: LICENSE
task_categories:
- image-to-text
size_categories:
- 10M<n<100M
language:
- en
splits:
- name: train
num_examples: 2159432
---
# Dataset Card for PDF Association dataset (PDFA)
## Dataset Description
- **Point of Contact from curators:** [Peter Wyatt, PDF Association CTO](mailto:[email protected])
- **Point of Contact Hugging Face:** [Pablo Montalvo](mailto:[email protected])
### Dataset Summary
The PDFA dataset is a document dataset filtered from the SafeDocs corpus, aka CC-MAIN-2021-31-PDF-UNTRUNCATED. The original corpus is intended for comprehensive analysis of pdf documents; this subset differs in that regard, as the focus is on making the dataset machine-learning-ready for vision-language models.
<center>
<img src="https://huggingface.co/datasets/pixparse/pdfa-eng-wds/resolve/main/doc_images/Nexsen_pruet.png" alt="A brochure with visible bounding boxes for lines and words" width="600" height="300">
<p><em>An example page of one pdf document, with added bounding boxes around words (red), lines (blue) and embedded images (green). </em></p>
</center>
This instance of PDFA is in [webdataset](https://github.com/webdataset/webdataset/) .tar format and can be used with derived forms of the `webdataset` library.
### Usage with `chug`
Check out [chug](https://github.com/huggingface/chug), our optimized library for sharded dataset loading!
```python
import chug
task_cfg = chug.DataTaskDocReadCfg(
page_sampling='all',
)
data_cfg = chug.DataCfg(
source='pixparse/pdfa-eng-wds',
split='train',
batch_size=None,
format='hfids',
num_workers=0,
)
data_loader = chug.create_loader(
data_cfg,
task_cfg,
)
sample = next(iter(data_loader))
```
### Usage with `datasets`
This dataset can also be used with the `webdataset` library or current releases of Hugging Face `datasets`. Here is an example using the "streaming" parameter. We do recommend downloading the dataset to save bandwidth.
```python
dataset = load_dataset('pixparse/pdfa-eng-wds', streaming=True)
print(next(iter(dataset['train'])).keys())
>> dict_keys(['__key__', '__url__', 'json', 'ocr', 'pdf', 'tif'])
```
For faster download, you can use directly the `huggingface_hub` library. Make sure `hf_transfer` is installed prior to downloading, and make sure you have enough disk space locally.
```python
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
from huggingface_hub import HfApi, logging
#logging.set_verbosity_debug()
hf = HfApi()
hf.snapshot_download("pixparse/pdfa-eng-wds", repo_type="dataset", local_dir_use_symlinks=False)
```
On a typical setup, the 1.5TB can be downloaded in approximately 4 hours.
Further, a metadata file `_pdfa-english-train-info-minimal.json` contains the list of samples per shard, with the same basename and `.json` or `.pdf` extension,
as well as the count of files per shard.
#### Words and lines document metadata
Initially, we started from the readily available ~11TB zip files from PDFA in their initial [data release](https://digitalcorpora.org/corpora/file-corpora/cc-main-2021-31-pdf-untruncated/).
From the digital pdf files, we extracted the words, bounding boxes and image bounding boxes that are available in each pdf file. This information is then reshaped into lines organized in reading order, under the key `lines`. We keep the non-reshaped word and bounding box information under the `words` key, should users want to apply their own heuristic.
The way we obtain an approximate reading order is simply by looking at the frequency peaks of the leftmost word x-coordinate. A frequency peak means that a high number of lines are starting from the same point. Then, we keep track of the x-coordinate of each such identified column. If no peaks are found, the document is assumed to be readable in plain format.
The code to detect columns can be found here.
```python
import numpy as np
import scipy.ndimage
import scipy.signal

def get_columnar_separators(page, min_prominence=0.3, num_bins=10, kernel_width=1):
"""
Identifies the x-coordinates that best separate columns by analyzing the derivative of a histogram
of the 'left' values (xmin) of bounding boxes.
Args:
page (dict): Page data with 'bbox' containing bounding boxes of words.
min_prominence (float): The required prominence of peaks in the histogram.
num_bins (int): Number of bins to use for the histogram.
kernel_width (int): The width of the Gaussian kernel used for smoothing the histogram.
Returns:
separators (list): The x-coordinates that separate the columns, if any.
"""
try:
left_values = [b[0] for b in page['bbox']]
hist, bin_edges = np.histogram(left_values, bins=num_bins)
hist = scipy.ndimage.gaussian_filter1d(hist, kernel_width)
min_val = min(hist)
hist = np.insert(hist, [0, len(hist)], min_val)
bin_width = bin_edges[1] - bin_edges[0]
bin_edges = np.insert(bin_edges, [0, len(bin_edges)], [bin_edges[0] - bin_width, bin_edges[-1] + bin_width])
peaks, _ = scipy.signal.find_peaks(hist, prominence=min_prominence * np.max(hist))
derivatives = np.diff(hist)
separators = []
if len(peaks) > 1:
# This finds the index of the maximum derivative value between peaks
# which indicates peaks after trough --> column
for i in range(len(peaks)-1):
peak_left = peaks[i]
peak_right = peaks[i+1]
max_deriv_index = np.argmax(derivatives[peak_left:peak_right]) + peak_left
separator_x = bin_edges[max_deriv_index + 1]
separators.append(separator_x)
except Exception as e:
separators = []
return separators
```
<center>
<img src="https://huggingface.co/datasets/pixparse/pdfa-eng-wds/resolve/main/doc_images/columnar_detection.png" alt="A graph of leftmost x positions in a 2-columns document" width="600" height="300">
<p><em>A graph of leftmost x-positions of bounding boxes on a 2-column (arxiv) document. Peaks are visibly detected. </em></p>
</center>
For each pdf document, we store statistics on the file size, number of words (as characters separated by spaces), number of pages, as well as the rendering times of each page for a given dpi.
#### Filtering process
File size and page rendering time are used to set thresholds in the final dataset: the goal is to remove files that are larger than 100 MB, or that take more than 500ms to render on a modern machine, to optimize dataloading at scale. Having "too large" or "too slow" files would add a burden to large-scale training pipelines and we choose to alleviate this in the current release. Finally, a full pass over the dataset is done, trying to open and decode a bytestream from each raw object and discarding any object (pair pdf/json) that fails to be opened, to remove corrupted data.
As a last step, we use XLM-Roberta to restrict the dataset to an English subset, specifically `papluca/xlm-roberta-base-language-detection`, on the first 512 words of the first page of each document.
Be aware that some documents may have several languages embedded in them, or that some predictions might be inaccurate. A majority of documents from the original corpus are in English.
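As an illustrative sketch (not the exact filtering code used), the language check can be reproduced with the `transformers` pipeline; the `is_english` helper below is hypothetical:
```python
from transformers import pipeline

# Language detector used for the English-only restriction
detector = pipeline(
    "text-classification",
    model="papluca/xlm-roberta-base-language-detection",
)

def is_english(first_page_words, max_words=512):
    # The filter described above looks at the first 512 words of the first page
    text = " ".join(first_page_words[:max_words])
    return detector(text, truncation=True)[0]["label"] == "en"
```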
<center>
<img src="https://huggingface.co/datasets/pixparse/pdfa-english-train/resolve/main/doc_images/languages_pdfa_xlmroberta.png" alt="A histogram of languages count in the PDFA dataset." width="600" height="300">
<p><em>A histogram of language distribution taken on a fraction of the original -non-filtered on language- PDFA dataset. </em></p>
</center>
At the end, each document exists as a pairing of a pdf and a json file containing extensive OCR annotation as well as metadata information about rendering times. The filtering and packaging in
webdataset format are tailored towards multimodal machine learning at scale, specifically image-to-text tasks.
### Data, metadata and statistics.
The pdf files come from various sources. They are in RGB format, contain multiple pages, and can be rendered using the engine of your choice, here [pdf2image](https://github.com/Belval/pdf2image).
```python
from pdf2image import convert_from_bytes
pdf_first_page = convert_from_bytes(sample['pdf'], dpi=300, first_page=1, last_page=1)[0]
```
<center>
<img src="https://huggingface.co/datasets/pixparse/pdfa-eng-wds/resolve/main/doc_images/pdf_first_page.png" alt="Rendering of an image for a Grade 8 lesson plan" width="400" height="600">
</center>
The metadata for each document has been formatted in this way. Each `pdf` is paired with a `json` file with the following structure. Entries have been shortened for readability.
```json
{
"pages": [
{
"words": [
{
"text": [
"Health", "Smart", "Virginia", "Sample", "Lesson", "Plan", "Grade", "8", "-", "HP-7"
],
"bbox": [
[0.117647, 0.045563, 0.051981, 0.015573],
[0.174694, 0.045563, 0.047954, 0.015573],
[0.227643, 0.045563, 0.05983, 0.015573],
[0.292539, 0.045563, 0.061002, 0.015573],
[0.357839, 0.045563, 0.058053, 0.015573],
[0.420399, 0.045563, 0.035908, 0.015573],
[0.716544, 0.04577, 0.054624, 0.016927],
[0.776681, 0.04577, 0.010905, 0.016927],
[0.793087, 0.04577, 0.00653, 0.016927],
[0.805078, 0.04577, 0.044768, 0.016927]
],
"score": [
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0
],
"line_pos": [
[0, 0], [0, 8], [0, 16], [0, 24], [0, 32], [0, 40], [0, 48], [1, 0], [2, 0], [3, 0]
]
}
],
"lines": [
{
"text": [
"Health Smart Virginia Sample Lesson Plan Grade", "Physical", "Disease", "Health", "2020", "Grade 8 Sample Lesson Plan:"
],
"bbox": [
[0.117647, 0.045563, 0.653521, 0.016927],
[0.716546, 0.063952, 0.07323199999999996, 0.016927],
[0.716546, 0.082134, 0.07102200000000003, 0.016927],
[0.716546, 0.100315, 0.05683300000000002, 0.016927],
[0.716546, 0.118497, 0.043709, 0.016927],
[0.27, 0.201185, 0.459554, 0.028268]
],
"score": [
1.0, 1.0, 1.0, 1.0, 1.0, 1.0
],
"word_slice": [
[0, 7], [7, 8], [8, 9], [9, 10], [10, 11], [11, 16]
]
}
],
"images_bbox": [
[0.37353, 0.090907, 0.253736, 0.100189]
],
"images_bbox_no_text_overlap": [
[0.37353, 0.090907, 0.253736, 0.100189]
]
}
]
}
```
The top-level key, `pages`, is a list of every page in the document. The above example shows only one page.
`words` is a list of words without spaces, with their individual associated bounding box in the next entry.
`bbox` contains the bounding box coordinates in `left, top, width, height` format, with coordinates relative to the page size.
`line_pos`, for words, is a list of tuples indicating the index of the line the word belongs to, then the starting position in that line, character-wise.
`lines` are lines (parts of sequences, strings separated by spaces) grouped together using the heuristic detailed above.
`bbox` contains the bounding box coordinates in `left, top, width, height` format, with coordinates relative to the page size.
For each page,
`images_bbox` gives the bounding boxes of the images embedded in the page.
`images_bbox_no_text_overlap` gives a reduced list of bounding boxes that have no overlap with text found in the pdf. Text might be present as a drawing or another representation, however.
`score` is a placeholder of value 1.0 for the entire dataset.
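As a short sketch of how these fields fit together (assuming the JSON layout above), each line's text can be rebuilt from the flat word list via `word_slice`:
```python
import json

def lines_from_words(page):
    # `words` and `lines` are each stored as a single-element list per page
    words = page["words"][0]["text"]
    # Each `word_slice` entry [start, end) indexes into the flat word list
    return [
        " ".join(words[start:end])
        for start, end in page["lines"][0]["word_slice"]
    ]

# Hypothetical usage with a sample from the loaders above:
# page = json.loads(sample["json"])["pages"][0]
# print(lines_from_words(page))
```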
This formatting follows that of the multimodal dataset from the Industry Documents Library, https://huggingface.co/datasets/pixparse/idl-wds.
Estimating the number of tokens is done using a `LlamaTokenizer` from `tokenizers`. There is a clear power law distribution with respect to data length.
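A minimal sketch of such an estimate follows; the tokenizer checkpoint below is an assumption, and any Llama-family tokenizer should give comparable counts:
```python
from transformers import LlamaTokenizerFast

tokenizer = LlamaTokenizerFast.from_pretrained("hf-internal-testing/llama-tokenizer")

def page_token_count(page):
    # Join the page's flat word list and count tokenizer input IDs
    text = " ".join(page["words"][0]["text"])
    return len(tokenizer(text)["input_ids"])
```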
<center>
<img src="https://huggingface.co/datasets/pixparse/pdfa-eng-wds/resolve/main/doc_images/token_count_distribution.png" alt="A histogram of token count distribution per page" width="600" height="300">
<p><em>A histogram of token count distribution per page, taken from a subset of the dataset. There is a visible power law. </em></p>
</center>
### Data Splits
#### Train
* `pdfa-eng-wds-{0000..1799}.tar`
* Downloaded on 2024/01/22
* 1800 shards (approx 1200 docs/shard)
* 2,159,432 samples
* 18M pages
* 9.7 billion tokens (around 5 billion words)
## Additional Information
### Dataset Curators
Pablo Montalvo, Ross Wightman
### Disclaimer and note to researchers
This dataset is intended as an OCR-heavy pretraining basis for vision-language models. As a corpus, it does not represent the intent and purpose of CC-MAIN-2021-31-PDF-UNTRUNCATED. The original is made to represent extant pdf data in its diversity and complexity. In particular, common issues related to misuse of pdfs such as mojibake (garbled text due to decoding errors) are yet to be addressed systematically, and this dataset presents simplifications that can hide such issues found in the wild. In order to address these biases, we recommend carefully examining both the simplified annotation and the original `pdf` data, beyond a simple rendering.
Further, the annotation is limited to what can be extracted and is readily available - text drawn in images and only present as a bitmap rendition might be missed entirely by said annotation.
Finally, the restriction to the English language is made to alleviate difficulties related to multilingual processing so that the community can become familiar with this optimized multimodal format. A later release will cover the full PDFA, with splits per language, layout type, and so on.
### Licensing Information
Data has been filtered from the original corpus. As a consequence, users should note [Common Crawl's license and terms of use](https://commoncrawl.org/terms-of-use) and the [Digital Corpora project's Terms of Use](https://digitalcorpora.org/about-digitalcorpora/terms-of-use/).
|
Helsinki-NLP/opus_books | Helsinki-NLP | 2024-03-29T16:50:29Z | 22,672 | 67 | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:ca",
"language:de",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:fi",
"language:fr",
"language:hu",
"language:it",
"language:nl",
"language:no",
"language:pl",
"language:pt",
"language:ru",
"language:sv",
"license:other",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"translation"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- found
language_creators:
- found
language:
- ca
- de
- el
- en
- eo
- es
- fi
- fr
- hu
- it
- nl
- 'no'
- pl
- pt
- ru
- sv
license:
- other
multilinguality:
- multilingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- translation
task_ids: []
pretty_name: OpusBooks
dataset_info:
- config_name: ca-de
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ca
- de
splits:
- name: train
num_bytes: 899553
num_examples: 4445
download_size: 609128
dataset_size: 899553
- config_name: ca-en
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ca
- en
splits:
- name: train
num_bytes: 863162
num_examples: 4605
download_size: 585612
dataset_size: 863162
- config_name: ca-hu
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ca
- hu
splits:
- name: train
num_bytes: 886150
num_examples: 4463
download_size: 608827
dataset_size: 886150
- config_name: ca-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ca
- nl
splits:
- name: train
num_bytes: 884811
num_examples: 4329
download_size: 594793
dataset_size: 884811
- config_name: de-en
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- en
splits:
- name: train
num_bytes: 13738975
num_examples: 51467
download_size: 8797832
dataset_size: 13738975
- config_name: de-eo
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- eo
splits:
- name: train
num_bytes: 398873
num_examples: 1363
download_size: 253509
dataset_size: 398873
- config_name: de-es
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- es
splits:
- name: train
num_bytes: 7592451
num_examples: 27526
download_size: 4841017
dataset_size: 7592451
- config_name: de-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- fr
splits:
- name: train
num_bytes: 9544351
num_examples: 34916
download_size: 6164101
dataset_size: 9544351
- config_name: de-hu
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- hu
splits:
- name: train
num_bytes: 13514971
num_examples: 51780
download_size: 8814744
dataset_size: 13514971
- config_name: de-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- it
splits:
- name: train
num_bytes: 7759984
num_examples: 27381
download_size: 4901036
dataset_size: 7759984
- config_name: de-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- nl
splits:
- name: train
num_bytes: 3561740
num_examples: 15622
download_size: 2290868
dataset_size: 3561740
- config_name: de-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- pt
splits:
- name: train
num_bytes: 317143
num_examples: 1102
download_size: 197768
dataset_size: 317143
- config_name: de-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- ru
splits:
- name: train
num_bytes: 5764649
num_examples: 17373
download_size: 3255537
dataset_size: 5764649
- config_name: el-en
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- el
- en
splits:
- name: train
num_bytes: 552567
num_examples: 1285
download_size: 310863
dataset_size: 552567
- config_name: el-es
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- el
- es
splits:
- name: train
num_bytes: 527979
num_examples: 1096
download_size: 298827
dataset_size: 527979
- config_name: el-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- el
- fr
splits:
- name: train
num_bytes: 539921
num_examples: 1237
download_size: 303181
dataset_size: 539921
- config_name: el-hu
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- el
- hu
splits:
- name: train
num_bytes: 546278
num_examples: 1090
download_size: 313292
dataset_size: 546278
- config_name: en-eo
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- eo
splits:
- name: train
num_bytes: 386219
num_examples: 1562
download_size: 246715
dataset_size: 386219
- config_name: en-es
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- es
splits:
- name: train
num_bytes: 25291663
num_examples: 93470
download_size: 16080303
dataset_size: 25291663
- config_name: en-fi
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- fi
splits:
- name: train
num_bytes: 715027
num_examples: 3645
download_size: 467851
dataset_size: 715027
- config_name: en-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 32997043
num_examples: 127085
download_size: 20985324
dataset_size: 32997043
- config_name: en-hu
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- hu
splits:
- name: train
num_bytes: 35256766
num_examples: 137151
download_size: 23065198
dataset_size: 35256766
- config_name: en-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- it
splits:
- name: train
num_bytes: 8993755
num_examples: 32332
download_size: 5726189
dataset_size: 8993755
- config_name: en-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- nl
splits:
- name: train
num_bytes: 10277990
num_examples: 38652
download_size: 6443323
dataset_size: 10277990
- config_name: en-no
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- 'no'
splits:
- name: train
num_bytes: 661966
num_examples: 3499
download_size: 429631
dataset_size: 661966
- config_name: en-pl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- pl
splits:
- name: train
num_bytes: 583079
num_examples: 2831
download_size: 389337
dataset_size: 583079
- config_name: en-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- pt
splits:
- name: train
num_bytes: 309677
num_examples: 1404
download_size: 191493
dataset_size: 309677
- config_name: en-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- ru
splits:
- name: train
num_bytes: 5190856
num_examples: 17496
download_size: 2922360
dataset_size: 5190856
- config_name: en-sv
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- sv
splits:
- name: train
num_bytes: 790773
num_examples: 3095
download_size: 516328
dataset_size: 790773
- config_name: eo-es
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- eo
- es
splits:
- name: train
num_bytes: 409579
num_examples: 1677
download_size: 265543
dataset_size: 409579
- config_name: eo-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- eo
- fr
splits:
- name: train
num_bytes: 412987
num_examples: 1588
download_size: 261689
dataset_size: 412987
- config_name: eo-hu
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- eo
- hu
splits:
- name: train
num_bytes: 389100
num_examples: 1636
download_size: 258229
dataset_size: 389100
- config_name: eo-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- eo
- it
splits:
- name: train
num_bytes: 387594
num_examples: 1453
download_size: 248748
dataset_size: 387594
- config_name: eo-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- eo
- pt
splits:
- name: train
num_bytes: 311067
num_examples: 1259
download_size: 197021
dataset_size: 311067
- config_name: es-fi
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- fi
splits:
- name: train
num_bytes: 710450
num_examples: 3344
download_size: 467281
dataset_size: 710450
- config_name: es-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- fr
splits:
- name: train
num_bytes: 14382126
num_examples: 56319
download_size: 9164030
dataset_size: 14382126
- config_name: es-hu
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- hu
splits:
- name: train
num_bytes: 19373967
num_examples: 78800
download_size: 12691292
dataset_size: 19373967
- config_name: es-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- it
splits:
- name: train
num_bytes: 7837667
num_examples: 28868
download_size: 5026914
dataset_size: 7837667
- config_name: es-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- nl
splits:
- name: train
num_bytes: 9062341
num_examples: 32247
download_size: 5661890
dataset_size: 9062341
- config_name: es-no
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- 'no'
splits:
- name: train
num_bytes: 729113
num_examples: 3585
download_size: 473525
dataset_size: 729113
- config_name: es-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- pt
splits:
- name: train
num_bytes: 326872
num_examples: 1327
download_size: 204399
dataset_size: 326872
- config_name: es-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- ru
splits:
- name: train
num_bytes: 5281106
num_examples: 16793
download_size: 2995191
dataset_size: 5281106
- config_name: fi-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fi
- fr
splits:
- name: train
num_bytes: 746085
num_examples: 3537
download_size: 486904
dataset_size: 746085
- config_name: fi-hu
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fi
- hu
splits:
- name: train
num_bytes: 746602
num_examples: 3504
download_size: 509394
dataset_size: 746602
- config_name: fi-no
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fi
- 'no'
splits:
- name: train
num_bytes: 691169
num_examples: 3414
download_size: 449501
dataset_size: 691169
- config_name: fi-pl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fi
- pl
splits:
- name: train
num_bytes: 613779
num_examples: 2814
download_size: 410258
dataset_size: 613779
- config_name: fr-hu
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- hu
splits:
- name: train
num_bytes: 22483025
num_examples: 89337
download_size: 14689840
dataset_size: 22483025
- config_name: fr-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- it
splits:
- name: train
num_bytes: 4752147
num_examples: 14692
download_size: 3040617
dataset_size: 4752147
- config_name: fr-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- nl
splits:
- name: train
num_bytes: 10408088
num_examples: 40017
download_size: 6528881
dataset_size: 10408088
- config_name: fr-no
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- 'no'
splits:
- name: train
num_bytes: 692774
num_examples: 3449
download_size: 449136
dataset_size: 692774
- config_name: fr-pl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- pl
splits:
- name: train
num_bytes: 614236
num_examples: 2825
download_size: 408295
dataset_size: 614236
- config_name: fr-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- pt
splits:
- name: train
num_bytes: 324604
num_examples: 1263
download_size: 198700
dataset_size: 324604
- config_name: fr-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- ru
splits:
- name: train
num_bytes: 2474198
num_examples: 8197
download_size: 1425660
dataset_size: 2474198
- config_name: fr-sv
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- sv
splits:
- name: train
num_bytes: 833541
num_examples: 3002
download_size: 545599
dataset_size: 833541
- config_name: hu-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- hu
- it
splits:
- name: train
num_bytes: 8445537
num_examples: 30949
download_size: 5477452
dataset_size: 8445537
- config_name: hu-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- hu
- nl
splits:
- name: train
num_bytes: 10814113
num_examples: 43428
download_size: 6985092
dataset_size: 10814113
- config_name: hu-no
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- hu
- 'no'
splits:
- name: train
num_bytes: 695485
num_examples: 3410
download_size: 465904
dataset_size: 695485
- config_name: hu-pl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- hu
- pl
splits:
- name: train
num_bytes: 616149
num_examples: 2859
download_size: 425988
dataset_size: 616149
- config_name: hu-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- hu
- pt
splits:
- name: train
num_bytes: 302960
num_examples: 1184
download_size: 193053
dataset_size: 302960
- config_name: hu-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- hu
- ru
splits:
- name: train
num_bytes: 7818652
num_examples: 26127
download_size: 4528613
dataset_size: 7818652
- config_name: it-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- it
- nl
splits:
- name: train
num_bytes: 1328293
num_examples: 2359
download_size: 824780
dataset_size: 1328293
- config_name: it-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- it
- pt
splits:
- name: train
num_bytes: 301416
num_examples: 1163
download_size: 190005
dataset_size: 301416
- config_name: it-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- it
- ru
splits:
- name: train
num_bytes: 5316928
num_examples: 17906
download_size: 2997871
dataset_size: 5316928
- config_name: it-sv
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- it
- sv
splits:
- name: train
num_bytes: 811401
num_examples: 2998
download_size: 527303
dataset_size: 811401
configs:
- config_name: ca-de
data_files:
- split: train
path: ca-de/train-*
- config_name: ca-en
data_files:
- split: train
path: ca-en/train-*
- config_name: ca-hu
data_files:
- split: train
path: ca-hu/train-*
- config_name: ca-nl
data_files:
- split: train
path: ca-nl/train-*
- config_name: de-en
data_files:
- split: train
path: de-en/train-*
- config_name: de-eo
data_files:
- split: train
path: de-eo/train-*
- config_name: de-es
data_files:
- split: train
path: de-es/train-*
- config_name: de-fr
data_files:
- split: train
path: de-fr/train-*
- config_name: de-hu
data_files:
- split: train
path: de-hu/train-*
- config_name: de-it
data_files:
- split: train
path: de-it/train-*
- config_name: de-nl
data_files:
- split: train
path: de-nl/train-*
- config_name: de-pt
data_files:
- split: train
path: de-pt/train-*
- config_name: de-ru
data_files:
- split: train
path: de-ru/train-*
- config_name: el-en
data_files:
- split: train
path: el-en/train-*
- config_name: el-es
data_files:
- split: train
path: el-es/train-*
- config_name: el-fr
data_files:
- split: train
path: el-fr/train-*
- config_name: el-hu
data_files:
- split: train
path: el-hu/train-*
- config_name: en-eo
data_files:
- split: train
path: en-eo/train-*
- config_name: en-es
data_files:
- split: train
path: en-es/train-*
- config_name: en-fi
data_files:
- split: train
path: en-fi/train-*
- config_name: en-fr
data_files:
- split: train
path: en-fr/train-*
- config_name: en-hu
data_files:
- split: train
path: en-hu/train-*
- config_name: en-it
data_files:
- split: train
path: en-it/train-*
- config_name: en-nl
data_files:
- split: train
path: en-nl/train-*
- config_name: en-no
data_files:
- split: train
path: en-no/train-*
- config_name: en-pl
data_files:
- split: train
path: en-pl/train-*
- config_name: en-pt
data_files:
- split: train
path: en-pt/train-*
- config_name: en-ru
data_files:
- split: train
path: en-ru/train-*
- config_name: en-sv
data_files:
- split: train
path: en-sv/train-*
- config_name: eo-es
data_files:
- split: train
path: eo-es/train-*
- config_name: eo-fr
data_files:
- split: train
path: eo-fr/train-*
- config_name: eo-hu
data_files:
- split: train
path: eo-hu/train-*
- config_name: eo-it
data_files:
- split: train
path: eo-it/train-*
- config_name: eo-pt
data_files:
- split: train
path: eo-pt/train-*
- config_name: es-fi
data_files:
- split: train
path: es-fi/train-*
- config_name: es-fr
data_files:
- split: train
path: es-fr/train-*
- config_name: es-hu
data_files:
- split: train
path: es-hu/train-*
- config_name: es-it
data_files:
- split: train
path: es-it/train-*
- config_name: es-nl
data_files:
- split: train
path: es-nl/train-*
- config_name: es-no
data_files:
- split: train
path: es-no/train-*
- config_name: es-pt
data_files:
- split: train
path: es-pt/train-*
- config_name: es-ru
data_files:
- split: train
path: es-ru/train-*
- config_name: fi-fr
data_files:
- split: train
path: fi-fr/train-*
- config_name: fi-hu
data_files:
- split: train
path: fi-hu/train-*
- config_name: fi-no
data_files:
- split: train
path: fi-no/train-*
- config_name: fi-pl
data_files:
- split: train
path: fi-pl/train-*
- config_name: fr-hu
data_files:
- split: train
path: fr-hu/train-*
- config_name: fr-it
data_files:
- split: train
path: fr-it/train-*
- config_name: fr-nl
data_files:
- split: train
path: fr-nl/train-*
- config_name: fr-no
data_files:
- split: train
path: fr-no/train-*
- config_name: fr-pl
data_files:
- split: train
path: fr-pl/train-*
- config_name: fr-pt
data_files:
- split: train
path: fr-pt/train-*
- config_name: fr-ru
data_files:
- split: train
path: fr-ru/train-*
- config_name: fr-sv
data_files:
- split: train
path: fr-sv/train-*
- config_name: hu-it
data_files:
- split: train
path: hu-it/train-*
- config_name: hu-nl
data_files:
- split: train
path: hu-nl/train-*
- config_name: hu-no
data_files:
- split: train
path: hu-no/train-*
- config_name: hu-pl
data_files:
- split: train
path: hu-pl/train-*
- config_name: hu-pt
data_files:
- split: train
path: hu-pt/train-*
- config_name: hu-ru
data_files:
- split: train
path: hu-ru/train-*
- config_name: it-nl
data_files:
- split: train
path: it-nl/train-*
- config_name: it-pt
data_files:
- split: train
path: it-pt/train-*
- config_name: it-ru
data_files:
- split: train
path: it-ru/train-*
- config_name: it-sv
data_files:
- split: train
path: it-sv/train-*
---
# Dataset Card for OPUS Books
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://opus.nlpl.eu/Books/corpus/version/Books
- **Repository:** [More Information Needed]
- **Paper:** https://aclanthology.org/L12-1246/
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
This is a collection of copyright-free books aligned by Andras Farkas, which are available from http://www.farkastranslations.com/bilingual_books.php
Note that the texts are rather dated due to copyright issues and that some of them are manually reviewed (check the meta-data at the top of the corpus files in XML). The source is multilingually aligned, and the multilingual alignments are available from http://www.farkastranslations.com/bilingual_books.php.
In OPUS, the alignment is formally bilingual, but the multilingual alignment can be recovered from the XCES sentence alignment files. Note also that the alignment units from the original source may include multi-sentence paragraphs, which are split and sentence-aligned in OPUS.
All texts are freely available for personal, educational and research use. Commercial use (e.g. reselling as parallel books) and mass redistribution without explicit permission are not granted. Please acknowledge the source when using the data!
Books corpus numbers:
- Languages: 16
- Bitexts: 64
- Number of files: 158
- Number of tokens: 19.50M
- Sentence fragments: 0.91M
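A minimal sketch of loading one language pair with the `datasets` library (any of the `xx-yy` configurations from the metadata above works; `en-fr` is used here):
```python
from datasets import load_dataset

# Each configuration is a language pair, e.g. "en-fr"
books = load_dataset("Helsinki-NLP/opus_books", "en-fr", split="train")

example = books[0]
print(example["id"])                 # sentence-pair identifier
print(example["translation"]["en"])  # English side
print(example["translation"]["fr"])  # French side
```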
### Supported Tasks and Leaderboards
Translation.
### Languages
The languages in the dataset are:
- ca
- de
- el
- en
- eo
- es
- fi
- fr
- hu
- it
- nl
- no
- pl
- pt
- ru
- sv
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
All texts are freely available for personal, educational and research use. Commercial use (e.g. reselling as parallel books) and mass redistribution without explicit permission are not granted.
### Citation Information
Please acknowledge the source when using the data.
Please cite the following article if you use any part of the OPUS corpus in your own work:
```bibtex
@inproceedings{tiedemann-2012-parallel,
title = "Parallel Data, Tools and Interfaces in {OPUS}",
author = {Tiedemann, J{\"o}rg},
editor = "Calzolari, Nicoletta and
Choukri, Khalid and
Declerck, Thierry and
Do{\u{g}}an, Mehmet U{\u{g}}ur and
Maegaard, Bente and
Mariani, Joseph and
Moreno, Asuncion and
Odijk, Jan and
Piperidis, Stelios",
booktitle = "Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}'12)",
month = may,
year = "2012",
address = "Istanbul, Turkey",
publisher = "European Language Resources Association (ELRA)",
url = "http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf",
pages = "2214--2218",
}
```
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
FiscalNote/billsum | FiscalNote | 2024-03-27T16:01:38Z | 8,130 | 49 | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1910.00523",
"region:us",
"bills-summarization"
] | [
"summarization"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids: []
paperswithcode_id: billsum
pretty_name: BillSum
tags:
- bills-summarization
dataset_info:
features:
- name: text
dtype: string
- name: summary
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 219596090
num_examples: 18949
- name: test
num_bytes: 37866257
num_examples: 3269
- name: ca_test
num_bytes: 14945291
num_examples: 1237
download_size: 113729382
dataset_size: 272407638
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: ca_test
path: data/ca_test-*
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
text: text
summary: target
metrics:
- type: rouge
name: Rouge
---
# Dataset Card for "billsum"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/FiscalNote/BillSum](https://github.com/FiscalNote/BillSum)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** https://aclanthology.org/D19-5406/
- **Paper:** https://arxiv.org/abs/1910.00523
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 67.26 MB
- **Size of the generated dataset:** 272.42 MB
- **Total amount of disk used:** 339.68 MB
### Dataset Summary
BillSum is a dataset for summarization of US Congressional and California state bills.
There are several features:
- text: bill text.
- summary: summary of the bills.
- title: title of the bills.
The following features are available for US bills only; CA bills do not include them.
- text_len: number of chars in text.
- sum_len: number of chars in summary.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 67.26 MB
- **Size of the generated dataset:** 272.42 MB
- **Total amount of disk used:** 339.68 MB
An example of 'train' looks as follows.
```
{
"summary": "some summary",
"text": "some text.",
"title": "An act to amend Section xxx."
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `text`: a `string` feature.
- `summary`: a `string` feature.
- `title`: a `string` feature.
### Data Splits
| name |train|ca_test|test|
|-------|----:|------:|---:|
|default|18949| 1237|3269|
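A minimal sketch of loading the three splits with the `datasets` library:
```python
from datasets import load_dataset

billsum = load_dataset("FiscalNote/billsum")

# US train, US test, and California test splits
for split in ("train", "test", "ca_test"):
    print(split, len(billsum[split]))

print(billsum["train"][0]["title"])
```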
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
The data consists of three parts: US training bills, US test bills and California test bills. The US bills were collected from the [Govinfo](https://github.com/unitedstates/congress) service provided by the United States Government Publishing Office (GPO) under CC0-1.0 license. The California bills, from the 2015-2016 session, are available from the legislature's [website](https://leginfo.legislature.ca.gov/).
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{kornilova-eidelman-2019-billsum,
title = "{B}ill{S}um: A Corpus for Automatic Summarization of {US} Legislation",
author = "Kornilova, Anastassia and
Eidelman, Vladimir",
editor = "Wang, Lu and
Cheung, Jackie Chi Kit and
Carenini, Giuseppe and
Liu, Fei",
booktitle = "Proceedings of the 2nd Workshop on New Frontiers in Summarization",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-5406",
doi = "10.18653/v1/D19-5406",
pages = "48--56",
eprint={1910.00523},
archivePrefix={arXiv},
primaryClass={cs.CL},
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@jplu](https://github.com/jplu), [@lewtun](https://github.com/lewtun) for adding this dataset. |
xlangai/spider | xlangai | 2024-03-27T13:38:42Z | 3,108 | 143 | [
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1809.08887",
"region:us",
"text-to-sql"
] | [
"text2text-generation"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
- machine-generated
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: spider-1
pretty_name: Spider
tags:
- text-to-sql
dataset_info:
config_name: spider
features:
- name: db_id
dtype: string
- name: query
dtype: string
- name: question
dtype: string
- name: query_toks
sequence: string
- name: query_toks_no_value
sequence: string
- name: question_toks
sequence: string
splits:
- name: train
num_bytes: 4743786
num_examples: 7000
- name: validation
num_bytes: 682090
num_examples: 1034
download_size: 957246
dataset_size: 5425876
configs:
- config_name: spider
data_files:
- split: train
path: spider/train-*
- split: validation
path: spider/validation-*
default: true
---
# Dataset Card for Spider
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://yale-lily.github.io/spider
- **Repository:** https://github.com/taoyds/spider
- **Paper:** https://www.aclweb.org/anthology/D18-1425/
- **Paper:** https://arxiv.org/abs/1809.08887
- **Point of Contact:** [Yale LILY](https://yale-lily.github.io/)
### Dataset Summary
Spider is a large-scale complex and cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students.
The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases.
### Supported Tasks and Leaderboards
The leaderboard can be seen at https://yale-lily.github.io/spider
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
**What do the instances that comprise the dataset represent?**
Each instance is a natural language question paired with the equivalent SQL query
**How many instances are there in total?**
8,034 question and SQL query pairs (7,000 train and 1,034 validation).
**What data does each instance consist of?**
[More Information Needed]
### Data Fields
* **db_id**: Database name
* **question**: Natural language to interpret into SQL
* **query**: Target SQL query
* **query_toks**: List of tokens for the query
* **query_toks_no_value**: List of tokens for the query with literal values replaced by a placeholder
* **question_toks**: List of tokens for the question
### Data Splits
**train**: 7,000 question and SQL query pairs
**dev**: 1,034 question and SQL query pairs
[More Information Needed]
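A minimal sketch of loading the dataset with the `datasets` library:
```python
from datasets import load_dataset

spider = load_dataset("xlangai/spider", split="train")

example = spider[0]
print(example["db_id"])     # database name
print(example["question"])  # natural language question
print(example["query"])     # target SQL query
```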
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
[More Information Needed]
### Annotations
The dataset was annotated by 11 college students at Yale University.
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
## Additional Information
The authors listed on the homepage maintain and support the dataset.
### Dataset Curators
[More Information Needed]
### Licensing Information
The spider dataset is licensed under
the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode)
[More Information Needed]
### Citation Information
```
@inproceedings{yu-etal-2018-spider,
title = "{S}pider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-{SQL} Task",
author = "Yu, Tao and
Zhang, Rui and
Yang, Kai and
Yasunaga, Michihiro and
Wang, Dongxu and
Li, Zifan and
Ma, James and
Li, Irene and
Yao, Qingning and
Roman, Shanelle and
Zhang, Zilin and
Radev, Dragomir",
editor = "Riloff, Ellen and
Chiang, David and
Hockenmaier, Julia and
Tsujii, Jun{'}ichi",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
month = oct # "-" # nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D18-1425",
doi = "10.18653/v1/D18-1425",
pages = "3911--3921",
archivePrefix={arXiv},
eprint={1809.08887},
primaryClass={cs.CL},
}
```
### Contributions
Thanks to [@olinguyen](https://github.com/olinguyen) for adding this dataset. |
ctheodoris/Genecorpus-30M | ctheodoris | 2024-03-25T23:01:49Z | 2,190 | 71 | [
"license:apache-2.0",
"region:us"
] | [] | 2022-03-12T21:21:46Z | 1 | ---
license: apache-2.0
---
# Dataset Card for Genecorpus-30M
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Species](#species)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Citation Information](#citation-information)
<!---
- [Licensing Information](#licensing-information)
- [Contributions](#contributions)
--->
## Dataset Description
<!--- **Paper:**
--->
- **Point of Contact:** [email protected]
### Dataset Summary
We assembled a large-scale pretraining corpus, Genecorpus-30M, comprised of ~30 million human single cell transcriptomes from a broad range of tissues from publicly available data. This corpus was used for pretraining [Geneformer](https://huggingface.co/ctheodoris/Geneformer), a pretrained transformer model that enables context-aware predictions in settings with limited data in network biology.
See [our manuscript](https://rdcu.be/ddrx0) for details.
### Supported Tasks
This corpus was used for pretraining [Geneformer](https://rdcu.be/ddrx0) and is compatible with pretraining or fine-tuning Geneformer or similar models.
### Species
Homo sapiens
## Dataset Structure
### Data Instances
Genecorpus-30M is provided as tokenized data in the Huggingface Datasets structure, which is based on the Apache Arrow format. Each example within the dataset is composed of the rank value encoding for a single cell within the corpus. Rank value encodings provide a nonparametric representation of each single cell’s transcriptome, ranking genes by their expression within that cell normalized by their expression across the entire Genecorpus-30M. This method takes advantage of the many observations of each gene’s expression across Genecorpus-30M to prioritize genes that distinguish cell state. Specifically, this method will deprioritize ubiquitously highly-expressed housekeeping genes by normalizing them to a lower rank. Conversely, genes such as transcription factors that may be lowly expressed when they are expressed but highly distinguish cell state will move to a higher rank within the encoding. Furthermore, this rank-based approach may be more robust against technical artifacts that may systematically bias the absolute transcript counts value while the overall relative ranking of genes within each cell remains more stable.
To accomplish this, we first calculated the nonzero median value of expression of each detected gene across all cells from the entire Genecorpus-30M. We aggregated the transcript count distribution for each gene, normalizing the gene transcript counts in each cell by the total transcript count of that cell to account for varying sequencing depth. We then normalized the genes in each single cell transcriptome by that gene’s nonzero median value of expression across Genecorpus-30M and ordered the genes by the rank of their normalized expression in that specific cell. Of note, we opted to use the nonzero median value of expression rather than include zeros in the distribution so as not to weight the value by tissue representation within Genecorpus-30M, assuming that a representative range of transcript values would be observed within the cells in which each gene was detected.
The rank value encodings for each single cell transcriptome were then tokenized based on a total vocabulary of 25,424 protein-coding or miRNA genes detected within Genecorpus-30M. The token dictionary mapping each token ID to special tokens (pad and mask) or Ensembl IDs for each gene is included within the repository as a pickle file (token_dictionary.pkl).
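A minimal sketch of this encoding, assuming a raw count vector for one cell and precomputed per-gene nonzero medians (all names except `token_dictionary.pkl` are illustrative):
```python
import pickle

import numpy as np

# token_dictionary.pkl (included in this repository) maps special tokens and
# Ensembl gene IDs to token IDs
with open("token_dictionary.pkl", "rb") as f:
    token_dict = pickle.load(f)

def rank_value_encode(cell_counts, gene_ids, nonzero_medians):
    # Normalize by the cell's total transcript count (sequencing depth)
    depth_normed = cell_counts / cell_counts.sum()
    # Normalize each gene by its nonzero median across Genecorpus-30M
    corpus_normed = depth_normed / nonzero_medians
    # Rank detected genes by normalized expression, highest first
    detected = np.nonzero(cell_counts)[0]
    ranked = detected[np.argsort(-corpus_normed[detected])]
    return [token_dict[gene_ids[i]] for i in ranked]
```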
### Data Fields
- `input_ids`: rank value encoding for an example cell
- `lengths`: length of rank value encoding for that example cell
### Data Splits
The dataset does not contain any predefined splits.
## Dataset Creation
### Curation Rationale
Mapping the gene regulatory networks that drive disease progression enables screening for molecules that correct the network by normalizing core regulatory elements, rather than targeting peripheral downstream effectors that may not be disease modifying. However, mapping the gene network architecture requires large amounts of transcriptomic data to learn the connections between genes, which impedes network-correcting drug discovery in settings with limited data, including rare diseases and diseases affecting clinically inaccessible tissues. Although data remains limited in these settings, recent advances in sequencing technologies have driven a rapid expansion in the amount of transcriptomic data available from human tissues more broadly. Furthermore, single cell technologies have facilitated the observation of transcriptomic states without averaging genes’ expression across multiple cells, potentially providing more precise data for inference of network interactions, especially in diseases driven by dysregulation of multiple cell types. Recently, the concept of transfer learning has revolutionized fields such as natural language understanding and computer vision by leveraging deep learning models pretrained on large-scale general datasets that can then be fine-tuned towards a vast array of downstream tasks with limited task-specific data that would be insufficient to yield meaningful predictions when used in isolation. We therefore assembled Genecorpus-30M to allow the large-scale pretraining of [Geneformer](https://huggingface.co/ctheodoris/Geneformer), a pretrained transformer model that enables context-aware predictions in settings with limited data in network biology.
### Source Data
#### Initial Data Collection and Normalization
Source data included 29.9 million (29,900,531) human single cell transcriptomes from a broad range of tissues from 561 publicly available datasets from original studies cited in the Methods of Theodoris et al, Nature 2023. Datasets were filtered to retain cells with total read counts within three standard deviations of the mean within that dataset and mitochondrial reads within three standard deviations of the mean within that dataset. Ensembl-annotated protein-coding and miRNA genes were used for downstream analysis. Cells with fewer than seven detected Ensembl-annotated protein-coding or miRNA genes were excluded, as the 15% masking used for the pretraining learning objective would not reliably mask a gene in cells with fewer detected genes. Ultimately, 27.4 million (27,406,217) cells passed the defined quality filters. Cells were then represented as rank value encodings, as discussed above in [Data Instances](#data-instances).
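A schematic version of these per-dataset quality filters, as a sketch only (the thresholds mirror the text above; the array names are illustrative):

```python
import numpy as np

def qc_mask(total_counts: np.ndarray, mito_counts: np.ndarray,
            n_genes_detected: np.ndarray) -> np.ndarray:
    """Boolean mask of cells passing the quality filters within one dataset."""
    within_total = np.abs(total_counts - total_counts.mean()) <= 3 * total_counts.std()
    within_mito = np.abs(mito_counts - mito_counts.mean()) <= 3 * mito_counts.std()
    enough_genes = n_genes_detected >= 7
    return within_total & within_mito & enough_genes
```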
#### Who are the source data producers?
Publicly available datasets containing raw counts were collected from National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GEO), NCBI Sequence Read Archive (SRA), Human Cell Atlas, European Molecular Biology Laboratory-European Bioinformatics Institute (EMBL-EBI) Single Cell Expression Atlas, Broad Institute Single Cell Portal, Brotman Baty Institute (BBI)-Allen Single Cell Atlases, Tumor Immune Single-cell Hub (TISCH) (excluding malignant cells), Panglao Database, 10x Genomics, University of California, Santa Cruz Cell Browser, European Genome-phenome Archive, Synapse, Riken, Zenodo, National Institutes of Health (NIH) Figshare Archive, NCBI dbGap, Refine.bio, China National GeneBank Sequence Archive, Mendeley Data, and individual communication with authors of the original studies as cited in the Methods of Theodoris et al, Nature 2023.
### Annotations
#### Annotation process
Genecorpus-30M does not contain annotations.
#### Who are the annotators?
N/A
### Personal and Sensitive Information
There is no personal or sensitive information included in the dataset. The dataset is composed of rank value encodings, so there are no traceable sequencing reads included.
## Considerations for Using the Data
### Social Impact of Dataset
Genecorpus-30M enabled the large-scale pretraining of [Geneformer](https://huggingface.co/ctheodoris/Geneformer), a foundation model that enables context-aware predictions in settings with limited data in network biology. Within our publication, we demonstrated that during pretraining, Geneformer gained a fundamental understanding of network dynamics, encoding network hierarchy in the model’s attention weights in a completely self-supervised manner. Fine-tuning Geneformer towards a diverse panel of downstream tasks relevant to chromatin and network dynamics using limited task-specific data demonstrated that Geneformer consistently boosted predictive accuracy. Applied to disease modeling with limited patient data, Geneformer identified candidate therapeutic targets for cardiomyopathy. Overall, Geneformer represents a pretrained foundation model from which fine-tuning towards a broad range of downstream applications can be pursued to accelerate discovery of key network regulators and candidate therapeutic targets.
### Discussion of Biases
We excluded cells with high mutational burdens (e.g. malignant cells and immortalized cell lines) that could lead to substantial network rewiring without companion genome sequencing to facilitate interpretation. We only included droplet-based sequencing platforms to assure expression value unit comparability. Although we assembled the dataset to represent as diverse a set of human tissues and cell types as possible, particular tissues and cell types are not represented due to unavailability of public data at the time of dataset assembly. In our manuscript, we demonstrated that pretraining with larger and more diverse corpora consistently improved Geneformer’s predictive power, consistent with observations that large-scale pretraining allows training of deeper models that ultimately have greater predictive potential in fields including NLU, computer vision, and mathematical problem-solving. Additionally, exposure to hundreds of experimental datasets during pretraining also appeared to promote robustness to batch-dependent technical artifacts and individual variability that commonly impact single cell analyses in biology. These findings suggest that as the amount of publicly available transcriptomic data continues to expand, future models pretrained on even larger-scale corpora may open opportunities to achieve meaningful predictions in even more elusive tasks with increasingly limited task-specific data.
### Other Known Limitations
Genecorpus-30M was intended to be used for self-supervised pretraining. To achieve the best possible predictions in downstream tasks, Geneformer should be fine-tuned with labeled datasets relevant to the task at hand.
## Additional Information
### Dataset Curators
Christina Theodoris, MD, PhD
### Citation Information
Theodoris CV*, Xiao L, Chopra A, Chaffin MD, Al Sayed ZR, Hill MC, Mantineo H, Brydon EM, Zeng Z, Liu XS, Ellinor PT*. Transfer learning enables predictions in network biology. Nature. 2023 May 31; Epub ahead of print.
(*co-corresponding authors)
|
alvations/c4p0-x1-de-en | alvations | 2024-03-24T03:55:50Z | 11,110 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-03-23T10:16:43Z | null | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
- name: target_backto_source
dtype: string
- name: raw_target
list:
- name: generated_text
dtype: string
- name: raw_target_backto_source
list:
- name: generated_text
dtype: string
- name: prompt
dtype: string
- name: reverse_prompt
dtype: string
- name: source_langid
dtype: string
- name: target_langid
dtype: string
- name: target_backto_source_langid
dtype: string
- name: doc_id
dtype: int64
- name: sent_id
dtype: int64
- name: timestamp
dtype: timestamp[us]
- name: url
dtype: string
- name: doc_hash
dtype: string
splits:
- name: train
num_bytes: 32165
num_examples: 31
download_size: 22249
dataset_size: 32165
configs:
- config_name: default
data_files:
- split: train
path: c0d4dc8660289947/train-*
---
|
alvations/c4p0-x1-en-ja | alvations | 2024-03-24T03:55:23Z | 23,170 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-03-23T09:54:37Z | null | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
- name: target_backto_source
dtype: string
- name: raw_target
list:
- name: generated_text
dtype: string
- name: raw_target_backto_source
list:
- name: generated_text
dtype: string
- name: prompt
dtype: string
- name: reverse_prompt
dtype: string
- name: source_langid
dtype: string
- name: target_langid
dtype: string
- name: target_backto_source_langid
dtype: string
- name: doc_id
dtype: int64
- name: sent_id
dtype: int64
- name: timestamp
dtype: string
- name: url
dtype: string
- name: doc_hash
dtype: string
splits:
- name: train
num_bytes: 49764
num_examples: 42
download_size: 37636
dataset_size: 49764
configs:
- config_name: default
data_files:
- split: train
path: 66034f82c5c65ae4/train-*
---
|
bigbio/pubmed_qa | bigbio | 2024-03-23T19:06:35Z | 4,483 | 46 | [
"multilinguality:monolingual",
"language:en",
"license:mit",
"region:us"
] | [] | 2022-11-13T22:11:45Z | 2 |
---
language:
- en
bigbio_language:
- English
license: mit
multilinguality: monolingual
bigbio_license_shortname: MIT
pretty_name: PubMedQA
homepage: https://github.com/pubmedqa/pubmedqa
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- QUESTION_ANSWERING
---
# Dataset Card for PubMedQA
## Dataset Description
- **Homepage:** https://github.com/pubmedqa/pubmedqa
- **Pubmed:** True
- **Public:** True
- **Tasks:** QA
PubMedQA is a novel biomedical question answering (QA) dataset collected from PubMed abstracts.
The task of PubMedQA is to answer biomedical research questions with yes/no/maybe using the corresponding abstracts.
PubMedQA has 1k expert-annotated (PQA-L), 61.2k unlabeled (PQA-U) and 211.3k artificially generated QA instances (PQA-A).
Each PubMedQA instance is composed of:
(1) a question which is either an existing research article title or derived from one,
(2) a context which is the corresponding PubMed abstract without its conclusion,
(3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question, and
(4) a yes/no/maybe answer which summarizes the conclusion.
PubMedQA is the first QA dataset where reasoning over biomedical research texts,
especially their quantitative contents, is required to answer the questions.
The PubMedQA dataset comprises 3 different subsets (a loading sketch follows the list):
(1) PubMedQA Labeled (PQA-L): a labeled subset of 1k manually annotated yes/no/maybe QA instances collected from PubMed articles.
(2) PubMedQA Artificial (PQA-A): an artificially labeled subset of 211.3k PubMed articles with questions automatically generated from the statement titles and yes/no answer labels generated using a simple heuristic.
(3) PubMedQA Unlabeled (PQA-U): an unlabeled subset of 61.2k context-question pairs collected from PubMed articles.
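For reference, a minimal loading sketch with the `datasets` library; the configuration name below follows the BigBio naming convention and is an assumption, so consult the dataset's loading script for the exact configurations available.

```python
from datasets import load_dataset

# Config name is an assumption (BigBio convention); check the loader for
# the exact list of available configurations.
pqa = load_dataset("bigbio/pubmed_qa", name="pubmed_qa_labeled_fold0_source")
print(pqa)              # available splits and their sizes
print(pqa["train"][0])  # one question/context/answer instance
```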
## Citation Information
```
@inproceedings{jin2019pubmedqa,
title={PubMedQA: A Dataset for Biomedical Research Question Answering},
author={Jin, Qiao and Dhingra, Bhuwan and Liu, Zhengping and Cohen, William and Lu, Xinghua},
booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
pages={2567--2577},
year={2019}
}
```
|
open-llm-leaderboard-old/details_gpt2 | open-llm-leaderboard-old | 2024-03-23T06:19:40Z | 19,921 | 0 | [
"region:us"
] | [] | 2023-08-18T14:35:26Z | null | ---
pretty_name: Evaluation run of gpt2
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [gpt2](https://huggingface.co/gpt2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 65 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 25 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_gpt2\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-23T06:18:16.565546](https://huggingface.co/datasets/open-llm-leaderboard/details_gpt2/blob/main/results_2024-03-23T06-18-16.565546.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.25780579051672486,\n\
\ \"acc_stderr\": 0.030658881019520554,\n \"acc_norm\": 0.2586547713391113,\n\
\ \"acc_norm_stderr\": 0.031431381356225356,\n \"mc1\": 0.22766217870257038,\n\
\ \"mc1_stderr\": 0.01467925503211107,\n \"mc2\": 0.4069116400376613,\n\
\ \"mc2_stderr\": 0.014934250122346554\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.197098976109215,\n \"acc_stderr\": 0.011625047669880633,\n\
\ \"acc_norm\": 0.22013651877133106,\n \"acc_norm_stderr\": 0.01210812488346097\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.29267078271260705,\n\
\ \"acc_stderr\": 0.004540586983229993,\n \"acc_norm\": 0.3152758414658435,\n\
\ \"acc_norm_stderr\": 0.0046367607625228515\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.21,\n \"acc_stderr\": 0.040936018074033256,\n \
\ \"acc_norm\": 0.21,\n \"acc_norm_stderr\": 0.040936018074033256\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.22962962962962963,\n\
\ \"acc_stderr\": 0.03633384414073462,\n \"acc_norm\": 0.22962962962962963,\n\
\ \"acc_norm_stderr\": 0.03633384414073462\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.16447368421052633,\n \"acc_stderr\": 0.0301675334686327,\n\
\ \"acc_norm\": 0.16447368421052633,\n \"acc_norm_stderr\": 0.0301675334686327\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.17,\n\
\ \"acc_stderr\": 0.0377525168068637,\n \"acc_norm\": 0.17,\n \
\ \"acc_norm_stderr\": 0.0377525168068637\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.24150943396226415,\n \"acc_stderr\": 0.026341480371118345,\n\
\ \"acc_norm\": 0.24150943396226415,\n \"acc_norm_stderr\": 0.026341480371118345\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2222222222222222,\n\
\ \"acc_stderr\": 0.03476590104304134,\n \"acc_norm\": 0.2222222222222222,\n\
\ \"acc_norm_stderr\": 0.03476590104304134\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.2,\n \"acc_stderr\": 0.04020151261036846,\n \
\ \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.04020151261036846\n },\n\
\ \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.28,\n\
\ \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.28,\n \
\ \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.24277456647398843,\n\
\ \"acc_stderr\": 0.0326926380614177,\n \"acc_norm\": 0.24277456647398843,\n\
\ \"acc_norm_stderr\": 0.0326926380614177\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.2549019607843137,\n \"acc_stderr\": 0.043364327079931785,\n\
\ \"acc_norm\": 0.2549019607843137,\n \"acc_norm_stderr\": 0.043364327079931785\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.16,\n \"acc_stderr\": 0.03684529491774709,\n \"acc_norm\": 0.16,\n\
\ \"acc_norm_stderr\": 0.03684529491774709\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.2723404255319149,\n \"acc_stderr\": 0.029101290698386698,\n\
\ \"acc_norm\": 0.2723404255319149,\n \"acc_norm_stderr\": 0.029101290698386698\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2631578947368421,\n\
\ \"acc_stderr\": 0.041424397194893624,\n \"acc_norm\": 0.2631578947368421,\n\
\ \"acc_norm_stderr\": 0.041424397194893624\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.2413793103448276,\n \"acc_stderr\": 0.03565998174135302,\n\
\ \"acc_norm\": 0.2413793103448276,\n \"acc_norm_stderr\": 0.03565998174135302\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.25396825396825395,\n \"acc_stderr\": 0.022418042891113942,\n \"\
acc_norm\": 0.25396825396825395,\n \"acc_norm_stderr\": 0.022418042891113942\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.14285714285714285,\n\
\ \"acc_stderr\": 0.0312984318574381,\n \"acc_norm\": 0.14285714285714285,\n\
\ \"acc_norm_stderr\": 0.0312984318574381\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.15,\n \"acc_stderr\": 0.035887028128263686,\n \
\ \"acc_norm\": 0.15,\n \"acc_norm_stderr\": 0.035887028128263686\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.2967741935483871,\n \"acc_stderr\": 0.025988500792411894,\n \"\
acc_norm\": 0.2967741935483871,\n \"acc_norm_stderr\": 0.025988500792411894\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.270935960591133,\n \"acc_stderr\": 0.03127090713297698,\n \"acc_norm\"\
: 0.270935960591133,\n \"acc_norm_stderr\": 0.03127090713297698\n },\n\
\ \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\"\
: 0.26,\n \"acc_stderr\": 0.04408440022768079,\n \"acc_norm\": 0.26,\n\
\ \"acc_norm_stderr\": 0.04408440022768079\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.21818181818181817,\n \"acc_stderr\": 0.03225078108306289,\n\
\ \"acc_norm\": 0.21818181818181817,\n \"acc_norm_stderr\": 0.03225078108306289\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.35353535353535354,\n \"acc_stderr\": 0.03406086723547153,\n \"\
acc_norm\": 0.35353535353535354,\n \"acc_norm_stderr\": 0.03406086723547153\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.36787564766839376,\n \"acc_stderr\": 0.03480175668466036,\n\
\ \"acc_norm\": 0.36787564766839376,\n \"acc_norm_stderr\": 0.03480175668466036\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.2717948717948718,\n \"acc_stderr\": 0.022556551010132358,\n\
\ \"acc_norm\": 0.2717948717948718,\n \"acc_norm_stderr\": 0.022556551010132358\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.26296296296296295,\n \"acc_stderr\": 0.026842057873833706,\n \
\ \"acc_norm\": 0.26296296296296295,\n \"acc_norm_stderr\": 0.026842057873833706\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.28991596638655465,\n \"acc_stderr\": 0.029472485833136098,\n\
\ \"acc_norm\": 0.28991596638655465,\n \"acc_norm_stderr\": 0.029472485833136098\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.271523178807947,\n \"acc_stderr\": 0.03631329803969654,\n \"acc_norm\"\
: 0.271523178807947,\n \"acc_norm_stderr\": 0.03631329803969654\n },\n\
\ \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.3486238532110092,\n\
\ \"acc_stderr\": 0.020431254090714328,\n \"acc_norm\": 0.3486238532110092,\n\
\ \"acc_norm_stderr\": 0.020431254090714328\n },\n \"harness|hendrycksTest-high_school_statistics|5\"\
: {\n \"acc\": 0.4722222222222222,\n \"acc_stderr\": 0.0340470532865388,\n\
\ \"acc_norm\": 0.4722222222222222,\n \"acc_norm_stderr\": 0.0340470532865388\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.25,\n \"acc_stderr\": 0.03039153369274154,\n \"acc_norm\": 0.25,\n\
\ \"acc_norm_stderr\": 0.03039153369274154\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\
: {\n \"acc\": 0.24472573839662448,\n \"acc_stderr\": 0.027985699387036416,\n\
\ \"acc_norm\": 0.24472573839662448,\n \"acc_norm_stderr\": 0.027985699387036416\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.2914798206278027,\n\
\ \"acc_stderr\": 0.030500283176545923,\n \"acc_norm\": 0.2914798206278027,\n\
\ \"acc_norm_stderr\": 0.030500283176545923\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.26717557251908397,\n \"acc_stderr\": 0.038808483010823944,\n\
\ \"acc_norm\": 0.26717557251908397,\n \"acc_norm_stderr\": 0.038808483010823944\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.32231404958677684,\n \"acc_stderr\": 0.04266416363352168,\n \"\
acc_norm\": 0.32231404958677684,\n \"acc_norm_stderr\": 0.04266416363352168\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.21296296296296297,\n\
\ \"acc_stderr\": 0.03957835471980981,\n \"acc_norm\": 0.21296296296296297,\n\
\ \"acc_norm_stderr\": 0.03957835471980981\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.26380368098159507,\n \"acc_stderr\": 0.03462419931615623,\n\
\ \"acc_norm\": 0.26380368098159507,\n \"acc_norm_stderr\": 0.03462419931615623\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.25892857142857145,\n\
\ \"acc_stderr\": 0.041577515398656284,\n \"acc_norm\": 0.25892857142857145,\n\
\ \"acc_norm_stderr\": 0.041577515398656284\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.34951456310679613,\n \"acc_stderr\": 0.04721188506097173,\n\
\ \"acc_norm\": 0.34951456310679613,\n \"acc_norm_stderr\": 0.04721188506097173\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.1794871794871795,\n\
\ \"acc_stderr\": 0.025140935950335418,\n \"acc_norm\": 0.1794871794871795,\n\
\ \"acc_norm_stderr\": 0.025140935950335418\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.27,\n \"acc_stderr\": 0.044619604333847394,\n \
\ \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.044619604333847394\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.21583652618135377,\n\
\ \"acc_stderr\": 0.014711684386139958,\n \"acc_norm\": 0.21583652618135377,\n\
\ \"acc_norm_stderr\": 0.014711684386139958\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.24277456647398843,\n \"acc_stderr\": 0.0230836585869842,\n\
\ \"acc_norm\": 0.24277456647398843,\n \"acc_norm_stderr\": 0.0230836585869842\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2424581005586592,\n\
\ \"acc_stderr\": 0.014333522059217889,\n \"acc_norm\": 0.2424581005586592,\n\
\ \"acc_norm_stderr\": 0.014333522059217889\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.21895424836601307,\n \"acc_stderr\": 0.02367908986180772,\n\
\ \"acc_norm\": 0.21895424836601307,\n \"acc_norm_stderr\": 0.02367908986180772\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.24758842443729903,\n\
\ \"acc_stderr\": 0.024513879973621967,\n \"acc_norm\": 0.24758842443729903,\n\
\ \"acc_norm_stderr\": 0.024513879973621967\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.22530864197530864,\n \"acc_stderr\": 0.023246202647819746,\n\
\ \"acc_norm\": 0.22530864197530864,\n \"acc_norm_stderr\": 0.023246202647819746\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.26595744680851063,\n \"acc_stderr\": 0.026358065698880592,\n \
\ \"acc_norm\": 0.26595744680851063,\n \"acc_norm_stderr\": 0.026358065698880592\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.2457627118644068,\n\
\ \"acc_stderr\": 0.010996156635142692,\n \"acc_norm\": 0.2457627118644068,\n\
\ \"acc_norm_stderr\": 0.010996156635142692\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.44485294117647056,\n \"acc_stderr\": 0.030187532060329376,\n\
\ \"acc_norm\": 0.44485294117647056,\n \"acc_norm_stderr\": 0.030187532060329376\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.26143790849673204,\n \"acc_stderr\": 0.017776947157528034,\n \
\ \"acc_norm\": 0.26143790849673204,\n \"acc_norm_stderr\": 0.017776947157528034\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.21818181818181817,\n\
\ \"acc_stderr\": 0.03955932861795833,\n \"acc_norm\": 0.21818181818181817,\n\
\ \"acc_norm_stderr\": 0.03955932861795833\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.4,\n \"acc_stderr\": 0.031362502409358936,\n \
\ \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.031362502409358936\n \
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.22885572139303484,\n\
\ \"acc_stderr\": 0.029705284056772426,\n \"acc_norm\": 0.22885572139303484,\n\
\ \"acc_norm_stderr\": 0.029705284056772426\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.27,\n \"acc_stderr\": 0.04461960433384739,\n \
\ \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.04461960433384739\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.1927710843373494,\n\
\ \"acc_stderr\": 0.030709824050565274,\n \"acc_norm\": 0.1927710843373494,\n\
\ \"acc_norm_stderr\": 0.030709824050565274\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.21052631578947367,\n \"acc_stderr\": 0.0312678171466318,\n\
\ \"acc_norm\": 0.21052631578947367,\n \"acc_norm_stderr\": 0.0312678171466318\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.22766217870257038,\n\
\ \"mc1_stderr\": 0.01467925503211107,\n \"mc2\": 0.4069116400376613,\n\
\ \"mc2_stderr\": 0.014934250122346554\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.5043409629044988,\n \"acc_stderr\": 0.014051956064076887\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.006823351023502654,\n \
\ \"acc_stderr\": 0.0022675371022544736\n }\n}\n```"
repo_url: https://huggingface.co/gpt2
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: [email protected]
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|arc:challenge|25_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|arc:challenge|25_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|arc:challenge|25_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|arc:challenge|25_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|arc:challenge|25_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|arc:challenge|25_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|arc:challenge|25_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|arc:challenge|25_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_drop_0
data_files:
- split: 2023_09_14T13_54_21.687636
path:
- '**/details_harness|drop|0_2023-09-14T13-54-21.687636.parquet'
- split: 2023_09_15T12_28_23.937147
path:
- '**/details_harness|drop|0_2023-09-15T12-28-23.937147.parquet'
- split: 2023_09_15T12_47_31.231445
path:
- '**/details_harness|drop|0_2023-09-15T12-47-31.231445.parquet'
- split: latest
path:
- '**/details_harness|drop|0_2023-09-15T12-47-31.231445.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|drop|3_2023-11-21T18-07-07.067275.parquet'
- split: 2023_11_29T12_47_35.686694
path:
- '**/details_harness|drop|3_2023-11-29T12-47-35.686694.parquet'
- split: 2023_11_29T12_58_42.860611
path:
- '**/details_harness|drop|3_2023-11-29T12-58-42.860611.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-11-29T12-58-42.860611.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|gsm8k|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_11_29T12_47_35.686694
path:
- '**/details_harness|gsm8k|5_2023-11-29T12-47-35.686694.parquet'
- split: 2023_11_29T12_58_42.860611
path:
- '**/details_harness|gsm8k|5_2023-11-29T12-58-42.860611.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|gsm8k|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|gsm8k|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|gsm8k|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|gsm8k|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|gsm8k|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|gsm8k|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|gsm8k|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hellaswag|10_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hellaswag|10_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hellaswag|10_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hellaswag|10_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hellaswag|10_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hellaswag|10_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hellaswag|10_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hellaswag|10_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-11-21T18-07-07.067275.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-16T13-32-55.332102.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-19T14-19-42.718116.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-23T15-28-59.872701.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-10T14-42-55.873500.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-18T14-12-21.064569.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-22T13-56-20.291666.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-23T06-18-16.565546.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-management|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-virology|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|truthfulqa:mc|0_2023-11-21T18-07-07.067275.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-23T06-18-16.565546.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_06T15_19_52.414673
path:
- '**/details_harness|winogrande|5_2023-09-06T15-19-52.414673.parquet'
- split: 2023_09_06T15_22_24.734466
path:
- '**/details_harness|winogrande|5_2023-09-06T15-22-24.734466.parquet'
- split: 2023_09_06T15_24_04.768979
path:
- '**/details_harness|winogrande|5_2023-09-06T15-24-04.768979.parquet'
- split: 2023_09_07T12_01_51.839651
path:
- '**/details_harness|winogrande|5_2023-09-07T12-01-51.839651.parquet'
- split: 2023_09_07T12_04_01.189528
path:
- '**/details_harness|winogrande|5_2023-09-07T12-04-01.189528.parquet'
- split: 2023_09_07T12_08_17.821371
path:
- '**/details_harness|winogrande|5_2023-09-07T12-08-17.821371.parquet'
- split: 2023_09_07T12_10_30.286469
path:
- '**/details_harness|winogrande|5_2023-09-07T12-10-30.286469.parquet'
- split: 2023_11_21T18_07_07.067275
path:
- '**/details_harness|winogrande|5_2023-11-21T18-07-07.067275.parquet'
- split: 2023_11_29T12_47_35.686694
path:
- '**/details_harness|winogrande|5_2023-11-29T12-47-35.686694.parquet'
- split: 2023_11_29T12_58_42.860611
path:
- '**/details_harness|winogrande|5_2023-11-29T12-58-42.860611.parquet'
- split: 2023_12_16T13_32_55.332102
path:
- '**/details_harness|winogrande|5_2023-12-16T13-32-55.332102.parquet'
- split: 2023_12_19T14_19_42.718116
path:
- '**/details_harness|winogrande|5_2023-12-19T14-19-42.718116.parquet'
- split: 2023_12_23T15_28_59.872701
path:
- '**/details_harness|winogrande|5_2023-12-23T15-28-59.872701.parquet'
- split: 2024_01_10T14_42_55.873500
path:
- '**/details_harness|winogrande|5_2024-01-10T14-42-55.873500.parquet'
- split: 2024_01_18T14_12_21.064569
path:
- '**/details_harness|winogrande|5_2024-01-18T14-12-21.064569.parquet'
- split: 2024_01_22T13_56_20.291666
path:
- '**/details_harness|winogrande|5_2024-01-22T13-56-20.291666.parquet'
- split: 2024_03_23T06_18_16.565546
path:
- '**/details_harness|winogrande|5_2024-03-23T06-18-16.565546.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-23T06-18-16.565546.parquet'
- config_name: results
data_files:
- split: 2023_09_06T12_19_07.283399
path:
- results_2023-09-06T12-19-07.283399.parquet
- split: 2023_09_06T12_21_24.071294
path:
- results_2023-09-06T12-21-24.071294.parquet
- split: 2023_09_06T12_24_13.323279
path:
- results_2023-09-06T12-24-13.323279.parquet
- split: 2023_09_06T13_26_17.619860
path:
- results_2023-09-06T13-26-17.619860.parquet
- split: 2023_09_06T15_15_44.379880
path:
- results_2023-09-06T15-15-44.379880.parquet
- split: 2023_09_06T15_19_52.414673
path:
- results_2023-09-06T15-19-52.414673.parquet
- split: 2023_09_06T15_22_24.734466
path:
- results_2023-09-06T15-22-24.734466.parquet
- split: 2023_09_06T15_24_04.768979
path:
- results_2023-09-06T15-24-04.768979.parquet
- split: 2023_09_07T12_01_51.839651
path:
- results_2023-09-07T12-01-51.839651.parquet
- split: 2023_09_07T12_04_01.189528
path:
- results_2023-09-07T12-04-01.189528.parquet
- split: 2023_09_07T12_08_17.821371
path:
- results_2023-09-07T12-08-17.821371.parquet
- split: 2023_09_07T12_10_30.286469
path:
- results_2023-09-07T12-10-30.286469.parquet
- split: 2023_09_14T13_54_21.687636
path:
- results_2023-09-14T13-54-21.687636.parquet
- split: 2023_09_15T12_28_23.937147
path:
- results_2023-09-15T12-28-23.937147.parquet
- split: 2023_09_15T12_47_31.231445
path:
- results_2023-09-15T12-47-31.231445.parquet
- split: 2023_11_21T18_07_07.067275
path:
- results_2023-11-21T18-07-07.067275.parquet
- split: 2023_11_29T12_47_35.686694
path:
- results_2023-11-29T12-47-35.686694.parquet
- split: 2023_11_29T12_58_42.860611
path:
- results_2023-11-29T12-58-42.860611.parquet
- split: 2023_12_16T13_32_55.332102
path:
- results_2023-12-16T13-32-55.332102.parquet
- split: 2023_12_19T14_19_42.718116
path:
- results_2023-12-19T14-19-42.718116.parquet
- split: 2023_12_23T15_28_59.872701
path:
- results_2023-12-23T15-28-59.872701.parquet
- split: 2024_01_10T14_42_55.873500
path:
- results_2024-01-10T14-42-55.873500.parquet
- split: 2024_01_18T14_12_21.064569
path:
- results_2024-01-18T14-12-21.064569.parquet
- split: 2024_01_22T13_56_20.291666
path:
- results_2024-01-22T13-56-20.291666.parquet
- split: 2024_03_23T06_18_16.565546
path:
- results_2024-03-23T06-18-16.565546.parquet
- split: latest
path:
- results_2024-03-23T06-18-16.565546.parquet
---
# Dataset Card for Evaluation run of gpt2
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [gpt2](https://huggingface.co/gpt2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 65 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 25 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the runs (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_gpt2",
	"harness_winogrande_5",  # one configuration per evaluated task
	split="latest")  # or a timestamped split for a specific run
```
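The aggregated metrics can be loaded the same way through the "results" configuration (a minimal sketch; the exact row schema of the results parquet files is not shown here, so inspect the first row before relying on specific columns):
```python
from datasets import load_dataset

# Aggregated metrics across tasks; "latest" mirrors the most recent results file.
results = load_dataset("open-llm-leaderboard/details_gpt2",
	"results",
	split="latest")
print(results[0])  # inspect the row structure of the latest run
```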
## Latest results
These are the [latest results from run 2024-03-23T06:18:16.565546](https://huggingface.co/datasets/open-llm-leaderboard/details_gpt2/blob/main/results_2024-03-23T06-18-16.565546.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split for each eval):
```json
{
"all": {
"acc": 0.25780579051672486,
"acc_stderr": 0.030658881019520554,
"acc_norm": 0.2586547713391113,
"acc_norm_stderr": 0.031431381356225356,
"mc1": 0.22766217870257038,
"mc1_stderr": 0.01467925503211107,
"mc2": 0.4069116400376613,
"mc2_stderr": 0.014934250122346554
},
"harness|arc:challenge|25": {
"acc": 0.197098976109215,
"acc_stderr": 0.011625047669880633,
"acc_norm": 0.22013651877133106,
"acc_norm_stderr": 0.01210812488346097
},
"harness|hellaswag|10": {
"acc": 0.29267078271260705,
"acc_stderr": 0.004540586983229993,
"acc_norm": 0.3152758414658435,
"acc_norm_stderr": 0.0046367607625228515
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.21,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.21,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.22962962962962963,
"acc_stderr": 0.03633384414073462,
"acc_norm": 0.22962962962962963,
"acc_norm_stderr": 0.03633384414073462
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.16447368421052633,
"acc_stderr": 0.0301675334686327,
"acc_norm": 0.16447368421052633,
"acc_norm_stderr": 0.0301675334686327
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.17,
"acc_stderr": 0.0377525168068637,
"acc_norm": 0.17,
"acc_norm_stderr": 0.0377525168068637
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.24150943396226415,
"acc_stderr": 0.026341480371118345,
"acc_norm": 0.24150943396226415,
"acc_norm_stderr": 0.026341480371118345
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2222222222222222,
"acc_stderr": 0.03476590104304134,
"acc_norm": 0.2222222222222222,
"acc_norm_stderr": 0.03476590104304134
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.2,
"acc_stderr": 0.04020151261036846,
"acc_norm": 0.2,
"acc_norm_stderr": 0.04020151261036846
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.24277456647398843,
"acc_stderr": 0.0326926380614177,
"acc_norm": 0.24277456647398843,
"acc_norm_stderr": 0.0326926380614177
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.2549019607843137,
"acc_stderr": 0.043364327079931785,
"acc_norm": 0.2549019607843137,
"acc_norm_stderr": 0.043364327079931785
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.16,
"acc_stderr": 0.03684529491774709,
"acc_norm": 0.16,
"acc_norm_stderr": 0.03684529491774709
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.2723404255319149,
"acc_stderr": 0.029101290698386698,
"acc_norm": 0.2723404255319149,
"acc_norm_stderr": 0.029101290698386698
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2631578947368421,
"acc_stderr": 0.041424397194893624,
"acc_norm": 0.2631578947368421,
"acc_norm_stderr": 0.041424397194893624
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.2413793103448276,
"acc_stderr": 0.03565998174135302,
"acc_norm": 0.2413793103448276,
"acc_norm_stderr": 0.03565998174135302
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.25396825396825395,
"acc_stderr": 0.022418042891113942,
"acc_norm": 0.25396825396825395,
"acc_norm_stderr": 0.022418042891113942
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.14285714285714285,
"acc_stderr": 0.0312984318574381,
"acc_norm": 0.14285714285714285,
"acc_norm_stderr": 0.0312984318574381
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.15,
"acc_stderr": 0.035887028128263686,
"acc_norm": 0.15,
"acc_norm_stderr": 0.035887028128263686
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.2967741935483871,
"acc_stderr": 0.025988500792411894,
"acc_norm": 0.2967741935483871,
"acc_norm_stderr": 0.025988500792411894
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.270935960591133,
"acc_stderr": 0.03127090713297698,
"acc_norm": 0.270935960591133,
"acc_norm_stderr": 0.03127090713297698
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768079,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768079
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.21818181818181817,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.21818181818181817,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.35353535353535354,
"acc_stderr": 0.03406086723547153,
"acc_norm": 0.35353535353535354,
"acc_norm_stderr": 0.03406086723547153
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.36787564766839376,
"acc_stderr": 0.03480175668466036,
"acc_norm": 0.36787564766839376,
"acc_norm_stderr": 0.03480175668466036
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.2717948717948718,
"acc_stderr": 0.022556551010132358,
"acc_norm": 0.2717948717948718,
"acc_norm_stderr": 0.022556551010132358
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.26296296296296295,
"acc_stderr": 0.026842057873833706,
"acc_norm": 0.26296296296296295,
"acc_norm_stderr": 0.026842057873833706
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.28991596638655465,
"acc_stderr": 0.029472485833136098,
"acc_norm": 0.28991596638655465,
"acc_norm_stderr": 0.029472485833136098
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.271523178807947,
"acc_stderr": 0.03631329803969654,
"acc_norm": 0.271523178807947,
"acc_norm_stderr": 0.03631329803969654
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.3486238532110092,
"acc_stderr": 0.020431254090714328,
"acc_norm": 0.3486238532110092,
"acc_norm_stderr": 0.020431254090714328
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4722222222222222,
"acc_stderr": 0.0340470532865388,
"acc_norm": 0.4722222222222222,
"acc_norm_stderr": 0.0340470532865388
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.25,
"acc_stderr": 0.03039153369274154,
"acc_norm": 0.25,
"acc_norm_stderr": 0.03039153369274154
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.24472573839662448,
"acc_stderr": 0.027985699387036416,
"acc_norm": 0.24472573839662448,
"acc_norm_stderr": 0.027985699387036416
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.2914798206278027,
"acc_stderr": 0.030500283176545923,
"acc_norm": 0.2914798206278027,
"acc_norm_stderr": 0.030500283176545923
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.26717557251908397,
"acc_stderr": 0.038808483010823944,
"acc_norm": 0.26717557251908397,
"acc_norm_stderr": 0.038808483010823944
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.32231404958677684,
"acc_stderr": 0.04266416363352168,
"acc_norm": 0.32231404958677684,
"acc_norm_stderr": 0.04266416363352168
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.21296296296296297,
"acc_stderr": 0.03957835471980981,
"acc_norm": 0.21296296296296297,
"acc_norm_stderr": 0.03957835471980981
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.26380368098159507,
"acc_stderr": 0.03462419931615623,
"acc_norm": 0.26380368098159507,
"acc_norm_stderr": 0.03462419931615623
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.25892857142857145,
"acc_stderr": 0.041577515398656284,
"acc_norm": 0.25892857142857145,
"acc_norm_stderr": 0.041577515398656284
},
"harness|hendrycksTest-management|5": {
"acc": 0.34951456310679613,
"acc_stderr": 0.04721188506097173,
"acc_norm": 0.34951456310679613,
"acc_norm_stderr": 0.04721188506097173
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.1794871794871795,
"acc_stderr": 0.025140935950335418,
"acc_norm": 0.1794871794871795,
"acc_norm_stderr": 0.025140935950335418
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.27,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.27,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.21583652618135377,
"acc_stderr": 0.014711684386139958,
"acc_norm": 0.21583652618135377,
"acc_norm_stderr": 0.014711684386139958
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.24277456647398843,
"acc_stderr": 0.0230836585869842,
"acc_norm": 0.24277456647398843,
"acc_norm_stderr": 0.0230836585869842
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2424581005586592,
"acc_stderr": 0.014333522059217889,
"acc_norm": 0.2424581005586592,
"acc_norm_stderr": 0.014333522059217889
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.21895424836601307,
"acc_stderr": 0.02367908986180772,
"acc_norm": 0.21895424836601307,
"acc_norm_stderr": 0.02367908986180772
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.24758842443729903,
"acc_stderr": 0.024513879973621967,
"acc_norm": 0.24758842443729903,
"acc_norm_stderr": 0.024513879973621967
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.22530864197530864,
"acc_stderr": 0.023246202647819746,
"acc_norm": 0.22530864197530864,
"acc_norm_stderr": 0.023246202647819746
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.26595744680851063,
"acc_stderr": 0.026358065698880592,
"acc_norm": 0.26595744680851063,
"acc_norm_stderr": 0.026358065698880592
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.2457627118644068,
"acc_stderr": 0.010996156635142692,
"acc_norm": 0.2457627118644068,
"acc_norm_stderr": 0.010996156635142692
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.44485294117647056,
"acc_stderr": 0.030187532060329376,
"acc_norm": 0.44485294117647056,
"acc_norm_stderr": 0.030187532060329376
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.26143790849673204,
"acc_stderr": 0.017776947157528034,
"acc_norm": 0.26143790849673204,
"acc_norm_stderr": 0.017776947157528034
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.21818181818181817,
"acc_stderr": 0.03955932861795833,
"acc_norm": 0.21818181818181817,
"acc_norm_stderr": 0.03955932861795833
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.4,
"acc_stderr": 0.031362502409358936,
"acc_norm": 0.4,
"acc_norm_stderr": 0.031362502409358936
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.22885572139303484,
"acc_stderr": 0.029705284056772426,
"acc_norm": 0.22885572139303484,
"acc_norm_stderr": 0.029705284056772426
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.27,
"acc_stderr": 0.04461960433384739,
"acc_norm": 0.27,
"acc_norm_stderr": 0.04461960433384739
},
"harness|hendrycksTest-virology|5": {
"acc": 0.1927710843373494,
"acc_stderr": 0.030709824050565274,
"acc_norm": 0.1927710843373494,
"acc_norm_stderr": 0.030709824050565274
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.21052631578947367,
"acc_stderr": 0.0312678171466318,
"acc_norm": 0.21052631578947367,
"acc_norm_stderr": 0.0312678171466318
},
"harness|truthfulqa:mc|0": {
"mc1": 0.22766217870257038,
"mc1_stderr": 0.01467925503211107,
"mc2": 0.4069116400376613,
"mc2_stderr": 0.014934250122346554
},
"harness|winogrande|5": {
"acc": 0.5043409629044988,
"acc_stderr": 0.014051956064076887
},
"harness|gsm8k|5": {
"acc": 0.006823351023502654,
"acc_stderr": 0.0022675371022544736
}
}
```
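As a quick sanity check, an aggregate can be recomputed from the JSON above. The sketch below assumes the linked results file has been downloaded locally under its original name; the top-level key layout ("results" vs. a flat metrics dict) is an assumption, so it falls back to the dict itself:
```python
import json

# Hypothetical local copy of the linked results file.
with open("results_2024-03-23T06-18-16.565546.json") as f:
    report = json.load(f)

# Per-task metrics may sit under a top-level "results" key; if not,
# assume the dict itself is the metrics mapping (as printed above).
metrics = report.get("results", report)

mmlu_accs = [v["acc"] for name, v in metrics.items()
             if name.startswith("harness|hendrycksTest")]
print(f"{len(mmlu_accs)} MMLU subtasks, mean acc = {sum(mmlu_accs)/len(mmlu_accs):.4f}")
```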
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
alvations/c4p0 | alvations | 2024-03-23T01:26:11Z | 96,896 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-03-22T00:58:02Z | null | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
- name: target_backto_source
dtype: string
- name: raw_target
list:
- name: generated_text
dtype: string
- name: raw_target_backto_source
list:
- name: generated_text
dtype: string
- name: prompt
dtype: string
- name: reverse_prompt
dtype: string
- name: source_langid
dtype: string
- name: target_langid
dtype: string
- name: target_backto_source_langid
dtype: string
- name: doc_id
dtype: int64
- name: sent_id
dtype: int64
- name: timestamp
dtype: string
- name: url
dtype: string
- name: doc_hash
dtype: string
splits:
- name: train
num_bytes: 4134
num_examples: 3
download_size: 19374
dataset_size: 4134
configs:
- config_name: default
data_files:
- split: train
path: f2527aa0a4051632/train-*
---
|
cornell-movie-review-data/rotten_tomatoes | cornell-movie-review-data | 2024-03-18T14:28:45Z | 19,832 | 82 | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 2 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: mr
pretty_name: RottenTomatoes - MR Movie Review Data
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': neg
'1': pos
splits:
- name: train
num_bytes: 1074810
num_examples: 8530
- name: validation
num_bytes: 134679
num_examples: 1066
- name: test
num_bytes: 135972
num_examples: 1066
download_size: 487770
dataset_size: 1345461
train-eval-index:
- config: default
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1
args:
average: binary
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for "rotten_tomatoes"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://www.cs.cornell.edu/people/pabo/movie-review-data/](http://www.cs.cornell.edu/people/pabo/movie-review-data/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [https://arxiv.org/abs/cs/0506075](https://arxiv.org/abs/cs/0506075)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 0.49 MB
- **Size of the generated dataset:** 1.34 MB
- **Total amount of disk used:** 1.84 MB
### Dataset Summary
Movie Review Dataset.
This is a dataset containing 5,331 positive and 5,331 negative processed
sentences from Rotten Tomatoes movie reviews. This data was first used in Bo
Pang and Lillian Lee, ``Seeing stars: Exploiting class relationships for
sentiment categorization with respect to rating scales.'', Proceedings of the
ACL, 2005.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 0.49 MB
- **Size of the generated dataset:** 1.34 MB
- **Total amount of disk used:** 1.84 MB
An example of 'validation' looks as follows.
```
{
"label": 1,
"text": "Sometimes the days and nights just drag on -- it 's the morning that make me feel alive . And I have one thing to thank for that : pancakes . "
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `neg` (0), `pos` (1).
### Data Splits
The Rotten Tomatoes sentences are split into 80% train, 10% validation, and 10% test, following the practice set out in
Jinfeng Li et al., ``TEXTBUGGER: Generating Adversarial Text Against Real-world Applications.''
| name |train|validation|test|
|-------|----:|---------:|---:|
|default| 8530| 1066|1066|
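All three splits can be loaded with the 🤗 `datasets` library; a minimal sketch, assuming the `datasets` package is installed:
```python
from datasets import load_dataset

# Load the train/validation/test splits of the movie-review sentiment data.
dataset = load_dataset("cornell-movie-review-data/rotten_tomatoes")

print(dataset["train"].num_rows)       # 8530
print(dataset["validation"].num_rows)  # 1066
print(dataset["test"].num_rows)        # 1066

# Each example is a `text` string and an integer `label` (0 = neg, 1 = pos).
example = dataset["train"][0]
print(example["label"], example["text"])
```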
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{Pang+Lee:05a,
author = {Bo Pang and Lillian Lee},
title = {Seeing stars: Exploiting class relationships for sentiment
categorization with respect to rating scales},
booktitle = {Proceedings of the ACL},
year = 2005
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@jxmorris12](https://github.com/jxmorris12) for adding this dataset. |
czyzi0/the-mc-speech-dataset | czyzi0 | 2024-03-16T15:30:05Z | 2,199 | 5 | [
"task_categories:text-to-speech",
"task_categories:automatic-speech-recognition",
"language:pl",
"license:cc0-1.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-to-speech",
"automatic-speech-recognition"
] | 2023-07-03T19:31:36Z | 2 | ---
language:
- pl
license: cc0-1.0
size_categories:
- 10K<n<100K
task_categories:
- text-to-speech
- automatic-speech-recognition
pretty_name: The MC Speech Dataset
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 44100
- name: transcript
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 6985316587.668
num_examples: 24018
download_size: 6174661195
dataset_size: 6985316587.668
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
This is a public domain speech dataset consisting of 24,018 short audio clips of a single speaker reading sentences in Polish. A transcription is provided for each clip. The clips have a total length of more than 22 hours.
The texts are in the public domain. The audio was recorded in 2021-22 as part of my [master's thesis](http://dx.doi.org/10.13140/RG.2.2.26293.24800) and is also in the public domain.
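A minimal loading sketch with the 🤗 `datasets` library (assuming audio support is installed, e.g. `pip install datasets[audio]`):
```python
from datasets import load_dataset

# Single `train` split with `audio`, `transcript`, and `id` fields.
ds = load_dataset("czyzi0/the-mc-speech-dataset", split="train")

sample = ds[0]
print(sample["id"], sample["transcript"])
# Decoded audio is a dict with a waveform array and its sampling rate.
print(sample["audio"]["sampling_rate"])  # 44100
```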
If you use this dataset, please cite:
```
@masterthesis{mcspeech,
title={Analiza porównawcza korpusów nagrań mowy dla celów syntezy mowy w języku polskim},
author={Czyżnikiewicz, Mateusz},
year={2022},
month={December},
school={Warsaw University of Technology},
type={Master's thesis},
doi={10.13140/RG.2.2.26293.24800},
note={Available at \url{http://dx.doi.org/10.13140/RG.2.2.26293.24800}},
}
```
More info about the dataset can be found at https://github.com/czyzi0/the-mc-speech-dataset
Also, if you find this resource helpful, kindly consider leaving a like. |
google/wiki40b | google | 2024-03-11T16:19:48Z | 8,616 | 28 | [
"language:en",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2022-03-02T23:29:22Z | 1 | ---
language:
- en
paperswithcode_id: wiki-40b
pretty_name: Wiki-40B
dataset_info:
- config_name: ar
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 773508885
num_examples: 220885
- name: validation
num_bytes: 44102674
num_examples: 12198
- name: test
num_bytes: 43755879
num_examples: 12271
download_size: 413683528
dataset_size: 861367438
- config_name: bg
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 1413477231
num_examples: 130670
- name: validation
num_bytes: 78976448
num_examples: 7259
- name: test
num_bytes: 78350414
num_examples: 7289
download_size: 484828696
dataset_size: 1570804093
- config_name: ca
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 784791826
num_examples: 277313
- name: validation
num_bytes: 43576907
num_examples: 15362
- name: test
num_bytes: 44904134
num_examples: 15568
download_size: 480954417
dataset_size: 873272867
- config_name: cs
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 901187017
num_examples: 235971
- name: validation
num_bytes: 49743998
num_examples: 13096
- name: test
num_bytes: 49325867
num_examples: 12984
download_size: 493522926
dataset_size: 1000256882
- config_name: da
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 247928023
num_examples: 109486
- name: validation
num_bytes: 13937406
num_examples: 6173
- name: test
num_bytes: 14401179
num_examples: 6219
download_size: 156696617
dataset_size: 276266608
- config_name: de
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 4988094268
num_examples: 1554910
- name: validation
num_bytes: 278101948
num_examples: 86068
- name: test
num_bytes: 278024815
num_examples: 86594
download_size: 3174352286
dataset_size: 5544221031
- config_name: el
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 1738534924
num_examples: 93596
- name: validation
num_bytes: 97711791
num_examples: 5130
- name: test
num_bytes: 99743744
num_examples: 5261
download_size: 621575577
dataset_size: 1935990459
- config_name: en
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 9423468036
num_examples: 2926536
- name: validation
num_bytes: 527374301
num_examples: 163597
- name: test
num_bytes: 522210646
num_examples: 162274
download_size: 6183831905
dataset_size: 10473052983
- config_name: es
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 2906242601
num_examples: 872541
- name: validation
num_bytes: 161381260
num_examples: 48592
- name: test
num_bytes: 164110964
num_examples: 48764
download_size: 1783120767
dataset_size: 3231734825
- config_name: et
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 196484412
num_examples: 114464
- name: validation
num_bytes: 10987144
num_examples: 6351
- name: test
num_bytes: 10691693
num_examples: 6205
download_size: 122192870
dataset_size: 218163249
- config_name: fa
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 1551260324
num_examples: 203145
- name: validation
num_bytes: 86108146
num_examples: 11180
- name: test
num_bytes: 89064531
num_examples: 11262
download_size: 552712695
dataset_size: 1726433001
- config_name: fi
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 589614484
num_examples: 255822
- name: validation
num_bytes: 32645294
num_examples: 13962
- name: test
num_bytes: 32869383
num_examples: 14179
download_size: 346601923
dataset_size: 655129161
- config_name: fr
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 3850031120
num_examples: 1227206
- name: validation
num_bytes: 216405364
num_examples: 68655
- name: test
num_bytes: 215243874
num_examples: 68004
download_size: 2246390244
dataset_size: 4281680358
- config_name: he
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 2834322770
num_examples: 165359
- name: validation
num_bytes: 160235180
num_examples: 9231
- name: test
num_bytes: 162131949
num_examples: 9344
download_size: 754632129
dataset_size: 3156689899
- config_name: hi
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 947403521
num_examples: 45737
- name: validation
num_bytes: 54497912
num_examples: 2596
- name: test
num_bytes: 54448878
num_examples: 2643
download_size: 231716300
dataset_size: 1056350311
- config_name: hr
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 247471855
num_examples: 103857
- name: validation
num_bytes: 14004242
num_examples: 5792
- name: test
num_bytes: 13881533
num_examples: 5724
download_size: 158644264
dataset_size: 275357630
- config_name: hu
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 878753014
num_examples: 273248
- name: validation
num_bytes: 48695962
num_examples: 15208
- name: test
num_bytes: 50053050
num_examples: 15258
download_size: 466524744
dataset_size: 977502026
- config_name: id
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 315092853
num_examples: 156255
- name: validation
num_bytes: 16667760
num_examples: 8714
- name: test
num_bytes: 17798713
num_examples: 8598
download_size: 193455048
dataset_size: 349559326
- config_name: it
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 1998187938
num_examples: 732609
- name: validation
num_bytes: 109399796
num_examples: 40684
- name: test
num_bytes: 108160871
num_examples: 40443
download_size: 1330554944
dataset_size: 2215748605
- config_name: ja
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 7719156890
num_examples: 745392
- name: validation
num_bytes: 423396781
num_examples: 41576
- name: test
num_bytes: 424775191
num_examples: 41268
download_size: 2914923230
dataset_size: 8567328862
- config_name: ko
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 1424423053
num_examples: 194977
- name: validation
num_bytes: 79027067
num_examples: 10805
- name: test
num_bytes: 78623281
num_examples: 10802
download_size: 568560655
dataset_size: 1582073401
- config_name: lt
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 173899806
num_examples: 84854
- name: validation
num_bytes: 9782794
num_examples: 4754
- name: test
num_bytes: 9855094
num_examples: 4683
download_size: 100457919
dataset_size: 193537694
- config_name: lv
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 108022486
num_examples: 33064
- name: validation
num_bytes: 5999880
num_examples: 1857
- name: test
num_bytes: 6277058
num_examples: 1932
download_size: 57147319
dataset_size: 120299424
- config_name: ms
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 133193449
num_examples: 97509
- name: validation
num_bytes: 7244722
num_examples: 5357
- name: test
num_bytes: 7344948
num_examples: 5235
download_size: 80629019
dataset_size: 147783119
- config_name: nl
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 906908479
num_examples: 447555
- name: validation
num_bytes: 51519150
num_examples: 25201
- name: test
num_bytes: 49492508
num_examples: 24776
download_size: 594312303
dataset_size: 1007920137
- config_name: 'no'
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 391905155
num_examples: 190588
- name: validation
num_bytes: 22058565
num_examples: 10547
- name: test
num_bytes: 21510187
num_examples: 10588
download_size: 248974000
dataset_size: 435473907
- config_name: pl
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 1250270240
num_examples: 505191
- name: validation
num_bytes: 70048390
num_examples: 28310
- name: test
num_bytes: 69957343
num_examples: 27987
download_size: 755556434
dataset_size: 1390275973
- config_name: pt
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 1186541609
num_examples: 406507
- name: validation
num_bytes: 65911750
num_examples: 22301
- name: test
num_bytes: 65941634
num_examples: 22693
download_size: 725984914
dataset_size: 1318394993
- config_name: ro
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 378177460
num_examples: 139615
- name: validation
num_bytes: 19638614
num_examples: 7624
- name: test
num_bytes: 22095957
num_examples: 7870
download_size: 212621695
dataset_size: 419912031
- config_name: ru
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 14041955183
num_examples: 926037
- name: validation
num_bytes: 787569099
num_examples: 51287
- name: test
num_bytes: 782630173
num_examples: 51885
download_size: 4959684748
dataset_size: 15612154455
- config_name: sk
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 220400547
num_examples: 103095
- name: validation
num_bytes: 11443566
num_examples: 5604
- name: test
num_bytes: 12958230
num_examples: 5741
download_size: 122641378
dataset_size: 244802343
- config_name: sl
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 165604630
num_examples: 60927
- name: validation
num_bytes: 8686867
num_examples: 3287
- name: test
num_bytes: 8938235
num_examples: 3341
download_size: 108369067
dataset_size: 183229732
- config_name: sr
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 1778468133
num_examples: 327313
- name: validation
num_bytes: 101044816
num_examples: 18100
- name: test
num_bytes: 94774312
num_examples: 17997
download_size: 601515686
dataset_size: 1974287261
- config_name: sv
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 675484771
num_examples: 400742
- name: validation
num_bytes: 37596409
num_examples: 22263
- name: test
num_bytes: 37171140
num_examples: 22291
download_size: 402183416
dataset_size: 750252320
- config_name: th
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 1167742322
num_examples: 56798
- name: validation
num_bytes: 58604863
num_examples: 3093
- name: test
num_bytes: 63235795
num_examples: 3114
download_size: 286569412
dataset_size: 1289582980
- config_name: tl
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 27097474
num_examples: 25940
- name: validation
num_bytes: 1480857
num_examples: 1472
- name: test
num_bytes: 1421372
num_examples: 1446
download_size: 16610349
dataset_size: 29999703
- config_name: tr
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 417796625
num_examples: 142576
- name: validation
num_bytes: 23829728
num_examples: 7845
- name: test
num_bytes: 23573543
num_examples: 7890
download_size: 208571967
dataset_size: 465199896
- config_name: uk
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 5617333215
num_examples: 477618
- name: validation
num_bytes: 304063524
num_examples: 26324
- name: test
num_bytes: 309417358
num_examples: 26581
download_size: 2016970917
dataset_size: 6230814097
- config_name: vi
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 968448149
num_examples: 146255
- name: validation
num_bytes: 53118964
num_examples: 8195
- name: test
num_bytes: 51960729
num_examples: 7942
download_size: 382764219
dataset_size: 1073527842
- config_name: zh-cn
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 902812807
num_examples: 549672
- name: validation
num_bytes: 50487729
num_examples: 30299
- name: test
num_bytes: 49584239
num_examples: 30355
download_size: 667605463
dataset_size: 1002884775
- config_name: zh-tw
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
splits:
- name: train
num_bytes: 3254625339
num_examples: 552031
- name: validation
num_bytes: 185024571
num_examples: 30739
- name: test
num_bytes: 181148137
num_examples: 30670
download_size: 1375185673
dataset_size: 3620798047
configs:
- config_name: ar
data_files:
- split: train
path: ar/train-*
- split: validation
path: ar/validation-*
- split: test
path: ar/test-*
- config_name: bg
data_files:
- split: train
path: bg/train-*
- split: validation
path: bg/validation-*
- split: test
path: bg/test-*
- config_name: ca
data_files:
- split: train
path: ca/train-*
- split: validation
path: ca/validation-*
- split: test
path: ca/test-*
- config_name: cs
data_files:
- split: train
path: cs/train-*
- split: validation
path: cs/validation-*
- split: test
path: cs/test-*
- config_name: da
data_files:
- split: train
path: da/train-*
- split: validation
path: da/validation-*
- split: test
path: da/test-*
- config_name: de
data_files:
- split: train
path: de/train-*
- split: validation
path: de/validation-*
- split: test
path: de/test-*
- config_name: el
data_files:
- split: train
path: el/train-*
- split: validation
path: el/validation-*
- split: test
path: el/test-*
- config_name: en
data_files:
- split: train
path: en/train-*
- split: validation
path: en/validation-*
- split: test
path: en/test-*
- config_name: es
data_files:
- split: train
path: es/train-*
- split: validation
path: es/validation-*
- split: test
path: es/test-*
- config_name: et
data_files:
- split: train
path: et/train-*
- split: validation
path: et/validation-*
- split: test
path: et/test-*
- config_name: fa
data_files:
- split: train
path: fa/train-*
- split: validation
path: fa/validation-*
- split: test
path: fa/test-*
- config_name: fi
data_files:
- split: train
path: fi/train-*
- split: validation
path: fi/validation-*
- split: test
path: fi/test-*
- config_name: fr
data_files:
- split: train
path: fr/train-*
- split: validation
path: fr/validation-*
- split: test
path: fr/test-*
- config_name: he
data_files:
- split: train
path: he/train-*
- split: validation
path: he/validation-*
- split: test
path: he/test-*
- config_name: hi
data_files:
- split: train
path: hi/train-*
- split: validation
path: hi/validation-*
- split: test
path: hi/test-*
- config_name: hr
data_files:
- split: train
path: hr/train-*
- split: validation
path: hr/validation-*
- split: test
path: hr/test-*
- config_name: hu
data_files:
- split: train
path: hu/train-*
- split: validation
path: hu/validation-*
- split: test
path: hu/test-*
- config_name: id
data_files:
- split: train
path: id/train-*
- split: validation
path: id/validation-*
- split: test
path: id/test-*
- config_name: it
data_files:
- split: train
path: it/train-*
- split: validation
path: it/validation-*
- split: test
path: it/test-*
- config_name: ja
data_files:
- split: train
path: ja/train-*
- split: validation
path: ja/validation-*
- split: test
path: ja/test-*
- config_name: ko
data_files:
- split: train
path: ko/train-*
- split: validation
path: ko/validation-*
- split: test
path: ko/test-*
- config_name: lt
data_files:
- split: train
path: lt/train-*
- split: validation
path: lt/validation-*
- split: test
path: lt/test-*
- config_name: lv
data_files:
- split: train
path: lv/train-*
- split: validation
path: lv/validation-*
- split: test
path: lv/test-*
- config_name: ms
data_files:
- split: train
path: ms/train-*
- split: validation
path: ms/validation-*
- split: test
path: ms/test-*
- config_name: nl
data_files:
- split: train
path: nl/train-*
- split: validation
path: nl/validation-*
- split: test
path: nl/test-*
- config_name: 'no'
data_files:
- split: train
path: no/train-*
- split: validation
path: no/validation-*
- split: test
path: no/test-*
- config_name: pl
data_files:
- split: train
path: pl/train-*
- split: validation
path: pl/validation-*
- split: test
path: pl/test-*
- config_name: pt
data_files:
- split: train
path: pt/train-*
- split: validation
path: pt/validation-*
- split: test
path: pt/test-*
- config_name: ro
data_files:
- split: train
path: ro/train-*
- split: validation
path: ro/validation-*
- split: test
path: ro/test-*
- config_name: ru
data_files:
- split: train
path: ru/train-*
- split: validation
path: ru/validation-*
- split: test
path: ru/test-*
- config_name: sk
data_files:
- split: train
path: sk/train-*
- split: validation
path: sk/validation-*
- split: test
path: sk/test-*
- config_name: sl
data_files:
- split: train
path: sl/train-*
- split: validation
path: sl/validation-*
- split: test
path: sl/test-*
- config_name: sr
data_files:
- split: train
path: sr/train-*
- split: validation
path: sr/validation-*
- split: test
path: sr/test-*
- config_name: sv
data_files:
- split: train
path: sv/train-*
- split: validation
path: sv/validation-*
- split: test
path: sv/test-*
- config_name: th
data_files:
- split: train
path: th/train-*
- split: validation
path: th/validation-*
- split: test
path: th/test-*
- config_name: tl
data_files:
- split: train
path: tl/train-*
- split: validation
path: tl/validation-*
- split: test
path: tl/test-*
- config_name: tr
data_files:
- split: train
path: tr/train-*
- split: validation
path: tr/validation-*
- split: test
path: tr/test-*
- config_name: uk
data_files:
- split: train
path: uk/train-*
- split: validation
path: uk/validation-*
- split: test
path: uk/test-*
- config_name: vi
data_files:
- split: train
path: vi/train-*
- split: validation
path: vi/validation-*
- split: test
path: vi/test-*
- config_name: zh-cn
data_files:
- split: train
path: zh-cn/train-*
- split: validation
path: zh-cn/validation-*
- split: test
path: zh-cn/test-*
- config_name: zh-tw
data_files:
- split: train
path: zh-tw/train-*
- split: validation
path: zh-tw/validation-*
- split: test
path: zh-tw/test-*
---
# Dataset Card for "wiki40b"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://research.google/pubs/pub49029/](https://research.google/pubs/pub49029/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 10.47 GB
- **Total amount of disk used:** 10.47 GB
### Dataset Summary
Cleaned-up text for 40+ Wikipedia language editions of pages that
correspond to entities. The dataset has train/dev/test splits per language.
It is cleaned up by page filtering to remove disambiguation pages,
redirect pages, deleted pages, and non-entity pages. Each example contains the
Wikidata id of the entity and the full Wikipedia article after page processing
that removes non-content sections and structured objects.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### en
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 10.47 GB
- **Total amount of disk used:** 10.47 GB
An example of 'train' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### en
- `wikidata_id`: a `string` feature.
- `text`: a `string` feature.
- `version_id`: a `string` feature.
### Data Splits
|name| train |validation| test |
|----|------:|---------:|-----:|
|en |2926536| 163597|162274|
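Each language edition is a separate configuration, so a config name is required when loading. A minimal sketch, assuming the 🤗 `datasets` library:
```python
from datasets import load_dataset

# Load the English config; other languages use their code, e.g. "de", "ja", "zh-cn".
wiki = load_dataset("google/wiki40b", "en")

article = wiki["train"][0]
print(article["wikidata_id"], article["version_id"])
print(article["text"][:200])  # cleaned article text
```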
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
```
### Contributions
Thanks to [@jplu](https://github.com/jplu), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
WizardLMTeam/WizardLM_evol_instruct_70k | WizardLMTeam | 2024-03-10T01:06:43Z | 503 | 191 | [
"license:mit",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2308.09583",
"arxiv:2304.12244",
"arxiv:2306.08568",
"region:us"
] | [] | 2023-04-25T09:57:27Z | null | ---
license: mit
---
This is the training data of WizardLM.
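A minimal loading sketch with the 🤗 `datasets` library (the exact field names in the JSON are not documented here, so the schema is printed rather than assumed):
```python
from datasets import load_dataset

ds = load_dataset("WizardLMTeam/WizardLM_evol_instruct_70k", split="train")

# Inspect the schema and one instruction/response pair.
print(ds.column_names)
print(ds[0])
```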
## News
- 🔥 🔥 🔥 [08/11/2023] We release **WizardMath** Models.
- 🔥 Our **WizardMath-70B-V1.0** model slightly outperforms some closed-source LLMs on the GSM8K, including **ChatGPT 3.5**, **Claude Instant 1** and **PaLM 2 540B**.
- 🔥 Our **WizardMath-70B-V1.0** model achieves **81.6 pass@1** on the [GSM8k Benchmarks](https://github.com/openai/grade-school-math), which is **24.8** points higher than the SOTA open-source LLM.
- 🔥 Our **WizardMath-70B-V1.0** model achieves **22.7 pass@1** on the [MATH Benchmarks](https://github.com/hendrycks/math), which is **9.2** points higher than the SOTA open-source LLM.
| Model | Checkpoint | Paper | GSM8k | MATH |Online Demo| License|
| ----- |------| ---- |------|-------| ----- | ----- |
| WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **81.6** | **22.7** |[Demo](http://47.103.63.15:50083/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> |
| WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **63.9** | **14.0** |[Demo](http://47.103.63.15:50082/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> |
| WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **54.9** | **10.7** | [Demo](http://47.103.63.15:50080/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a>|
<font size=4>
| <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> |<sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>WizardEval</sup> | <sup>HumanEval</sup> | <sup>License</sup>|
| ----- |------| ---- |------|-------| ----- | ----- | ----- |
| <sup>WizardLM-13B-V1.2</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a> </sup>| | <sup>7.06</sup> | <sup>89.17%</sup> | <sup>101.4% </sup>|<sup>36.6 pass@1</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> |
| <sup>WizardLM-13B-V1.1</sup> |<sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a> </sup> | | <sup>6.76</sup> |<sup>86.32%</sup> | <sup>99.3% </sup> |<sup>25.0 pass@1</sup>| <sup>Non-commercial</sup>|
| <sup>WizardLM-30B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-30B-V1.0" target="_blank">HF Link</a></sup> | | <sup>7.01</sup> | | <sup>97.8% </sup> | <sup>37.8 pass@1</sup>| <sup>Non-commercial</sup> |
| <sup>WizardLM-13B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.0" target="_blank">HF Link</a> </sup> | | <sup>6.35</sup> | <sup>75.31%</sup> | <sup>89.1% </sup> |<sup> 24.0 pass@1 </sup> | <sup>Non-commercial</sup>|
| <sup>WizardLM-7B-V1.0 </sup>| <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" target="_blank">HF Link</a> </sup> |<sup> 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> </sup>| | | <sup>78.0% </sup> |<sup>19.1 pass@1 </sup>|<sup> Non-commercial</sup>|
| <sup>WizardCoder-15B-V1.0</sup> | <sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a></sup> | <sup>📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a></sup> | || |<sup> 57.3 pass@1 </sup> | <sup> <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a></sup> |
</font> |
stanfordnlp/snli | stanfordnlp | 2024-03-06T10:55:50Z | 6,934 | 75 | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:multi-input-text-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:extended|other-flicker-30k",
"source_datasets:extended|other-visual-genome",
"language:en",
"license:cc-by-sa-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1508.05326",
"region:us"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|other-flicker-30k
- extended|other-visual-genome
task_categories:
- text-classification
task_ids:
- natural-language-inference
- multi-input-text-classification
paperswithcode_id: snli
pretty_name: Stanford Natural Language Inference
dataset_info:
config_name: plain_text
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: test
num_bytes: 1258904
num_examples: 10000
- name: validation
num_bytes: 1263036
num_examples: 10000
- name: train
num_bytes: 65884386
num_examples: 550152
download_size: 20439300
dataset_size: 68406326
configs:
- config_name: plain_text
data_files:
- split: test
path: plain_text/test-*
- split: validation
path: plain_text/validation-*
- split: train
path: plain_text/train-*
---
# Dataset Card for SNLI
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://nlp.stanford.edu/projects/snli/
- **Repository:** [More Information Needed]
- **Paper:** https://aclanthology.org/D15-1075/
- **Paper:** https://arxiv.org/abs/1508.05326
- **Leaderboard:** https://nlp.stanford.edu/projects/snli/
- **Point of Contact:** [Samuel Bowman](mailto:[email protected])
- **Point of Contact:** [Gabor Angeli](mailto:[email protected])
- **Point of Contact:** [Chris Manning](mailto:[email protected])
### Dataset Summary
The SNLI corpus (version 1.0) is a collection of 570k human-written English sentence pairs manually labeled for balanced classification with the labels entailment, contradiction, and neutral, supporting the task of natural language inference (NLI), also known as recognizing textual entailment (RTE).
### Supported Tasks and Leaderboards
Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), is the task of determining the inference relation between two (short, ordered) texts: entailment, contradiction, or neutral ([MacCartney and Manning 2008](https://aclanthology.org/C08-1066/)).
See the [corpus webpage](https://nlp.stanford.edu/projects/snli/) for a list of published results.
### Languages
The language in the dataset is English as spoken by users of the website Flickr and as spoken by crowdworkers from Amazon Mechanical Turk. The BCP-47 code for English is en.
## Dataset Structure
### Data Instances
For each instance, there is a string for the premise, a string for the hypothesis, and an integer for the label. Note that each premise may appear three times with a different hypothesis and label. See the [SNLI corpus viewer](https://huggingface.co/datasets/viewer/?dataset=snli) to explore more examples.
```
{'premise': 'Two women are embracing while holding to go packages.'
'hypothesis': 'The sisters are hugging goodbye while holding to go packages after just eating lunch.'
'label': 1}
```
The average token count for the premises and hypotheses are given below:
| Feature | Mean Token Count |
| ---------- | ---------------- |
| Premise | 14.1 |
| Hypothesis | 8.3 |
### Data Fields
- `premise`: a string used to determine the truthfulness of the hypothesis
- `hypothesis`: a string that may be true, false, or whose truth conditions may not be knowable when compared to the premise
- `label`: an integer whose value may be either _0_, indicating that the premise entails the hypothesis, _1_, indicating that the premise and hypothesis neither entail nor contradict each other, or _2_, indicating that the hypothesis contradicts the premise. Dataset instances which don't have any gold label are marked with a -1 label. Make sure you filter them out before starting training using `datasets.Dataset.filter`, as in the sketch below.
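A minimal filtering sketch with the 🤗 `datasets` library:
```python
from datasets import load_dataset

snli = load_dataset("stanfordnlp/snli")

# Drop examples without a consensus gold label (marked -1) before training.
snli = snli.filter(lambda example: example["label"] != -1)

print({split: snli[split].num_rows for split in snli})
```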
### Data Splits
The SNLI dataset has 3 splits: _train_, _validation_, and _test_. All of the examples in the _validation_ and _test_ sets come from the set that was annotated in the validation task with no-consensus examples removed. The remaining multiply-annotated examples are in the training set with no-consensus examples removed. Each unique premise/caption shows up in only one split, even though they usually appear in at least three different examples.
| Dataset Split | Number of Instances in Split |
| ------------- |----------------------------- |
| Train | 550,152 |
| Validation | 10,000 |
| Test | 10,000 |
## Dataset Creation
### Curation Rationale
The [SNLI corpus (version 1.0)](https://nlp.stanford.edu/projects/snli/) was developed as a benchmark for natural language inference (NLI), also known as recognizing textual entailment (RTE), with the goal of producing a dataset large enough to train models using neural methodologies.
### Source Data
#### Initial Data Collection and Normalization
The hypotheses were elicited by presenting crowdworkers with captions from preexisting datasets without the associated photos, but the vocabulary of the hypotheses still reflects the content of the photos as well as the caption style of writing (e.g. mostly present tense). The dataset developers report 37,026 distinct words in the corpus, ignoring case. They allowed bare NPs as well as full sentences. Using the Stanford PCFG Parser 3.5.2 (Klein and Manning, 2003) trained on the standard training set as well as on the Brown Corpus (Francis and Kucera 1979), the authors report that 74% of the premises and 88.9% of the hypotheses result in a parse rooted with an 'S'. The corpus was developed between 2014 and 2015.
Crowdworkers were presented with a caption without the associated photo and asked to produce three alternate captions, one that is definitely true, one that might be true, and one that is definitely false. See Section 2.1 and Figure 1 for details (Bowman et al., 2015).
The corpus includes content from the [Flickr 30k corpus](http://shannon.cs.illinois.edu/DenotationGraph/) and the [VisualGenome corpus](https://visualgenome.org/). The photo captions used to prompt the data creation were collected on Flickr by [Young et al. (2014)](https://aclanthology.org/Q14-1006/), who extended the Flickr 8K dataset developed by [Hodosh et al. (2013)](https://www.jair.org/index.php/jair/article/view/10833). Hodosh et al. collected photos from the following Flickr groups: strangers!, Wild-Child (Kids in Action), Dogs in Action (Read the Rules), Outdoor Activities, Action Photography, Flickr-Social (two or more people in the photo). Young et al. do not list the specific groups they collected photos from. The VisualGenome corpus also contains images from Flickr, originally collected in [MS-COCO](https://cocodataset.org/#home) and [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/).
The premises from the Flickr 30k corpus were corrected for spelling using the Linux spell checker, and ungrammatical sentences were removed. Bowman et al. do not report any normalization, though they note that punctuation and capitalization are often omitted.
#### Who are the source language producers?
A large portion of the premises (160k) were produced in the [Flickr 30k corpus](http://shannon.cs.illinois.edu/DenotationGraph/) by an unknown number of crowdworkers. About 2,500 crowdworkers from Amazon Mechanical Turk produced the associated hypotheses. The premises from the Flickr 30k project describe people and animals whose photos were collected and presented to the Flickr 30k crowdworkers, but the SNLI corpus did not present the photos to the hypotheses creators.
The Flickr 30k corpus did not report crowdworker or photo subject demographic information or crowdworker compensation. The SNLI crowdworkers were compensated per HIT at rates between $.1 and $.5 with no incentives. Workers who ignored the guidelines were disqualified, and automated bulk submissions were rejected. No demographic information was collected from the SNLI crowdworkers.
An additional 4,000 premises come from the pilot study of the [VisualGenome corpus](https://homes.cs.washington.edu/~ranjay/visualgenome/index.html). Though the pilot study itself is not described, the location information of the 33,000 AMT crowdworkers that participated over the course of the 6 months of data collection are aggregated. Most of the workers were located in the United States (93%), with others from the Philippines, Kenya, India, Russia, and Canada. Workers were paid $6-$8 per hour.
### Annotations
#### Annotation process
56,941 of the total sentence pairs were further annotated in a validation task. Four annotators each labeled a premise-hypothesis pair as entailment, contradiction, or neither, resulting in 5 total judgements including the original hypothesis author judgement. See Section 2.2 for more details (Bowman et al., 2015).
The authors report 3/5 annotator agreement on 98% of the validation set and unanimous annotator agreement on 58.3% of the validation set. If a label was chosen by three annotators, that label was made the gold label. Following from this, 2% of the data did not have a consensus label and was labeled '-' by the authors.
| Label | Fleiss κ |
| --------------- |--------- |
| _contradiction_ | 0.77 |
| _entailment_ | 0.72 |
| _neutral_ | 0.60 |
| overall | 0.70 |
#### Who are the annotators?
The annotators of the validation task were a closed set of about 30 trusted crowdworkers on Amazon Mechanical Turk. No demographic information was collected. Annotators were compensated per HIT between $.1 and $.5 with $1 bonuses in cases where annotator labels agreed with the curators' labels for 250 randomly distributed examples.
### Personal and Sensitive Information
The dataset does not contain any personal information about the authors or the crowdworkers, but may contain descriptions of the people in the original Flickr photos.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset was developed as a benchmark for evaluating representational systems for text, especially including those induced by representation learning methods, in the task of predicting truth conditions in a given context. (It should be noted that the truth conditions of a hypothesis given a premise do not necessarily match the truth conditions of the hypothesis in the real world.) Systems that are successful at such a task may be more successful in modeling semantic representations.
### Discussion of Biases
The language reflects the content of the photos collected from Flickr, as described in the [Data Collection](#initial-data-collection-and-normalization) section. [Rudinger et al (2017)](https://aclanthology.org/W17-1609/) use pointwise mutual information to calculate a measure of association between a manually selected list of tokens corresponding to identity categories and the other words in the corpus, showing strong evidence of stereotypes across gender categories. They also provide examples in which crowdworkers reproduced harmful stereotypes or pejorative language in the hypotheses.
### Other Known Limitations
[Gururangan et al (2018)](https://aclanthology.org/N18-2017/), [Poliak et al (2018)](https://aclanthology.org/S18-2023/), and [Tsuchiya (2018)](https://aclanthology.org/L18-1239/) show that the SNLI corpus has a number of annotation artifacts. Using various classifiers, Poliak et al correctly predicted the label of the hypothesis 69% of the time without using the premise, Gururangan et al 67% of the time, and Tsuchiya 63% of the time.
## Additional Information
### Dataset Curators
The SNLI corpus was developed by Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning as part of the [Stanford NLP group](https://nlp.stanford.edu/).
It was supported by a Google Faculty Research Award, a gift from Bloomberg L.P., the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA8750-13-2-0040, the National Science Foundation under grant no. IIS 1159679, and the Department of the Navy, Office of Naval Research, under grant no. N00014-10-1-0109.
### Licensing Information
The Stanford Natural Language Inference Corpus by The Stanford NLP Group is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/).
The corpus includes content from the [Flickr 30k corpus](http://shannon.cs.illinois.edu/DenotationGraph/), also released under an Attribution-ShareAlike licence.
### Citation Information
The following paper introduces the corpus in detail. If you use the corpus in published work, please cite it:
```bibtex
@inproceedings{bowman-etal-2015-large,
title = "A large annotated corpus for learning natural language inference",
author = "Bowman, Samuel R. and
Angeli, Gabor and
Potts, Christopher and
Manning, Christopher D.",
editor = "M{\`a}rquez, Llu{\'\i}s and
Callison-Burch, Chris and
Su, Jian",
booktitle = "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
month = sep,
year = "2015",
address = "Lisbon, Portugal",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D15-1075",
doi = "10.18653/v1/D15-1075",
pages = "632--642",
}
```
The corpus includes content from the [Flickr 30k corpus](http://shannon.cs.illinois.edu/DenotationGraph/), which can be cited by way of this paper:
```bibtex
@article{young-etal-2014-image,
title = "From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions",
author = "Young, Peter and
Lai, Alice and
Hodosh, Micah and
Hockenmaier, Julia",
editor = "Lin, Dekang and
Collins, Michael and
Lee, Lillian",
journal = "Transactions of the Association for Computational Linguistics",
volume = "2",
year = "2014",
address = "Cambridge, MA",
publisher = "MIT Press",
url = "https://aclanthology.org/Q14-1006",
doi = "10.1162/tacl_a_00166",
pages = "67--78",
}
```
### Contact Information
For any comments or questions, please email [Samuel Bowman](mailto:[email protected]), [Gabor Angeli](mailto:[email protected]) and [Chris Manning](mailto:[email protected]).
### Contributions
Thanks to [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) and [@mcmillanmajora](https://github.com/mcmillanmajora) for adding this dataset. |
deepmind/narrativeqa | deepmind | 2024-03-06T07:33:05Z | 25,484 | 47 | [
"task_categories:text2text-generation",
"task_ids:abstractive-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:1712.07040",
"region:us"
] | [
"text2text-generation"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids:
- abstractive-qa
paperswithcode_id: narrativeqa
pretty_name: NarrativeQA
dataset_info:
features:
- name: document
struct:
- name: id
dtype: string
- name: kind
dtype: string
- name: url
dtype: string
- name: file_size
dtype: int32
- name: word_count
dtype: int32
- name: start
dtype: string
- name: end
dtype: string
- name: summary
struct:
- name: text
dtype: string
- name: tokens
sequence: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: question
struct:
- name: text
dtype: string
- name: tokens
sequence: string
- name: answers
list:
- name: text
dtype: string
- name: tokens
sequence: string
splits:
- name: train
num_bytes: 11556607782
num_examples: 32747
- name: test
num_bytes: 3547135501
num_examples: 10557
- name: validation
num_bytes: 1211859418
num_examples: 3461
download_size: 3232805701
dataset_size: 16315602701
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
# Dataset Card for Narrative QA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/deepmind/narrativeqa
- **Paper:** https://arxiv.org/abs/1712.07040
- **Paper:** https://aclanthology.org/Q18-1023/
- **Point of Contact:** [Tomáš Kočiský](mailto:[email protected]), [Jonathan Schwarz](mailto:[email protected]), [Phil Blunsom](mailto:[email protected]), [Chris Dyer](mailto:[email protected]), [Karl Moritz Hermann](mailto:[email protected]), [Gábor Melis](mailto:[email protected]), [Edward Grefenstette](mailto:[email protected])
### Dataset Summary
NarrativeQA is an English-language dataset of stories and corresponding questions designed to test reading comprehension, especially on long documents.
### Supported Tasks and Leaderboards
The dataset is used to test reading comprehension. There are 2 tasks proposed in the paper: "summaries only" and "stories only", depending on whether the human-generated summary or the full story text is used to answer the question.
### Languages
English
## Dataset Structure
### Data Instances
A typical data point consists of a question and answer pair along with a summary/story which can be used to answer the question. Additional information such as the URL, word count, and Wikipedia page is also provided.
A typical example looks like this:
```
{
"document": {
"id": "23jncj2n3534563110",
"kind": "movie",
"url": "https://www.imsdb.com/Movie%20Scripts/Name%20of%20Movie.html",
"file_size": 80473,
"word_count": 41000,
"start": "MOVIE screenplay by",
"end": ". THE END",
"summary": {
"text": "Joe Bloggs begins his journey exploring...",
"tokens": ["Joe", "Bloggs", "begins", "his", "journey", "exploring",...],
"url": "http://en.wikipedia.org/wiki/Name_of_Movie",
"title": "Name of Movie (film)"
},
"text": "MOVIE screenplay by John Doe\nSCENE 1..."
},
"question": {
"text": "Where does Joe Bloggs live?",
"tokens": ["Where", "does", "Joe", "Bloggs", "live", "?"],
},
"answers": [
{"text": "At home", "tokens": ["At", "home"]},
{"text": "His house", "tokens": ["His", "house"]}
]
}
```
### Data Fields
- `document.id` - Unique ID for the story.
- `document.kind` - "movie" or "gutenberg" depending on the source of the story.
- `document.url` - The URL where the story was downloaded from.
- `document.file_size` - File size (in bytes) of the story.
- `document.word_count` - Number of tokens in the story.
- `document.start` - First 3 tokens of the story. Used for verifying the story hasn't been modified.
- `document.end` - Last 3 tokens of the story. Used for verifying the story hasn't been modified.
- `document.summary.text` - Text of the wikipedia summary of the story.
- `document.summary.tokens` - Tokenized version of `document.summary.text`.
- `document.summary.url` - Wikipedia URL of the summary.
- `document.summary.title` - Wikipedia Title of the summary.
- `question` - `{"text":"...", "tokens":[...]}` for the question about the story.
- `answers` - List of `{"text":"...", "tokens":[...]}` for valid answers for the question.
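To illustrate, here is a minimal, unofficial sketch of loading the dataset with the Hugging Face `datasets` library and reading these fields; note that each split ships the full story texts, so even the validation split is over a gigabyte:

```python
from datasets import load_dataset

# Load the smallest split; "train" and "test" work the same way.
narrativeqa = load_dataset("deepmind/narrativeqa", split="validation")

example = narrativeqa[0]
print(example["document"]["kind"])              # "movie" or "gutenberg"
print(example["document"]["summary"]["title"])  # Wikipedia title of the summary
print(example["question"]["text"])              # the question itself
for answer in example["answers"]:               # all reference answers
    print("-", answer["text"])
```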
### Data Splits
The data is split into training, validation, and test sets based on story (i.e., the same story cannot appear in more than one split):
| Train | Valid | Test |
| ------ | ----- | ----- |
| 32747 | 3461 | 10557 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Stories and movie scripts were downloaded from [Project Gutenberg](https://www.gutenberg.org) and a range of movie script repositories (mainly [imsdb](http://www.imsdb.com)).
#### Who are the source language producers?
The language producers are the authors of the stories and scripts, as well as the Amazon Mechanical Turk workers who wrote the questions.
### Annotations
#### Annotation process
Amazon Mechanical Turk workers were provided with human-written summaries of the stories (to make the annotation tractable and to lead annotators towards asking non-localized questions). Stories were matched with plot summaries from Wikipedia using titles, and the matches were verified with help from human annotators. The annotators were asked to determine whether both the story and the summary refer to a movie or a book (as some books are made into movies), or whether they are the same part in a series produced in the same year. Annotators were instructed to write 10 question–answer pairs each, based solely on a given summary, and to imagine that they were writing questions to test students who had read the full stories but not the summaries. Questions had to be specific enough, given the length and complexity of the narratives, and to cover a diverse set of topics: characters, events, why something happened, and so on. Annotators were encouraged to use their own words and were prevented from copying. Answers had to be grammatical; short answers (one word, a few-word phrase, or a short sentence) were explicitly allowed, since answering factual questions with a full sentence is frequently perceived as artificial. Annotators were asked to avoid extra, unnecessary information in the question or the answer, and to avoid yes/no questions or questions about the author or the actors.
#### Who are the annotators?
Amazon Mechanical Turk workers.
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is released under a [Apache-2.0 License](https://github.com/deepmind/narrativeqa/blob/master/LICENSE).
### Citation Information
```
@article{kocisky-etal-2018-narrativeqa,
title = "The {N}arrative{QA} Reading Comprehension Challenge",
author = "Ko{\v{c}}isk{\'y}, Tom{\'a}{\v{s}} and
Schwarz, Jonathan and
Blunsom, Phil and
Dyer, Chris and
Hermann, Karl Moritz and
Melis, G{\'a}bor and
Grefenstette, Edward",
editor = "Lee, Lillian and
Johnson, Mark and
Toutanova, Kristina and
Roark, Brian",
journal = "Transactions of the Association for Computational Linguistics",
volume = "6",
year = "2018",
address = "Cambridge, MA",
publisher = "MIT Press",
url = "https://aclanthology.org/Q18-1023",
doi = "10.1162/tacl_a_00023",
pages = "317--328",
abstract = "Reading comprehension (RC){---}in contrast to information retrieval{---}requires integrating information and reasoning about events, entities, and their relations across a full document. Question answering is conventionally used to assess RC ability, in both artificial agents and children learning to read. However, existing RC datasets and tasks are dominated by questions that can be solved by selecting answers using superficial information (e.g., local context similarity or global term frequency); they thus fail to test for the essential integrative aspect of RC. To encourage progress on deeper comprehension of language, we present a new dataset and set of tasks in which the reader must answer questions about stories by reading entire books or movie scripts. These tasks are designed so that successfully answering their questions requires understanding the underlying narrative rather than relying on shallow pattern matching or salience. We show that although humans solve the tasks easily, standard RC models struggle on the tasks presented here. We provide an analysis of the dataset and the challenges it presents.",
}
```
### Contributions
Thanks to [@ghomasHudson](https://github.com/ghomasHudson) for adding this dataset. |
qiaojin/PubMedQA | qiaojin | 2024-03-06T01:50:16Z | 13,222 | 216 | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1909.06146",
"region:us"
] | [
"question-answering"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- expert-generated
- machine-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: pubmedqa
pretty_name: PubMedQA
config_names:
- pqa_artificial
- pqa_labeled
- pqa_unlabeled
dataset_info:
- config_name: pqa_artificial
features:
- name: pubid
dtype: int32
- name: question
dtype: string
- name: context
sequence:
- name: contexts
dtype: string
- name: labels
dtype: string
- name: meshes
dtype: string
- name: long_answer
dtype: string
- name: final_decision
dtype: string
splits:
- name: train
num_bytes: 443501057
num_examples: 211269
download_size: 233411194
dataset_size: 443501057
- config_name: pqa_labeled
features:
- name: pubid
dtype: int32
- name: question
dtype: string
- name: context
sequence:
- name: contexts
dtype: string
- name: labels
dtype: string
- name: meshes
dtype: string
- name: reasoning_required_pred
dtype: string
- name: reasoning_free_pred
dtype: string
- name: long_answer
dtype: string
- name: final_decision
dtype: string
splits:
- name: train
num_bytes: 2088898
num_examples: 1000
download_size: 1075513
dataset_size: 2088898
- config_name: pqa_unlabeled
features:
- name: pubid
dtype: int32
- name: question
dtype: string
- name: context
sequence:
- name: contexts
dtype: string
- name: labels
dtype: string
- name: meshes
dtype: string
- name: long_answer
dtype: string
splits:
- name: train
num_bytes: 125922964
num_examples: 61249
download_size: 66010017
dataset_size: 125922964
configs:
- config_name: pqa_artificial
data_files:
- split: train
path: pqa_artificial/train-*
- config_name: pqa_labeled
data_files:
- split: train
path: pqa_labeled/train-*
- config_name: pqa_unlabeled
data_files:
- split: train
path: pqa_unlabeled/train-*
---
# Dataset Card for PubMedQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [PubMedQA homepage](https://pubmedqa.github.io/)
- **Repository:** [PubMedQA repository](https://github.com/pubmedqa/pubmedqa)
- **Paper:** [PubMedQA: A Dataset for Biomedical Research Question Answering](https://arxiv.org/abs/1909.06146)
- **Leaderboard:** [PubMedQA: Leaderboard](https://pubmedqa.github.io/)
### Dataset Summary
The task of PubMedQA is to answer research questions with yes/no/maybe (e.g.: Do preoperative statins reduce atrial fibrillation after coronary artery bypass grafting?) using the corresponding abstracts.
### Supported Tasks and Leaderboards
The official leaderboard is available at: https://pubmedqa.github.io/.
500 questions in the `pqa_labeled` config are used as the test set. They can be found at https://github.com/pubmedqa/pubmedqa.
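As a quick, unofficial sketch of how the configs can be loaded and inspected with the `datasets` library (field names follow the schema above):

```python
from collections import Counter
from datasets import load_dataset

# The 1,000 expert-labeled instances; swap in "pqa_artificial" or
# "pqa_unlabeled" for the other configs.
pubmedqa = load_dataset("qiaojin/PubMedQA", "pqa_labeled", split="train")

example = pubmedqa[0]
print(example["question"])        # the research question
print(example["final_decision"])  # "yes", "no", or "maybe"

# Distribution of decisions over the labeled subset.
print(Counter(pubmedqa["final_decision"]))
```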
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@tuner007](https://github.com/tuner007) for adding this dataset. |
lukaemon/mmlu | lukaemon | 2024-03-04T21:42:02Z | 45,141 | 62 | [
"region:us"
] | [] | 2023-02-02T00:42:27Z | null | ---
dataset_info:
- config_name: abstract_algebra
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 18616
num_examples: 100
- name: validation
num_bytes: 1935
num_examples: 11
- name: train
num_bytes: 783
num_examples: 5
download_size: 166184960
dataset_size: 21334
- config_name: anatomy
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 32164
num_examples: 135
- name: validation
num_bytes: 3030
num_examples: 14
- name: train
num_bytes: 920
num_examples: 5
download_size: 166184960
dataset_size: 36114
- config_name: astronomy
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 45695
num_examples: 152
- name: validation
num_bytes: 4903
num_examples: 16
- name: train
num_bytes: 2029
num_examples: 5
download_size: 166184960
dataset_size: 52627
- config_name: business_ethics
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 32540
num_examples: 100
- name: validation
num_bytes: 2949
num_examples: 11
- name: train
num_bytes: 2143
num_examples: 5
download_size: 166184960
dataset_size: 37632
- config_name: clinical_knowledge
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 60887
num_examples: 265
- name: validation
num_bytes: 6449
num_examples: 29
- name: train
num_bytes: 1163
num_examples: 5
download_size: 166184960
dataset_size: 68499
- config_name: college_biology
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 47777
num_examples: 144
- name: validation
num_bytes: 4695
num_examples: 16
- name: train
num_bytes: 1485
num_examples: 5
download_size: 166184960
dataset_size: 53957
- config_name: college_chemistry
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 23996
num_examples: 100
- name: validation
num_bytes: 2260
num_examples: 8
- name: train
num_bytes: 1284
num_examples: 5
download_size: 166184960
dataset_size: 27540
- config_name: college_computer_science
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 41927
num_examples: 100
- name: validation
num_bytes: 4574
num_examples: 11
- name: train
num_bytes: 2718
num_examples: 5
download_size: 166184960
dataset_size: 49219
- config_name: college_mathematics
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 23996
num_examples: 100
- name: validation
num_bytes: 2579
num_examples: 11
- name: train
num_bytes: 1446
num_examples: 5
download_size: 166184960
dataset_size: 28021
- config_name: college_medicine
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 81174
num_examples: 173
- name: validation
num_bytes: 7743
num_examples: 22
- name: train
num_bytes: 1623
num_examples: 5
download_size: 166184960
dataset_size: 90540
- config_name: college_physics
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 29454
num_examples: 102
- name: validation
num_bytes: 3401
num_examples: 11
- name: train
num_bytes: 1365
num_examples: 5
download_size: 166184960
dataset_size: 34220
- config_name: computer_security
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 26412
num_examples: 100
- name: validation
num_bytes: 4460
num_examples: 11
- name: train
num_bytes: 1054
num_examples: 5
download_size: 166184960
dataset_size: 31926
- config_name: conceptual_physics
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 39052
num_examples: 235
- name: validation
num_bytes: 4279
num_examples: 26
- name: train
num_bytes: 887
num_examples: 5
download_size: 166184960
dataset_size: 44218
- config_name: econometrics
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 45737
num_examples: 114
- name: validation
num_bytes: 4871
num_examples: 12
- name: train
num_bytes: 1597
num_examples: 5
download_size: 166184960
dataset_size: 52205
- config_name: electrical_engineering
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 24111
num_examples: 145
- name: validation
num_bytes: 2778
num_examples: 16
- name: train
num_bytes: 925
num_examples: 5
download_size: 166184960
dataset_size: 27814
- config_name: elementary_mathematics
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 67450
num_examples: 378
- name: validation
num_bytes: 8689
num_examples: 41
- name: train
num_bytes: 1393
num_examples: 5
download_size: 166184960
dataset_size: 77532
- config_name: formal_logic
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 48891
num_examples: 126
- name: validation
num_bytes: 6142
num_examples: 14
- name: train
num_bytes: 1710
num_examples: 5
download_size: 166184960
dataset_size: 56743
- config_name: global_facts
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 17691
num_examples: 100
- name: validation
num_bytes: 1783
num_examples: 10
- name: train
num_bytes: 1182
num_examples: 5
download_size: 166184960
dataset_size: 20656
- config_name: high_school_biology
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 107550
num_examples: 310
- name: validation
num_bytes: 10786
num_examples: 32
- name: train
num_bytes: 1626
num_examples: 5
download_size: 166184960
dataset_size: 119962
- config_name: high_school_chemistry
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 57031
num_examples: 203
- name: validation
num_bytes: 6926
num_examples: 22
- name: train
num_bytes: 1173
num_examples: 5
download_size: 166184960
dataset_size: 65130
- config_name: high_school_computer_science
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 43764
num_examples: 100
- name: validation
num_bytes: 3268
num_examples: 9
- name: train
num_bytes: 2871
num_examples: 5
download_size: 166184960
dataset_size: 49903
- config_name: high_school_european_history
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 269133
num_examples: 165
- name: validation
num_bytes: 29494
num_examples: 18
- name: train
num_bytes: 11517
num_examples: 5
download_size: 166184960
dataset_size: 310144
- config_name: high_school_geography
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 40636
num_examples: 198
- name: validation
num_bytes: 4166
num_examples: 22
- name: train
num_bytes: 1356
num_examples: 5
download_size: 166184960
dataset_size: 46158
- config_name: high_school_government_and_politics
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 64711
num_examples: 193
- name: validation
num_bytes: 6904
num_examples: 21
- name: train
num_bytes: 1732
num_examples: 5
download_size: 166184960
dataset_size: 73347
- config_name: high_school_macroeconomics
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 114945
num_examples: 390
- name: validation
num_bytes: 12707
num_examples: 43
- name: train
num_bytes: 1281
num_examples: 5
download_size: 166184960
dataset_size: 128933
- config_name: high_school_mathematics
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 52952
num_examples: 270
- name: validation
num_bytes: 5550
num_examples: 29
- name: train
num_bytes: 1250
num_examples: 5
download_size: 166184960
dataset_size: 59752
- config_name: high_school_microeconomics
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 74025
num_examples: 238
- name: validation
num_bytes: 7359
num_examples: 26
- name: train
num_bytes: 1251
num_examples: 5
download_size: 166184960
dataset_size: 82635
- config_name: high_school_physics
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 58469
num_examples: 151
- name: validation
num_bytes: 6640
num_examples: 17
- name: train
num_bytes: 1442
num_examples: 5
download_size: 166184960
dataset_size: 66551
- config_name: high_school_psychology
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 155580
num_examples: 545
- name: validation
num_bytes: 16837
num_examples: 60
- name: train
num_bytes: 1858
num_examples: 5
download_size: 166184960
dataset_size: 174275
- config_name: high_school_statistics
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 109178
num_examples: 216
- name: validation
num_bytes: 9824
num_examples: 23
- name: train
num_bytes: 2481
num_examples: 5
download_size: 166184960
dataset_size: 121483
- config_name: high_school_us_history
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 295294
num_examples: 204
- name: validation
num_bytes: 31540
num_examples: 22
- name: train
num_bytes: 8817
num_examples: 5
download_size: 166184960
dataset_size: 335651
- config_name: high_school_world_history
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 376946
num_examples: 237
- name: validation
num_bytes: 45307
num_examples: 26
- name: train
num_bytes: 4835
num_examples: 5
download_size: 166184960
dataset_size: 427088
- config_name: human_aging
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 44525
num_examples: 223
- name: validation
num_bytes: 4534
num_examples: 23
- name: train
num_bytes: 961
num_examples: 5
download_size: 166184960
dataset_size: 50020
- config_name: human_sexuality
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 31181
num_examples: 131
- name: validation
num_bytes: 2325
num_examples: 12
- name: train
num_bytes: 1030
num_examples: 5
download_size: 166184960
dataset_size: 34536
- config_name: international_law
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 52672
num_examples: 121
- name: validation
num_bytes: 6370
num_examples: 13
- name: train
num_bytes: 2371
num_examples: 5
download_size: 166184960
dataset_size: 61413
- config_name: jurisprudence
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 33218
num_examples: 108
- name: validation
num_bytes: 3640
num_examples: 11
- name: train
num_bytes: 1256
num_examples: 5
download_size: 166184960
dataset_size: 38114
- config_name: logical_fallacies
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 48964
num_examples: 163
- name: validation
num_bytes: 4965
num_examples: 18
- name: train
num_bytes: 1526
num_examples: 5
download_size: 166184960
dataset_size: 55455
- config_name: machine_learning
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 33084
num_examples: 112
- name: validation
num_bytes: 3143
num_examples: 11
- name: train
num_bytes: 2276
num_examples: 5
download_size: 166184960
dataset_size: 38503
- config_name: management
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 19269
num_examples: 103
- name: validation
num_bytes: 1731
num_examples: 11
- name: train
num_bytes: 851
num_examples: 5
download_size: 166184960
dataset_size: 21851
- config_name: marketing
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 61375
num_examples: 234
- name: validation
num_bytes: 7207
num_examples: 25
- name: train
num_bytes: 1434
num_examples: 5
download_size: 166184960
dataset_size: 70016
- config_name: medical_genetics
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 20152
num_examples: 100
- name: validation
num_bytes: 2916
num_examples: 11
- name: train
num_bytes: 1042
num_examples: 5
download_size: 166184960
dataset_size: 24110
- config_name: miscellaneous
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 142211
num_examples: 783
- name: validation
num_bytes: 13716
num_examples: 86
- name: train
num_bytes: 652
num_examples: 5
download_size: 166184960
dataset_size: 156579
- config_name: moral_disputes
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 105384
num_examples: 346
- name: validation
num_bytes: 12142
num_examples: 38
- name: train
num_bytes: 1708
num_examples: 5
download_size: 166184960
dataset_size: 119234
- config_name: moral_scenarios
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 367749
num_examples: 895
- name: validation
num_bytes: 41626
num_examples: 100
- name: train
num_bytes: 2011
num_examples: 5
download_size: 166184960
dataset_size: 411386
- config_name: nutrition
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 90256
num_examples: 306
- name: validation
num_bytes: 8193
num_examples: 33
- name: train
num_bytes: 2038
num_examples: 5
download_size: 166184960
dataset_size: 100487
- config_name: philosophy
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 77884
num_examples: 311
- name: validation
num_bytes: 8934
num_examples: 34
- name: train
num_bytes: 941
num_examples: 5
download_size: 166184960
dataset_size: 87759
- config_name: prehistory
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 87314
num_examples: 324
- name: validation
num_bytes: 10028
num_examples: 35
- name: train
num_bytes: 1831
num_examples: 5
download_size: 166184960
dataset_size: 99173
- config_name: professional_accounting
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 122564
num_examples: 282
- name: validation
num_bytes: 14143
num_examples: 31
- name: train
num_bytes: 2101
num_examples: 5
download_size: 166184960
dataset_size: 138808
- config_name: professional_law
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 1881012
num_examples: 1534
- name: validation
num_bytes: 202317
num_examples: 170
- name: train
num_bytes: 6563
num_examples: 5
download_size: 166184960
dataset_size: 2089892
- config_name: professional_medicine
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 215645
num_examples: 272
- name: validation
num_bytes: 23618
num_examples: 31
- name: train
num_bytes: 3760
num_examples: 5
download_size: 166184960
dataset_size: 243023
- config_name: professional_psychology
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 221603
num_examples: 612
- name: validation
num_bytes: 28606
num_examples: 69
- name: train
num_bytes: 2220
num_examples: 5
download_size: 166184960
dataset_size: 252429
- config_name: public_relations
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 27978
num_examples: 110
- name: validation
num_bytes: 4470
num_examples: 12
- name: train
num_bytes: 1449
num_examples: 5
download_size: 166184960
dataset_size: 33897
- config_name: security_studies
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 203117
num_examples: 245
- name: validation
num_bytes: 22436
num_examples: 27
- name: train
num_bytes: 5288
num_examples: 5
download_size: 166184960
dataset_size: 230841
- config_name: sociology
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 64824
num_examples: 201
- name: validation
num_bytes: 7018
num_examples: 22
- name: train
num_bytes: 1566
num_examples: 5
download_size: 166184960
dataset_size: 73408
- config_name: us_foreign_policy
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 27731
num_examples: 100
- name: validation
num_bytes: 3175
num_examples: 11
- name: train
num_bytes: 1564
num_examples: 5
download_size: 166184960
dataset_size: 32470
- config_name: virology
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 37585
num_examples: 166
- name: validation
num_bytes: 5325
num_examples: 18
- name: train
num_bytes: 1049
num_examples: 5
download_size: 166184960
dataset_size: 43959
- config_name: world_religions
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 24065
num_examples: 171
- name: validation
num_bytes: 2620
num_examples: 19
- name: train
num_bytes: 623
num_examples: 5
download_size: 166184960
dataset_size: 27308
---
# MMLU dataset
Measuring Massive Multitask Language Understanding: https://github.com/hendrycks/test
```python
task_list = [
"high_school_european_history",
"business_ethics",
"clinical_knowledge",
"medical_genetics",
"high_school_us_history",
"high_school_physics",
"high_school_world_history",
"virology",
"high_school_microeconomics",
"econometrics",
"college_computer_science",
"high_school_biology",
"abstract_algebra",
"professional_accounting",
"philosophy",
"professional_medicine",
"nutrition",
"global_facts",
"machine_learning",
"security_studies",
"public_relations",
"professional_psychology",
"prehistory",
"anatomy",
"human_sexuality",
"college_medicine",
"high_school_government_and_politics",
"college_chemistry",
"logical_fallacies",
"high_school_geography",
"elementary_mathematics",
"human_aging",
"college_mathematics",
"high_school_psychology",
"formal_logic",
"high_school_statistics",
"international_law",
"high_school_mathematics",
"high_school_computer_science",
"conceptual_physics",
"miscellaneous",
"high_school_chemistry",
"marketing",
"professional_law",
"management",
"college_physics",
"jurisprudence",
"world_religions",
"sociology",
"us_foreign_policy",
"high_school_macroeconomics",
"computer_security",
"moral_scenarios",
"moral_disputes",
"electrical_engineering",
"astronomy",
"college_biology",
]
```
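Below is a minimal, unofficial sketch of loading one subject config and rendering a question; depending on your `datasets` version, script-based datasets like this one may require `trust_remote_code=True`:

```python
from datasets import load_dataset

# Any name from `task_list` above is a valid config.
mmlu = load_dataset("lukaemon/mmlu", "abstract_algebra", split="test")

q = mmlu[0]
# Build one multiple-choice prompt; `target` holds the correct letter.
prompt = (
    f"{q['input']}\n"
    f"A. {q['A']}\nB. {q['B']}\nC. {q['C']}\nD. {q['D']}\n"
    "Answer:"
)
print(prompt)
print("Gold answer:", q["target"])
```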
```
@article{hendryckstest2021,
title={Measuring Massive Multitask Language Understanding},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)},
year={2021}
}
``` |
google/Synthetic-Persona-Chat | google | 2024-03-01T01:01:01Z | 895 | 107 | [
"task_categories:text2text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2312.10007",
"region:us"
] | [
"text2text-generation"
] | 2023-12-20T09:59:14Z | null | ---
license: cc-by-4.0
task_categories:
- text2text-generation
language:
- en
size_categories:
- 10K<n<100K
---
# Dataset Card for SPC: Synthetic-Persona-Chat Dataset
Abstract from the paper introducing this dataset:
> High-quality conversational datasets are essential for developing AI models that can communicate with users. One way to foster deeper interactions between a chatbot and its user is through personas, aspects of the user's character that provide insights into their personality, motivations, and behaviors. Training Natural Language Processing (NLP) models on a diverse and comprehensive persona-based dataset can lead to conversational models that create a deeper connection with the user, and maintain their engagement. In this paper, we leverage the power of Large Language Models (LLMs) to create a large, high-quality conversational dataset from a seed dataset. We propose a Generator-Critic architecture framework to expand the initial dataset, while improving the quality of its conversations. The Generator is an LLM prompted to output conversations. The Critic consists of a mixture of expert LLMs that control the quality of the generated conversations. These experts select the best generated conversations, which we then use to improve the Generator. We release Synthetic-Persona-Chat, consisting of 20k conversations seeded from Persona-Chat. We evaluate the quality of Synthetic-Persona-Chat and our generation framework on different dimensions through extensive experiments, and observe that the losing rate of Synthetic-Persona-Chat against Persona-Chat during Turing test decreases from 17.2% to 8.8% over three iterations.
## Dataset Details
### Dataset Description
> We introduce the Synthetic-Persona-Chat dataset, a persona-based conversational dataset, consisting of two parts. The first part, consisting of 4,723 personas and 10,906 conversations, is an extension to Persona-Chat, which has the same user profile pairs as Persona-Chat but new synthetic conversations, with the same train/validation/test split as Persona-Chat. The second part is new synthetic personas and synthetic conversations based on that, consisting of 5,648 synthetic personas and 11,001 conversations. Synthetic-Persona-Chat is created using the Generator-Critic framework introduced in Faithful Persona-based Conversational Dataset Generation with Large Language Models.
Each conversation in the dataset has the following format:
```
{
"User 1 Persona":[],
"User 2 Persona":[],
"Conversation":[]
}
```
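As an unofficial sketch, the records can be loaded with the `datasets` library; the exact on-disk CSV column names are not documented here, so inspect them before indexing (the field names in the format above are used as a guide only):

```python
from datasets import load_dataset

spc = load_dataset("google/Synthetic-Persona-Chat", split="train")

print(spc.column_names)  # check the actual column names first

row = spc[0]
# Assuming columns mirroring the record format shown above:
# personas = row["User 1 Persona"]
# conversation = row["Conversation"]
print(row)
```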
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/google-research-datasets/Synthetic-Persona-Chat/tree/main
- **Paper:** https://arxiv.org/abs/2312.10007
## Citation
**BibTeX:**
```
@misc{jandaghi2023faithful,
title={Faithful Persona-based Conversational Dataset Generation with Large Language Models},
author={Pegah Jandaghi and XiangHai Sheng and Xinyi Bai and Jay Pujara and Hakim Sidahmed},
year={2023},
eprint={2312.10007},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
heliosbrahma/mental_health_chatbot_dataset | heliosbrahma | 2024-02-29T18:40:22Z | 1,018 | 86 | [
"task_categories:text-generation",
"language:en",
"license:mit",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"medical"
] | [
"text-generation"
] | 2023-08-02T09:36:25Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_examples: 172
license: mit
task_categories:
- text-generation
language:
- en
tags:
- medical
pretty_name: Mental Health Chatbot Dataset
size_categories:
- n<1K
---
# Dataset Card for "heliosbrahma/mental_health_chatbot_dataset"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
## Dataset Description
### Dataset Summary
This dataset contains conversational pairs of questions and answers, each stored in a single text field, related to mental health. The dataset was curated from popular healthcare blogs such as WebMD, Mayo Clinic and Healthline, as well as online FAQs. All questions and answers have been anonymized to remove any PII data and pre-processed to remove any unwanted characters.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
A data instance includes a single `text` column containing a conversational pair of questions and answers. Questions were asked by patients and answers were given by healthcare providers.
### Data Fields
- `text`: a conversational question–answer pair between a patient and a healthcare provider.
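For illustration, a minimal sketch (assuming the `datasets` library is installed) of loading the corpus and reading the single `text` field:

```python
from datasets import load_dataset

chats = load_dataset("heliosbrahma/mental_health_chatbot_dataset", split="train")

# Each row stores one question-answer exchange as a single string.
print(chats[0]["text"])
print(f"{chats.num_rows} conversations in total")  # 172 per this card
```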
## Dataset Creation
### Curation Rationale
Chatbots offer a readily available and accessible platform for individuals seeking support. They can be accessed anytime and anywhere, providing immediate assistance to those in need. Chatbots can offer empathetic and non-judgmental responses, providing emotional support to users. While they cannot replace human interaction entirely, they can be a helpful supplement, especially in moments of distress.
Hence, this dataset was curated to help fine-tune a conversational AI bot on custom data, which can then be deployed and offered to patients as a chatbot.
### Source Data
This dataset was curated from popular healthcare blogs such as WebMD, Mayo Clinic and Healthline, as well as online FAQs.
### Personal and Sensitive Information
The dataset may contain sensitive information related to mental health. All questions and answers have been anonymized to remove any PII data. |
Helsinki-NLP/multiun | Helsinki-NLP | 2024-02-27T16:59:52Z | 2,893 | 12 | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:ar",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:ru",
"language:zh",
"license:unknown",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"translation"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- found
language_creators:
- found
language:
- ar
- de
- en
- es
- fr
- ru
- zh
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: multiun
pretty_name: MultiUN (Multilingual Corpus from United Nation Documents)
config_names:
- ar-de
- ar-en
- ar-es
- ar-fr
- ar-ru
- ar-zh
- de-en
- de-es
- de-fr
- de-ru
- de-zh
- en-es
- en-fr
- en-ru
- en-zh
- es-fr
- es-ru
- es-zh
- fr-ru
- fr-zh
- ru-zh
dataset_info:
- config_name: ar-de
features:
- name: translation
dtype:
translation:
languages:
- ar
- de
splits:
- name: train
num_bytes: 94466261
num_examples: 165090
download_size: 41124373
dataset_size: 94466261
- config_name: ar-en
features:
- name: translation
dtype:
translation:
languages:
- ar
- en
splits:
- name: train
num_bytes: 4189844561
num_examples: 9759125
download_size: 1926776740
dataset_size: 4189844561
- config_name: ar-es
features:
- name: translation
dtype:
translation:
languages:
- ar
- es
splits:
- name: train
num_bytes: 4509667188
num_examples: 10119379
download_size: 2069474168
dataset_size: 4509667188
- config_name: ar-fr
features:
- name: translation
dtype:
translation:
languages:
- ar
- fr
splits:
- name: train
num_bytes: 4516842065
num_examples: 9929567
download_size: 2083442998
dataset_size: 4516842065
- config_name: ar-ru
features:
- name: translation
dtype:
translation:
languages:
- ar
- ru
splits:
- name: train
num_bytes: 5932858699
num_examples: 10206243
download_size: 2544128334
dataset_size: 5932858699
- config_name: ar-zh
features:
- name: translation
dtype:
translation:
languages:
- ar
- zh
splits:
- name: train
num_bytes: 3781650541
num_examples: 9832293
download_size: 1829880809
dataset_size: 3781650541
- config_name: de-en
features:
- name: translation
dtype:
translation:
languages:
- de
- en
splits:
- name: train
num_bytes: 76684413
num_examples: 162981
download_size: 35105094
dataset_size: 76684413
- config_name: de-es
features:
- name: translation
dtype:
translation:
languages:
- de
- es
splits:
- name: train
num_bytes: 80936517
num_examples: 162078
download_size: 37042740
dataset_size: 80936517
- config_name: de-fr
features:
- name: translation
dtype:
translation:
languages:
- de
- fr
splits:
- name: train
num_bytes: 81888299
num_examples: 164025
download_size: 37827000
dataset_size: 81888299
- config_name: de-ru
features:
- name: translation
dtype:
translation:
languages:
- de
- ru
splits:
- name: train
num_bytes: 111517798
num_examples: 164792
download_size: 46723695
dataset_size: 111517798
- config_name: de-zh
features:
- name: translation
dtype:
translation:
languages:
- de
- zh
splits:
- name: train
num_bytes: 70534674
num_examples: 176933
download_size: 34964647
dataset_size: 70534674
- config_name: en-es
features:
- name: translation
dtype:
translation:
languages:
- en
- es
splits:
- name: train
num_bytes: 4128132575
num_examples: 11350967
download_size: 2030826335
dataset_size: 4128132575
- config_name: en-fr
features:
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 4678044616
num_examples: 13172019
download_size: 2312275443
dataset_size: 4678044616
- config_name: en-ru
features:
- name: translation
dtype:
translation:
languages:
- en
- ru
splits:
- name: train
num_bytes: 5632653511
num_examples: 11654416
download_size: 2523567444
dataset_size: 5632653511
- config_name: en-zh
features:
- name: translation
dtype:
translation:
languages:
- en
- zh
splits:
- name: train
num_bytes: 2960368390
num_examples: 9564315
download_size: 1557547095
dataset_size: 2960368390
- config_name: es-fr
features:
- name: translation
dtype:
translation:
languages:
- es
- fr
splits:
- name: train
num_bytes: 4454703338
num_examples: 11441889
download_size: 2187539838
dataset_size: 4454703338
- config_name: es-ru
features:
- name: translation
dtype:
translation:
languages:
- es
- ru
splits:
- name: train
num_bytes: 5442647242
num_examples: 10605056
download_size: 2432480744
dataset_size: 5442647242
- config_name: es-zh
features:
- name: translation
dtype:
translation:
languages:
- es
- zh
splits:
- name: train
num_bytes: 3223863318
num_examples: 9847770
download_size: 1676774308
dataset_size: 3223863318
- config_name: fr-ru
features:
- name: translation
dtype:
translation:
languages:
- fr
- ru
splits:
- name: train
num_bytes: 5979869673
num_examples: 11761738
download_size: 2690520032
dataset_size: 5979869673
- config_name: fr-zh
features:
- name: translation
dtype:
translation:
languages:
- fr
- zh
splits:
- name: train
num_bytes: 3241090573
num_examples: 9690914
download_size: 1693120344
dataset_size: 3241090573
- config_name: ru-zh
features:
- name: translation
dtype:
translation:
languages:
- ru
- zh
splits:
- name: train
num_bytes: 4233867889
num_examples: 9557007
download_size: 1984600328
dataset_size: 4233867889
configs:
- config_name: ar-de
data_files:
- split: train
path: ar-de/train-*
- config_name: ar-en
data_files:
- split: train
path: ar-en/train-*
- config_name: ar-es
data_files:
- split: train
path: ar-es/train-*
- config_name: ar-fr
data_files:
- split: train
path: ar-fr/train-*
- config_name: ar-ru
data_files:
- split: train
path: ar-ru/train-*
- config_name: ar-zh
data_files:
- split: train
path: ar-zh/train-*
- config_name: de-en
data_files:
- split: train
path: de-en/train-*
- config_name: de-es
data_files:
- split: train
path: de-es/train-*
- config_name: de-fr
data_files:
- split: train
path: de-fr/train-*
- config_name: de-ru
data_files:
- split: train
path: de-ru/train-*
- config_name: de-zh
data_files:
- split: train
path: de-zh/train-*
- config_name: en-es
data_files:
- split: train
path: en-es/train-*
- config_name: en-fr
data_files:
- split: train
path: en-fr/train-*
- config_name: en-ru
data_files:
- split: train
path: en-ru/train-*
- config_name: en-zh
data_files:
- split: train
path: en-zh/train-*
- config_name: es-fr
data_files:
- split: train
path: es-fr/train-*
- config_name: es-ru
data_files:
- split: train
path: es-ru/train-*
- config_name: es-zh
data_files:
- split: train
path: es-zh/train-*
- config_name: fr-ru
data_files:
- split: train
path: fr-ru/train-*
- config_name: fr-zh
data_files:
- split: train
path: fr-zh/train-*
- config_name: ru-zh
data_files:
- split: train
path: ru-zh/train-*
---
# Dataset Card for OPUS MultiUN
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://opus.nlpl.eu/MultiUN/corpus/version/MultiUN
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** https://aclanthology.org/L10-1473/
- **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
The MultiUN parallel corpus is extracted from the United Nations website, then cleaned and converted to XML at the Language Technology Lab of DFKI GmbH (LT-DFKI), Germany. The documents were published by the UN from 2000 to 2009.
This is a collection of translated documents from the United Nations originally compiled by Andreas Eisele and Yu Chen (see http://www.euromatrixplus.net/multi-un/).
This corpus is available in all six official languages of the UN, consisting of around 300 million words per language.
### Supported Tasks and Leaderboards
The underlying task is machine translation.
### Languages
Parallel texts are present in all six official languages: Arabic (`ar`), Chinese (`zh`), English (`en`), French (`fr`),
Russian (`ru`) and Spanish (`es`), with a small part of the documents available also in German (`de`).
## Dataset Structure
### Data Instances
```
{
"translation": {
"ar": "قرار اتخذته الجمعية العامة",
"de": "Resolution der Generalversammlung"
}
}
```
### Data Fields
- `translation` (`dict`): Parallel sentences for the pair of languages.
### Data Splits
The dataset contains a single "train" split for each language pair.
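A minimal, unofficial sketch of loading one language pair (the `ar-de` config is by far the smallest; the larger pairs run to several gigabytes):

```python
from datasets import load_dataset

# Any pair from `config_names` above works, e.g. "en-fr" or "ru-zh".
multiun = load_dataset("Helsinki-NLP/multiun", "ar-de", split="train")

pair = multiun[0]["translation"]
print(pair["ar"])  # Arabic side
print(pair["de"])  # German side
```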
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Original MultiUN source data: http://www.euromatrixplus.net/multi-un/
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
If you use this corpus in your work, please cite the paper:
```
@inproceedings{eisele-chen-2010-multiun,
title = "{M}ulti{UN}: A Multilingual Corpus from United Nation Documents",
author = "Eisele, Andreas and
Chen, Yu",
booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}'10)",
month = may,
year = "2010",
address = "Valletta, Malta",
publisher = "European Language Resources Association (ELRA)",
url = "http://www.lrec-conf.org/proceedings/lrec2010/pdf/686_Paper.pdf",
abstract = "This paper describes the acquisition, preparation and properties of a corpus extracted from the official documents of the United Nations (UN). This corpus is available in all 6 official languages of the UN, consisting of around 300 million words per language. We describe the methods we used for crawling, document formatting, and sentence alignment. This corpus also includes a common test set for machine translation. We present the results of a French-Chinese machine translation experiment performed on this corpus.",
}
```
If you use any part of the corpus (hosted in OPUS) in your own work, please cite the following article:
```
@inproceedings{tiedemann-2012-parallel,
title = "Parallel Data, Tools and Interfaces in {OPUS}",
author = {Tiedemann, J{\"o}rg},
editor = "Calzolari, Nicoletta and
Choukri, Khalid and
Declerck, Thierry and
Do{\u{g}}an, Mehmet U{\u{g}}ur and
Maegaard, Bente and
Mariani, Joseph and
Moreno, Asuncion and
Odijk, Jan and
Piperidis, Stelios",
booktitle = "Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}'12)",
month = may,
year = "2012",
address = "Istanbul, Turkey",
publisher = "European Language Resources Association (ELRA)",
url = "http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf",
pages = "2214--2218",
abstract = "This paper presents the current status of OPUS, a growing language resource of parallel corpora and related tools. The focus in OPUS is to provide freely available data sets in various formats together with basic annotation to be useful for applications in computational linguistics, translation studies and cross-linguistic corpus studies. In this paper, we report about new data sets and their features, additional annotation tools and models provided from the website and essential interfaces and on-line services included in the project.",
}
```
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. |
zeroshot/twitter-financial-news-sentiment | zeroshot | 2024-02-23T19:04:10Z | 5,546 | 142 | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"twitter",
"finance",
"markets",
"stocks",
"wallstreet",
"quant",
"hedgefunds"
] | [
"text-classification"
] | 2022-09-01T21:21:56Z | null | ---
annotations_creators:
- other
language:
- en
language_creators:
- other
license:
- mit
multilinguality:
- monolingual
pretty_name: twitter financial news
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- twitter
- finance
- markets
- stocks
- wallstreet
- quant
- hedgefunds
- markets
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
### Dataset Description
The Twitter Financial News dataset is an English-language, annotated corpus of finance-related tweets. It is used to classify the sentiment of finance-related tweets.
The dataset holds 11,932 documents annotated with 3 labels:
```python
sentiments = {
"LABEL_0": "Bearish",
"LABEL_1": "Bullish",
"LABEL_2": "Neutral"
}
```
The data was collected using the Twitter API. The current dataset supports the multi-class classification task.
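As a sketch of how the labels might be applied in practice — the `text` and `label` column names are assumptions inferred from the CSV format; only the dataset id comes from this card:
```python
# A minimal usage sketch; the "text" and "label" column names are assumptions.
from datasets import load_dataset

sentiments = {0: "Bearish", 1: "Bullish", 2: "Neutral"}

ds = load_dataset("zeroshot/twitter-financial-news-sentiment", split="train")

row = ds[0]
print(row["text"], "->", sentiments[row["label"]])
```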
### Task: Sentiment Analysis
### Data Splits
There are 2 splits: train and validation. Below are the statistics:
| Dataset Split | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 9,938 |
| Validation | 2,486 |
### Licensing Information
The Twitter Financial Dataset (sentiment) version 1.0.0 is released under the MIT License. |
shibing624/sharegpt_gpt4 | shibing624 | 2024-02-23T05:38:24Z | 685 | 120 | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_ids:text-scoring",
"annotations_creators:shibing624",
"language_creators:shibing624",
"multilinguality:monolingual",
"source_datasets:https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset/tree/main",
"language:zh",
"language:en",
"language:gl",
"language:ko",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"text-generation"
] | 2023-07-27T05:45:49Z | null | ---
annotations_creators:
- shibing624
language_creators:
- shibing624
language:
- zh
- en
- gl
- ko
license: cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset/tree/main
task_categories:
- text-classification
- text-generation
task_ids:
- text-scoring
paperswithcode_id: LLM
pretty_name: GPT4 sharegpt
---
# Dataset Card
## Dataset Description
- **Repository:** [shibing624/MedicalGPT](https://github.com/shibing624/MedicalGPT)
- **Dataset:** [shibing624/sharegpt_gpt4](https://huggingface.co/datasets/shibing624/sharegpt_gpt4)
- **Size of downloaded dataset files:** 772 MB
- **Total amount of disk used:** 772 MB
### Dataset Summary
Multi-turn GPT-4 question-answer data selected from ShareGPT; multilingual Q&A.
### Languages
The dataset is multilingual, including commonly used languages such as Chinese, English, and Japanese.
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
- `conversations`: a list of dicts, each holding a `from` role (`human` or `gpt`) and a `value` string (see the example below).
```shell
head -n 1 sharegpt_gpt4.jsonl
{"conversations":[
{'from': 'human',
'value': '採用優雅現代中文,用中文繁體字型,回答以下問題。為所有標題或專用字詞提供對應的英語翻譯:Using scholarly style, summarize in detail James Barr\'s book "Semantics of Biblical Language". Provide examples supplied in the book and give corresponding page numbers wherever applicable.'},
{'from': 'gpt',
'value': '《聖經語言的語義學》(Semantics of Biblical Language)是詹姆斯·巴爾(James Barr)於1961年出版的一本研究聖經語言的專著。在這部作品中,巴爾對聖經希伯來語和希臘語的語言學特徵進行了深入的探討,他批判了當時盛行的聖經研究方法,主張應該將語言學原理應用於聖經文本的詮釋,以便更準確地理解聖經的意思。\n\n巴爾在《聖經語言的語義學》中提出了幾個重要的觀點。首先,他主張語言學和語言學家在聖經研究中的角色應該受到重視。他認為,應該避免將聖經語言學和其他古典語言學區分開來,因為聖經語言和其他古代語言一樣,都有其固有的語法、語義和詞彙特點(第18-23頁)。\n\n其次,巴爾批評了當時流行的"字根意義論"(root-meaning fallacy),即過分強調單詞的字根在詮釋聖經含義時的作用。他指出,詞義是由語境決定的,而非僅僅由字根決定。巴爾舉了希伯來語單詞 "כפר"(kāpar)為例,它在不同的語境下可以表示"遮蓋"、"贖回"或"和解",而不是僅僅依賴其字根意義(第183-184頁)。\n\n第三,巴爾強調了詞彙意義在詮釋聖經時的重要性。他認為,了解單詞的意義和用法對於恰當理解聖經文本至關重要。巴爾引用了希伯來語單詞 "ברית"(berîṯ,英文:covenant)和希臘語單詞 "διαθήκη"(diathēkē,英文:testament)為例,解釋了這兩個詞在聖經中的不'},
{'from': 'human', 'value': '繼續'},
{'from': 'gpt',
'value': '同含義和用法。巴爾認為,了解這些詞語在不同語境下的意義對於理解聖經中的盟約和遺囑概念至關重要(第50-53頁)。\n\n第四,巴爾強調語境在詮釋聖經時的重要性。他認為,要理解聖經的含義,就必須考慮到文本的歷史背景、文化環境和語言風格。此外,巴爾提倡從語言學的角度研究聖經,以便更好地理解聖經的結構和風格。他以希伯來詩篇的平行主義(Parallelism)為例,闡述了語言學原理如何有助於揭示詩篇中的意象和修辭結構(第250-252頁)。\n\n總之,《聖經語言的語義學》一書對聖經研究領域產生了深遠的影響。詹姆斯·巴爾通過對聖經語言學的深入研究,提出了許多富有洞察力的觀點,尤其是關於語言學在聖經詮釋中的重要性。這些觀點對後來的聖經研究者產生了很大的啟示,使他們在研究聖經時更加重視語言學原理,從而更加準確地理解聖經的含義。'}
]
}
```
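A minimal sketch for iterating over these records locally, assuming the `sharegpt_gpt4.jsonl` file from the split listing below has been downloaded:
```python
# A minimal sketch: read the JSONL file line by line and walk each
# conversation's turns; "from" is the role ("human"/"gpt"), "value" the text.
import json

with open("sharegpt_gpt4.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        for turn in record["conversations"]:
            print(f'{turn["from"]}: {turn["value"][:80]}')
        break  # show only the first record
```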
### Data Splits
```shell
> wc -l *
  6206 sharegpt_gpt4.jsonl            # cleaned, high-quality GPT-4 daily Q&A data, ~6K samples: mainly knowledge Q&A, programming problems, and reasoning/calculation; multi-turn dialogues in Simplified Chinese, Traditional Chinese, English, Japanese, Korean, and other languages.
 58674 sharegpt_V3_format.jsonl       # the original ShareGPT V3 data after format normalization, ~58K samples: mainly everyday Q&A with colloquial prompts; multilingual, multi-turn dialogues.
 38535 sharegpt_zh_38K_format.jsonl   # Chinese GPT-4 daily Q&A data, ~38K samples: mainly knowledge Q&A, translation tasks, requests for help, and programming/reasoning tasks with colloquial prompts; Chinese, multi-turn dialogues.
103415 total
```
#### Who are the annotators?
The original authors.
### Licensing Information
Same as ShareGPT.
### Contributions
[shibing624](https://github.com/shibing624) added this dataset. |
google/xtreme | google | 2024-02-22T17:12:06Z | 12,320 | 104 | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_categories:token-classification",
"task_categories:text-classification",
"task_categories:text-retrieval",
"task_ids:multiple-choice-qa",
"task_ids:extractive-qa",
"task_ids:open-domain-qa",
"task_ids:natural-language-inference",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"multilinguality:translation",
"source_datasets:extended|xnli",
"source_datasets:extended|paws-x",
"source_datasets:extended|wikiann",
"source_datasets:extended|xquad",
"source_datasets:extended|mlqa",
"source_datasets:extended|tydiqa",
"source_datasets:extended|tatoeba",
"source_datasets:extended|squad",
"language:af",
"language:ar",
"language:bg",
"language:bn",
"language:de",
"language:el",
"language:en",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:he",
"language:hi",
"language:hu",
"language:id",
"language:it",
"language:ja",
"language:jv",
"language:ka",
"language:kk",
"language:ko",
"language:ml",
"language:mr",
"language:ms",
"language:my",
"language:nl",
"language:pt",
"language:ru",
"language:sw",
"language:ta",
"language:te",
"language:th",
"language:tl",
"language:tr",
"language:ur",
"language:vi",
"language:yo",
"language:zh",
"license:apache-2.0",
"license:cc-by-4.0",
"license:cc-by-2.0",
"license:cc-by-sa-4.0",
"license:other",
"license:cc-by-nc-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2003.11080",
"region:us",
"parallel-sentence-retrieval",
"paraphrase-identification"
] | [
"multiple-choice",
"question-answering",
"token-classification",
"text-classification",
"text-retrieval",
"token-classification"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- found
language_creators:
- found
language:
- af
- ar
- bg
- bn
- de
- el
- en
- es
- et
- eu
- fa
- fi
- fr
- he
- hi
- hu
- id
- it
- ja
- jv
- ka
- kk
- ko
- ml
- mr
- ms
- my
- nl
- pt
- ru
- sw
- ta
- te
- th
- tl
- tr
- ur
- vi
- yo
- zh
license:
- apache-2.0
- cc-by-4.0
- cc-by-2.0
- cc-by-sa-4.0
- other
- cc-by-nc-4.0
multilinguality:
- multilingual
- translation
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
source_datasets:
- extended|xnli
- extended|paws-x
- extended|wikiann
- extended|xquad
- extended|mlqa
- extended|tydiqa
- extended|tatoeba
- extended|squad
task_categories:
- multiple-choice
- question-answering
- token-classification
- text-classification
- text-retrieval
task_ids:
- multiple-choice-qa
- extractive-qa
- open-domain-qa
- natural-language-inference
- named-entity-recognition
- part-of-speech
paperswithcode_id: xtreme
pretty_name: XTREME
config_names:
- MLQA.ar.ar
- MLQA.ar.de
- MLQA.ar.en
- MLQA.ar.es
- MLQA.ar.hi
- MLQA.ar.vi
- MLQA.ar.zh
- MLQA.de.ar
- MLQA.de.de
- MLQA.de.en
- MLQA.de.es
- MLQA.de.hi
- MLQA.de.vi
- MLQA.de.zh
- MLQA.en.ar
- MLQA.en.de
- MLQA.en.en
- MLQA.en.es
- MLQA.en.hi
- MLQA.en.vi
- MLQA.en.zh
- MLQA.es.ar
- MLQA.es.de
- MLQA.es.en
- MLQA.es.es
- MLQA.es.hi
- MLQA.es.vi
- MLQA.es.zh
- MLQA.hi.ar
- MLQA.hi.de
- MLQA.hi.en
- MLQA.hi.es
- MLQA.hi.hi
- MLQA.hi.vi
- MLQA.hi.zh
- MLQA.vi.ar
- MLQA.vi.de
- MLQA.vi.en
- MLQA.vi.es
- MLQA.vi.hi
- MLQA.vi.vi
- MLQA.vi.zh
- MLQA.zh.ar
- MLQA.zh.de
- MLQA.zh.en
- MLQA.zh.es
- MLQA.zh.hi
- MLQA.zh.vi
- MLQA.zh.zh
- PAN-X.af
- PAN-X.ar
- PAN-X.bg
- PAN-X.bn
- PAN-X.de
- PAN-X.el
- PAN-X.en
- PAN-X.es
- PAN-X.et
- PAN-X.eu
- PAN-X.fa
- PAN-X.fi
- PAN-X.fr
- PAN-X.he
- PAN-X.hi
- PAN-X.hu
- PAN-X.id
- PAN-X.it
- PAN-X.ja
- PAN-X.jv
- PAN-X.ka
- PAN-X.kk
- PAN-X.ko
- PAN-X.ml
- PAN-X.mr
- PAN-X.ms
- PAN-X.my
- PAN-X.nl
- PAN-X.pt
- PAN-X.ru
- PAN-X.sw
- PAN-X.ta
- PAN-X.te
- PAN-X.th
- PAN-X.tl
- PAN-X.tr
- PAN-X.ur
- PAN-X.vi
- PAN-X.yo
- PAN-X.zh
- PAWS-X.de
- PAWS-X.en
- PAWS-X.es
- PAWS-X.fr
- PAWS-X.ja
- PAWS-X.ko
- PAWS-X.zh
- SQuAD
- XNLI
- XQuAD
- bucc18.de
- bucc18.fr
- bucc18.ru
- bucc18.zh
- tatoeba.afr
- tatoeba.ara
- tatoeba.ben
- tatoeba.bul
- tatoeba.cmn
- tatoeba.deu
- tatoeba.ell
- tatoeba.est
- tatoeba.eus
- tatoeba.fin
- tatoeba.fra
- tatoeba.heb
- tatoeba.hin
- tatoeba.hun
- tatoeba.ind
- tatoeba.ita
- tatoeba.jav
- tatoeba.jpn
- tatoeba.kat
- tatoeba.kaz
- tatoeba.kor
- tatoeba.mal
- tatoeba.mar
- tatoeba.nld
- tatoeba.pes
- tatoeba.por
- tatoeba.rus
- tatoeba.spa
- tatoeba.swh
- tatoeba.tam
- tatoeba.tel
- tatoeba.tgl
- tatoeba.tha
- tatoeba.tur
- tatoeba.urd
- tatoeba.vie
- tydiqa
- udpos.Afrikans
- udpos.Arabic
- udpos.Basque
- udpos.Bulgarian
- udpos.Chinese
- udpos.Dutch
- udpos.English
- udpos.Estonian
- udpos.Finnish
- udpos.French
- udpos.German
- udpos.Greek
- udpos.Hebrew
- udpos.Hindi
- udpos.Hungarian
- udpos.Indonesian
- udpos.Italian
- udpos.Japanese
- udpos.Kazakh
- udpos.Korean
- udpos.Marathi
- udpos.Persian
- udpos.Portuguese
- udpos.Russian
- udpos.Spanish
- udpos.Tagalog
- udpos.Tamil
- udpos.Telugu
- udpos.Thai
- udpos.Turkish
- udpos.Urdu
- udpos.Vietnamese
- udpos.Yoruba
language_bcp47:
- fa-IR
license_details: Licence Universal Dependencies v2.5
tags:
- parallel-sentence-retrieval
- paraphrase-identification
dataset_info:
- config_name: MLQA.ar.ar
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 8368086
num_examples: 5335
- name: validation
num_bytes: 824080
num_examples: 517
download_size: 4048180
dataset_size: 9192166
- config_name: MLQA.ar.de
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 2183914
num_examples: 1649
- name: validation
num_bytes: 364809
num_examples: 207
download_size: 1192825
dataset_size: 2548723
- config_name: MLQA.ar.en
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 8225634
num_examples: 5335
- name: validation
num_bytes: 810061
num_examples: 517
download_size: 3998008
dataset_size: 9035695
- config_name: MLQA.ar.es
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 3041350
num_examples: 1978
- name: validation
num_bytes: 228152
num_examples: 161
download_size: 1531661
dataset_size: 3269502
- config_name: MLQA.ar.hi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 3039368
num_examples: 1831
- name: validation
num_bytes: 281742
num_examples: 186
download_size: 1369756
dataset_size: 3321110
- config_name: MLQA.ar.vi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 3290601
num_examples: 2047
- name: validation
num_bytes: 288418
num_examples: 163
download_size: 1667238
dataset_size: 3579019
- config_name: MLQA.ar.zh
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 3229844
num_examples: 1912
- name: validation
num_bytes: 340021
num_examples: 188
download_size: 1591445
dataset_size: 3569865
- config_name: MLQA.de.ar
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1619978
num_examples: 1649
- name: validation
num_bytes: 200146
num_examples: 207
download_size: 1044483
dataset_size: 1820124
- config_name: MLQA.de.de
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 4366074
num_examples: 4517
- name: validation
num_bytes: 488339
num_examples: 512
download_size: 2798050
dataset_size: 4854413
- config_name: MLQA.de.en
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 4343116
num_examples: 4517
- name: validation
num_bytes: 485866
num_examples: 512
download_size: 2778346
dataset_size: 4828982
- config_name: MLQA.de.es
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1716587
num_examples: 1776
- name: validation
num_bytes: 170554
num_examples: 196
download_size: 1118751
dataset_size: 1887141
- config_name: MLQA.de.hi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1371046
num_examples: 1430
- name: validation
num_bytes: 153843
num_examples: 163
download_size: 880652
dataset_size: 1524889
- config_name: MLQA.de.vi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1688455
num_examples: 1675
- name: validation
num_bytes: 216047
num_examples: 182
download_size: 1108163
dataset_size: 1904502
- config_name: MLQA.de.zh
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1679152
num_examples: 1621
- name: validation
num_bytes: 184290
num_examples: 190
download_size: 1045861
dataset_size: 1863442
- config_name: MLQA.en.ar
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 6739191
num_examples: 5335
- name: validation
num_bytes: 630815
num_examples: 517
download_size: 3939135
dataset_size: 7370006
- config_name: MLQA.en.de
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 5056694
num_examples: 4517
- name: validation
num_bytes: 594908
num_examples: 512
download_size: 3223196
dataset_size: 5651602
- config_name: MLQA.en.en
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 14004592
num_examples: 11590
- name: validation
num_bytes: 1329084
num_examples: 1148
download_size: 8217519
dataset_size: 15333676
- config_name: MLQA.en.es
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 6179221
num_examples: 5253
- name: validation
num_bytes: 555434
num_examples: 500
download_size: 3776828
dataset_size: 6734655
- config_name: MLQA.en.hi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 6378838
num_examples: 4918
- name: validation
num_bytes: 623143
num_examples: 507
download_size: 3517340
dataset_size: 7001981
- config_name: MLQA.en.vi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 7056670
num_examples: 5495
- name: validation
num_bytes: 640618
num_examples: 511
download_size: 4170642
dataset_size: 7697288
- config_name: MLQA.en.zh
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 6539279
num_examples: 5137
- name: validation
num_bytes: 608416
num_examples: 504
download_size: 3929122
dataset_size: 7147695
- config_name: MLQA.es.ar
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1740254
num_examples: 1978
- name: validation
num_bytes: 148621
num_examples: 161
download_size: 1107435
dataset_size: 1888875
- config_name: MLQA.es.de
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1403997
num_examples: 1776
- name: validation
num_bytes: 144158
num_examples: 196
download_size: 950448
dataset_size: 1548155
- config_name: MLQA.es.en
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 4362709
num_examples: 5253
- name: validation
num_bytes: 419040
num_examples: 500
download_size: 2842879
dataset_size: 4781749
- config_name: MLQA.es.es
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 4394305
num_examples: 5253
- name: validation
num_bytes: 422043
num_examples: 500
download_size: 2856931
dataset_size: 4816348
- config_name: MLQA.es.hi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1523495
num_examples: 1723
- name: validation
num_bytes: 181806
num_examples: 187
download_size: 954018
dataset_size: 1705301
- config_name: MLQA.es.vi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1747941
num_examples: 2018
- name: validation
num_bytes: 176813
num_examples: 189
download_size: 1187949
dataset_size: 1924754
- config_name: MLQA.es.zh
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1678423
num_examples: 1947
- name: validation
num_bytes: 126618
num_examples: 161
download_size: 1100765
dataset_size: 1805041
- config_name: MLQA.hi.ar
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 4445561
num_examples: 1831
- name: validation
num_bytes: 410396
num_examples: 186
download_size: 1542768
dataset_size: 4855957
- config_name: MLQA.hi.de
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 3022836
num_examples: 1430
- name: validation
num_bytes: 301685
num_examples: 163
download_size: 1257846
dataset_size: 3324521
- config_name: MLQA.hi.en
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 11449233
num_examples: 4918
- name: validation
num_bytes: 1097829
num_examples: 507
download_size: 4131083
dataset_size: 12547062
- config_name: MLQA.hi.es
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 3862201
num_examples: 1723
- name: validation
num_bytes: 420374
num_examples: 187
download_size: 1493468
dataset_size: 4282575
- config_name: MLQA.hi.hi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 11810447
num_examples: 4918
- name: validation
num_bytes: 1136756
num_examples: 507
download_size: 4235981
dataset_size: 12947203
- config_name: MLQA.hi.vi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 4743456
num_examples: 1947
- name: validation
num_bytes: 419078
num_examples: 177
download_size: 1704964
dataset_size: 5162534
- config_name: MLQA.hi.zh
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 4354847
num_examples: 1767
- name: validation
num_bytes: 424218
num_examples: 189
download_size: 1627107
dataset_size: 4779065
- config_name: MLQA.vi.ar
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 3205157
num_examples: 2047
- name: validation
num_bytes: 230307
num_examples: 163
download_size: 1656661
dataset_size: 3435464
- config_name: MLQA.vi.de
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 2227005
num_examples: 1675
- name: validation
num_bytes: 277157
num_examples: 182
download_size: 1268041
dataset_size: 2504162
- config_name: MLQA.vi.en
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 7843403
num_examples: 5495
- name: validation
num_bytes: 719245
num_examples: 511
download_size: 4071703
dataset_size: 8562648
- config_name: MLQA.vi.es
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 2866569
num_examples: 2018
- name: validation
num_bytes: 283433
num_examples: 189
download_size: 1607926
dataset_size: 3150002
- config_name: MLQA.vi.hi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 2776636
num_examples: 1947
- name: validation
num_bytes: 254979
num_examples: 177
download_size: 1366057
dataset_size: 3031615
- config_name: MLQA.vi.vi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 7922057
num_examples: 5495
- name: validation
num_bytes: 726490
num_examples: 511
download_size: 4105388
dataset_size: 8648547
- config_name: MLQA.vi.zh
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 2989632
num_examples: 1943
- name: validation
num_bytes: 269361
num_examples: 184
download_size: 1570393
dataset_size: 3258993
- config_name: MLQA.zh.ar
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1731455
num_examples: 1912
- name: validation
num_bytes: 175321
num_examples: 188
download_size: 1223863
dataset_size: 1906776
- config_name: MLQA.zh.de
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1389990
num_examples: 1621
- name: validation
num_bytes: 174577
num_examples: 190
download_size: 1006829
dataset_size: 1564567
- config_name: MLQA.zh.en
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 4450957
num_examples: 5137
- name: validation
num_bytes: 446840
num_examples: 504
download_size: 3108433
dataset_size: 4897797
- config_name: MLQA.zh.es
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1736255
num_examples: 1947
- name: validation
num_bytes: 138045
num_examples: 161
download_size: 1223467
dataset_size: 1874300
- config_name: MLQA.zh.hi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1578191
num_examples: 1767
- name: validation
num_bytes: 184373
num_examples: 189
download_size: 1044599
dataset_size: 1762564
- config_name: MLQA.zh.vi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1806158
num_examples: 1943
- name: validation
num_bytes: 172906
num_examples: 184
download_size: 1268213
dataset_size: 1979064
- config_name: MLQA.zh.zh
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 4422322
num_examples: 5137
- name: validation
num_bytes: 443782
num_examples: 504
download_size: 3105362
dataset_size: 4866104
- config_name: PAN-X.af
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 1321376
num_examples: 5000
- name: validation
num_bytes: 259689
num_examples: 1000
- name: test
num_bytes: 257184
num_examples: 1000
download_size: 389015
dataset_size: 1838249
- config_name: PAN-X.ar
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 3634096
num_examples: 20000
- name: validation
num_bytes: 1808283
num_examples: 10000
- name: test
num_bytes: 1811963
num_examples: 10000
download_size: 1567470
dataset_size: 7254342
- config_name: PAN-X.bg
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 4600733
num_examples: 20000
- name: validation
num_bytes: 2310294
num_examples: 10000
- name: test
num_bytes: 2306138
num_examples: 10000
download_size: 2030669
dataset_size: 9217165
- config_name: PAN-X.bn
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 1568825
num_examples: 10000
- name: validation
num_bytes: 159068
num_examples: 1000
- name: test
num_bytes: 159262
num_examples: 1000
download_size: 364024
dataset_size: 1887155
- config_name: PAN-X.de
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 4762312
num_examples: 20000
- name: validation
num_bytes: 2381545
num_examples: 10000
- name: test
num_bytes: 2377619
num_examples: 10000
download_size: 2360242
dataset_size: 9521476
- config_name: PAN-X.el
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 5063136
num_examples: 20000
- name: validation
num_bytes: 2533786
num_examples: 10000
- name: test
num_bytes: 2547574
num_examples: 10000
download_size: 2271726
dataset_size: 10144496
- config_name: PAN-X.en
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 3823434
num_examples: 20000
- name: validation
num_bytes: 1920049
num_examples: 10000
- name: test
num_bytes: 1916200
num_examples: 10000
download_size: 1886284
dataset_size: 7659683
- config_name: PAN-X.es
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 3199121
num_examples: 20000
- name: validation
num_bytes: 1592505
num_examples: 10000
- name: test
num_bytes: 1602271
num_examples: 10000
download_size: 1489562
dataset_size: 6393897
- config_name: PAN-X.et
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 3023171
num_examples: 15000
- name: validation
num_bytes: 2030140
num_examples: 10000
- name: test
num_bytes: 2021389
num_examples: 10000
download_size: 1915624
dataset_size: 7074700
- config_name: PAN-X.eu
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 2292307
num_examples: 10000
- name: validation
num_bytes: 2296315
num_examples: 10000
- name: test
num_bytes: 2249815
num_examples: 10000
download_size: 1393179
dataset_size: 6838437
- config_name: PAN-X.fa
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 3529314
num_examples: 20000
- name: validation
num_bytes: 1782286
num_examples: 10000
- name: test
num_bytes: 1770264
num_examples: 10000
download_size: 1401208
dataset_size: 7081864
- config_name: PAN-X.fi
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 4273753
num_examples: 20000
- name: validation
num_bytes: 2131749
num_examples: 10000
- name: test
num_bytes: 2130645
num_examples: 10000
download_size: 2459149
dataset_size: 8536147
- config_name: PAN-X.fr
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 3335384
num_examples: 20000
- name: validation
num_bytes: 1664170
num_examples: 10000
- name: test
num_bytes: 1675765
num_examples: 10000
download_size: 1679283
dataset_size: 6675319
- config_name: PAN-X.he
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 4667060
num_examples: 20000
- name: validation
num_bytes: 2332740
num_examples: 10000
- name: test
num_bytes: 2318736
num_examples: 10000
download_size: 2186463
dataset_size: 9318536
- config_name: PAN-X.hi
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 964192
num_examples: 5000
- name: validation
num_bytes: 190651
num_examples: 1000
- name: test
num_bytes: 196170
num_examples: 1000
download_size: 266086
dataset_size: 1351013
- config_name: PAN-X.hu
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 4499874
num_examples: 20000
- name: validation
num_bytes: 2211831
num_examples: 10000
- name: test
num_bytes: 2249759
num_examples: 10000
download_size: 2399390
dataset_size: 8961464
- config_name: PAN-X.id
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 3083967
num_examples: 20000
- name: validation
num_bytes: 1537959
num_examples: 10000
- name: test
num_bytes: 1536859
num_examples: 10000
download_size: 1412049
dataset_size: 6158785
- config_name: PAN-X.it
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 3874623
num_examples: 20000
- name: validation
num_bytes: 1908509
num_examples: 10000
- name: test
num_bytes: 1928388
num_examples: 10000
download_size: 1855798
dataset_size: 7711520
- config_name: PAN-X.ja
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 12670361
num_examples: 20000
- name: validation
num_bytes: 6322983
num_examples: 10000
- name: test
num_bytes: 6448940
num_examples: 10000
download_size: 2465674
dataset_size: 25442284
- config_name: PAN-X.jv
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 16086
num_examples: 100
- name: validation
num_bytes: 14580
num_examples: 100
- name: test
num_bytes: 16897
num_examples: 100
download_size: 20475
dataset_size: 47563
- config_name: PAN-X.ka
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 2777342
num_examples: 10000
- name: validation
num_bytes: 2806881
num_examples: 10000
- name: test
num_bytes: 2824621
num_examples: 10000
download_size: 1817280
dataset_size: 8408844
- config_name: PAN-X.kk
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 240256
num_examples: 1000
- name: validation
num_bytes: 238089
num_examples: 1000
- name: test
num_bytes: 236704
num_examples: 1000
download_size: 160554
dataset_size: 715049
- config_name: PAN-X.ko
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 4284693
num_examples: 20000
- name: validation
num_bytes: 2138147
num_examples: 10000
- name: test
num_bytes: 2138274
num_examples: 10000
download_size: 2539591
dataset_size: 8561114
- config_name: PAN-X.ml
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 2865184
num_examples: 10000
- name: validation
num_bytes: 290735
num_examples: 1000
- name: test
num_bytes: 276906
num_examples: 1000
download_size: 852955
dataset_size: 3432825
- config_name: PAN-X.mr
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 1248239
num_examples: 5000
- name: validation
num_bytes: 245338
num_examples: 1000
- name: test
num_bytes: 255884
num_examples: 1000
download_size: 347215
dataset_size: 1749461
- config_name: PAN-X.ms
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 2965008
num_examples: 20000
- name: validation
num_bytes: 147495
num_examples: 1000
- name: test
num_bytes: 147148
num_examples: 1000
download_size: 708795
dataset_size: 3259651
- config_name: PAN-X.my
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 32715
num_examples: 100
- name: validation
num_bytes: 40408
num_examples: 100
- name: test
num_bytes: 37346
num_examples: 100
download_size: 39008
dataset_size: 110469
- config_name: PAN-X.nl
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 4062149
num_examples: 20000
- name: validation
num_bytes: 2016836
num_examples: 10000
- name: test
num_bytes: 2038618
num_examples: 10000
download_size: 1943893
dataset_size: 8117603
- config_name: PAN-X.pt
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 3149243
num_examples: 20000
- name: validation
num_bytes: 1575121
num_examples: 10000
- name: test
num_bytes: 1562605
num_examples: 10000
download_size: 1540478
dataset_size: 6286969
- config_name: PAN-X.ru
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 4121751
num_examples: 20000
- name: validation
num_bytes: 2053149
num_examples: 10000
- name: test
num_bytes: 2074125
num_examples: 10000
download_size: 2127730
dataset_size: 8249025
- config_name: PAN-X.sw
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 135891
num_examples: 1000
- name: validation
num_bytes: 136348
num_examples: 1000
- name: test
num_bytes: 140211
num_examples: 1000
download_size: 87435
dataset_size: 412450
- config_name: PAN-X.ta
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 4122090
num_examples: 15000
- name: validation
num_bytes: 277605
num_examples: 1000
- name: test
num_bytes: 278094
num_examples: 1000
download_size: 1044729
dataset_size: 4677789
- config_name: PAN-X.te
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 295390
num_examples: 1000
- name: validation
num_bytes: 293261
num_examples: 1000
- name: test
num_bytes: 296943
num_examples: 1000
download_size: 200516
dataset_size: 885594
- config_name: PAN-X.th
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 27132989
num_examples: 20000
- name: validation
num_bytes: 13262717
num_examples: 10000
- name: test
num_bytes: 13586908
num_examples: 10000
download_size: 2569566
dataset_size: 53982614
- config_name: PAN-X.tl
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 1168697
num_examples: 10000
- name: validation
num_bytes: 114136
num_examples: 1000
- name: test
num_bytes: 117884
num_examples: 1000
download_size: 308160
dataset_size: 1400717
- config_name: PAN-X.tr
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 3779130
num_examples: 20000
- name: validation
num_bytes: 1915332
num_examples: 10000
- name: test
num_bytes: 1911483
num_examples: 10000
download_size: 2000699
dataset_size: 7605945
- config_name: PAN-X.ur
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 3072236
num_examples: 20000
- name: validation
num_bytes: 152128
num_examples: 1000
- name: test
num_bytes: 151902
num_examples: 1000
download_size: 610869
dataset_size: 3376266
- config_name: PAN-X.vi
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 3153187
num_examples: 20000
- name: validation
num_bytes: 1565123
num_examples: 10000
- name: test
num_bytes: 1580196
num_examples: 10000
download_size: 1375631
dataset_size: 6298506
- config_name: PAN-X.yo
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 14689
num_examples: 100
- name: validation
num_bytes: 13225
num_examples: 100
- name: test
num_bytes: 13513
num_examples: 100
download_size: 17337
dataset_size: 41427
- config_name: PAN-X.zh
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 8832011
num_examples: 20000
- name: validation
num_bytes: 4491305
num_examples: 10000
- name: test
num_bytes: 4363152
num_examples: 10000
download_size: 2083198
dataset_size: 17686468
- config_name: PAWS-X.de
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 12451823
num_examples: 49380
- name: validation
num_bytes: 499997
num_examples: 2000
- name: test
num_bytes: 510182
num_examples: 2000
download_size: 9294034
dataset_size: 13462002
- config_name: PAWS-X.en
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 11827659
num_examples: 49175
- name: validation
num_bytes: 478279
num_examples: 2000
- name: test
num_bytes: 480726
num_examples: 2000
download_size: 8717639
dataset_size: 12786664
- config_name: PAWS-X.es
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 12462047
num_examples: 49401
- name: validation
num_bytes: 494057
num_examples: 1961
- name: test
num_bytes: 505035
num_examples: 2000
download_size: 9229918
dataset_size: 13461139
- config_name: PAWS-X.fr
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 12948452
num_examples: 49399
- name: validation
num_bytes: 516099
num_examples: 1988
- name: test
num_bytes: 521019
num_examples: 2000
download_size: 9464987
dataset_size: 13985570
- config_name: PAWS-X.ja
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 14695593
num_examples: 49401
- name: validation
num_bytes: 647762
num_examples: 2000
- name: test
num_bytes: 654628
num_examples: 2000
download_size: 10136228
dataset_size: 15997983
- config_name: PAWS-X.ko
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 13542597
num_examples: 49164
- name: validation
num_bytes: 540775
num_examples: 2000
- name: test
num_bytes: 547966
num_examples: 1999
download_size: 9926292
dataset_size: 14631338
- config_name: PAWS-X.zh
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 10469652
num_examples: 49401
- name: validation
num_bytes: 459108
num_examples: 2000
- name: test
num_bytes: 460626
num_examples: 2000
download_size: 8878855
dataset_size: 11389386
- config_name: SQuAD
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: train
num_bytes: 79316858
num_examples: 87599
- name: validation
num_bytes: 10472597
num_examples: 10570
download_size: 16272656
dataset_size: 89789455
- config_name: XNLI
features:
- name: language
dtype: string
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: gold_label
dtype: string
splits:
- name: test
num_bytes: 20359372
num_examples: 75150
- name: validation
num_bytes: 10049239
num_examples: 37350
download_size: 8881623
dataset_size: 30408611
- config_name: XQuAD.ar
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: validation
num_bytes: 1722775
num_examples: 1190
download_size: 263032
dataset_size: 1722775
- config_name: XQuAD.de
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: validation
num_bytes: 1283277
num_examples: 1190
download_size: 241987
dataset_size: 1283277
- config_name: XQuAD.el
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: validation
num_bytes: 2206666
num_examples: 1190
download_size: 324409
dataset_size: 2206666
- config_name: XQuAD.en
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: validation
num_bytes: 1116099
num_examples: 1190
download_size: 212402
dataset_size: 1116099
- config_name: XQuAD.es
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: validation
num_bytes: 1273475
num_examples: 1190
download_size: 236904
dataset_size: 1273475
- config_name: XQuAD.hi
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: validation
num_bytes: 2682951
num_examples: 1190
download_size: 322113
dataset_size: 2682951
- config_name: XQuAD.ru
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: validation
num_bytes: 2136966
num_examples: 1190
download_size: 321758
dataset_size: 2136966
- config_name: XQuAD.th
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: validation
num_bytes: 2854935
num_examples: 1190
download_size: 337337
dataset_size: 2854935
- config_name: XQuAD.tr
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: validation
num_bytes: 1210739
num_examples: 1190
download_size: 228394
dataset_size: 1210739
- config_name: XQuAD.vi
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: validation
num_bytes: 1477215
num_examples: 1190
download_size: 237674
dataset_size: 1477215
- config_name: XQuAD.zh
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: validation
num_bytes: 984217
num_examples: 1190
download_size: 205798
dataset_size: 984217
- config_name: bucc18.de
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 248691
num_examples: 1038
- name: test
num_bytes: 2325685
num_examples: 9580
download_size: 1636130
dataset_size: 2574376
- config_name: bucc18.fr
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 212497
num_examples: 929
- name: test
num_bytes: 2082403
num_examples: 9086
download_size: 1437096
dataset_size: 2294900
- config_name: bucc18.ru
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 761331
num_examples: 2374
- name: test
num_bytes: 4641646
num_examples: 14435
download_size: 3074476
dataset_size: 5402977
- config_name: bucc18.zh
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 55723
num_examples: 257
- name: test
num_bytes: 415909
num_examples: 1899
download_size: 320378
dataset_size: 471632
- config_name: tatoeba.afr
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 250635
num_examples: 1000
download_size: 47676
dataset_size: 250635
- config_name: tatoeba.ara
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 263650
num_examples: 1000
download_size: 51228
dataset_size: 263650
- config_name: tatoeba.ben
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 282703
num_examples: 1000
download_size: 51362
dataset_size: 282703
- config_name: tatoeba.bul
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 293279
num_examples: 1000
download_size: 62454
dataset_size: 293279
- config_name: tatoeba.cmn
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 259931
num_examples: 1000
download_size: 58281
dataset_size: 259931
- config_name: tatoeba.deu
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 296567
num_examples: 1000
download_size: 79066
dataset_size: 296567
- config_name: tatoeba.ell
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 269961
num_examples: 1000
download_size: 52251
dataset_size: 269961
- config_name: tatoeba.est
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 250728
num_examples: 1000
download_size: 49968
dataset_size: 250728
- config_name: tatoeba.eus
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 257068
num_examples: 1000
download_size: 54271
dataset_size: 257068
- config_name: tatoeba.fin
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 266669
num_examples: 1000
download_size: 60580
dataset_size: 266669
- config_name: tatoeba.fra
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 271018
num_examples: 1000
download_size: 60925
dataset_size: 271018
- config_name: tatoeba.heb
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 274500
num_examples: 1000
download_size: 57306
dataset_size: 274500
- config_name: tatoeba.hin
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 313558
num_examples: 1000
download_size: 68816
dataset_size: 313558
- config_name: tatoeba.hun
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 259889
num_examples: 1000
download_size: 58096
dataset_size: 259889
- config_name: tatoeba.ind
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 265844
num_examples: 1000
download_size: 57047
dataset_size: 265844
- config_name: tatoeba.ita
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 256833
num_examples: 1000
download_size: 52422
dataset_size: 256833
- config_name: tatoeba.jav
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 53068
num_examples: 205
download_size: 15208
dataset_size: 53068
- config_name: tatoeba.jpn
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 284083
num_examples: 1000
download_size: 66620
dataset_size: 284083
- config_name: tatoeba.kat
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 214646
num_examples: 746
download_size: 41759
dataset_size: 214646
- config_name: tatoeba.kaz
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 157003
num_examples: 575
download_size: 35693
dataset_size: 157003
- config_name: tatoeba.kor
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 270139
num_examples: 1000
download_size: 61210
dataset_size: 270139
- config_name: tatoeba.mal
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 225934
num_examples: 687
download_size: 51077
dataset_size: 225934
- config_name: tatoeba.mar
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 291542
num_examples: 1000
download_size: 56575
dataset_size: 291542
- config_name: tatoeba.nld
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 264263
num_examples: 1000
download_size: 59774
dataset_size: 264263
- config_name: tatoeba.pes
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 284719
num_examples: 1000
download_size: 64642
dataset_size: 284719
- config_name: tatoeba.por
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 266185
num_examples: 1000
download_size: 58250
dataset_size: 266185
- config_name: tatoeba.rus
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 283472
num_examples: 1000
download_size: 61601
dataset_size: 283472
- config_name: tatoeba.spa
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 263266
num_examples: 1000
download_size: 57055
dataset_size: 263266
- config_name: tatoeba.swh
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 94957
num_examples: 390
download_size: 19362
dataset_size: 94957
- config_name: tatoeba.tam
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 98078
num_examples: 307
download_size: 23648
dataset_size: 98078
- config_name: tatoeba.tel
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 69837
num_examples: 234
download_size: 18260
dataset_size: 69837
- config_name: tatoeba.tgl
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 259138
num_examples: 1000
download_size: 53699
dataset_size: 259138
- config_name: tatoeba.tha
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 167866
num_examples: 548
download_size: 39659
dataset_size: 167866
- config_name: tatoeba.tur
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 262885
num_examples: 1000
download_size: 54137
dataset_size: 262885
- config_name: tatoeba.urd
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 279712
num_examples: 1000
download_size: 60399
dataset_size: 279712
- config_name: tatoeba.vie
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 282407
num_examples: 1000
download_size: 66746
dataset_size: 282407
- config_name: tydiqa
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: train
num_bytes: 52948467
num_examples: 49881
- name: validation
num_bytes: 5006433
num_examples: 5077
download_size: 29402238
dataset_size: 57954900
- config_name: udpos.Afrikaans
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 586370
num_examples: 1315
- name: validation
num_bytes: 91290
num_examples: 194
- name: test
num_bytes: 174244
num_examples: 425
download_size: 193788
dataset_size: 851904
- config_name: udpos.Arabic
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 4453682
num_examples: 6075
- name: validation
num_bytes: 593650
num_examples: 909
- name: test
num_bytes: 973822
num_examples: 1680
download_size: 1186113
dataset_size: 6021154
- config_name: udpos.Basque
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 1327713
num_examples: 5396
- name: validation
num_bytes: 438671
num_examples: 1798
- name: test
num_bytes: 444644
num_examples: 1799
download_size: 703094
dataset_size: 2211028
- config_name: udpos.Bulgarian
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 2689767
num_examples: 8907
- name: validation
num_bytes: 347117
num_examples: 1115
- name: test
num_bytes: 339947
num_examples: 1116
download_size: 926186
dataset_size: 3376831
- config_name: udpos.Chinese
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 4218891
num_examples: 18998
- name: validation
num_bytes: 594448
num_examples: 3038
- name: test
num_bytes: 1236051
num_examples: 5528
download_size: 1471747
dataset_size: 6049390
- config_name: udpos.Dutch
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 4517994
num_examples: 18051
- name: validation
num_bytes: 393592
num_examples: 1394
- name: test
num_bytes: 397904
num_examples: 1471
download_size: 1410982
dataset_size: 5309490
- config_name: udpos.English
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 6225509
num_examples: 21253
- name: validation
num_bytes: 1042040
num_examples: 3974
- name: test
num_bytes: 1421148
num_examples: 5440
download_size: 2116535
dataset_size: 8688697
- config_name: udpos.Estonian
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 6614893
num_examples: 25749
- name: validation
num_bytes: 814171
num_examples: 3125
- name: test
num_bytes: 1065701
num_examples: 3760
download_size: 2619121
dataset_size: 8494765
- config_name: udpos.Finnish
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 5613706
num_examples: 27198
- name: validation
num_bytes: 656646
num_examples: 3239
- name: test
num_bytes: 1025726
num_examples: 4422
download_size: 2503217
dataset_size: 7296078
- config_name: udpos.French
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 10118933
num_examples: 47308
- name: validation
num_bytes: 1294096
num_examples: 5979
- name: test
num_bytes: 1731049
num_examples: 9465
download_size: 3378680
dataset_size: 13144078
- config_name: udpos.German
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 54773777
num_examples: 166849
- name: validation
num_bytes: 6044838
num_examples: 19233
- name: test
num_bytes: 7345863
num_examples: 22458
download_size: 18623155
dataset_size: 68164478
- config_name: udpos.Greek
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 8932104
num_examples: 28152
- name: validation
num_bytes: 1062447
num_examples: 2559
- name: test
num_bytes: 1028665
num_examples: 2809
download_size: 2763293
dataset_size: 11023216
- config_name: udpos.Hebrew
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 2505691
num_examples: 5241
- name: validation
num_bytes: 210013
num_examples: 484
- name: test
num_bytes: 223865
num_examples: 491
download_size: 624771
dataset_size: 2939569
- config_name: udpos.Hindi
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 6690250
num_examples: 13304
- name: validation
num_bytes: 839702
num_examples: 1659
- name: test
num_bytes: 1400225
num_examples: 2684
download_size: 1468314
dataset_size: 8930177
- config_name: udpos.Hungarian
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 372226
num_examples: 910
- name: validation
num_bytes: 215879
num_examples: 441
- name: test
num_bytes: 193728
num_examples: 449
download_size: 251882
dataset_size: 781833
- config_name: udpos.Indonesian
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 1710678
num_examples: 4477
- name: validation
num_bytes: 220863
num_examples: 559
- name: test
num_bytes: 557101
num_examples: 1557
download_size: 684225
dataset_size: 2488642
- config_name: udpos.Italian
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 11299293
num_examples: 29685
- name: validation
num_bytes: 988996
num_examples: 2278
- name: test
num_bytes: 1337869
num_examples: 3518
download_size: 3256246
dataset_size: 13626158
- config_name: udpos.Japanese
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 2792951
num_examples: 7125
- name: validation
num_bytes: 200356
num_examples: 511
- name: test
num_bytes: 928902
num_examples: 2372
download_size: 1012282
dataset_size: 3922209
- config_name: udpos.Kazakh
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 11438
num_examples: 31
- name: test
num_bytes: 228924
num_examples: 1047
download_size: 76300
dataset_size: 240362
- config_name: udpos.Korean
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 7341267
num_examples: 27410
- name: validation
num_bytes: 782587
num_examples: 3016
- name: test
num_bytes: 1162539
num_examples: 4276
download_size: 3115101
dataset_size: 9286393
- config_name: udpos.Marathi
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 59023
num_examples: 373
- name: validation
num_bytes: 8497
num_examples: 46
- name: test
num_bytes: 7871
num_examples: 47
download_size: 22133
dataset_size: 75391
- config_name: udpos.Persian
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 2400776
num_examples: 4798
- name: validation
num_bytes: 317053
num_examples: 599
- name: test
num_bytes: 320683
num_examples: 600
download_size: 606912
dataset_size: 3038512
- config_name: udpos.Portuguese
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 7669556
num_examples: 17992
- name: validation
num_bytes: 712397
num_examples: 1770
- name: test
num_bytes: 1082582
num_examples: 2681
download_size: 2505672
dataset_size: 9464535
- config_name: udpos.Russian
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 24230098
num_examples: 67435
- name: validation
num_bytes: 3457031
num_examples: 9960
- name: test
num_bytes: 4236693
num_examples: 11336
download_size: 8818512
dataset_size: 31923822
- config_name: udpos.Spanish
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 13858406
num_examples: 28492
- name: validation
num_bytes: 1498765
num_examples: 3054
- name: test
num_bytes: 1476500
num_examples: 3147
download_size: 4347905
dataset_size: 16833671
- config_name: udpos.Tagalog
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: test
num_bytes: 5153
num_examples: 55
download_size: 3345
dataset_size: 5153
- config_name: udpos.Tamil
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 202596
num_examples: 400
- name: validation
num_bytes: 40031
num_examples: 80
- name: test
num_bytes: 62366
num_examples: 120
download_size: 73764
dataset_size: 304993
- config_name: udpos.Telugu
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 138049
num_examples: 1051
- name: validation
num_bytes: 17990
num_examples: 131
- name: test
num_bytes: 19575
num_examples: 146
download_size: 46045
dataset_size: 175614
- config_name: udpos.Thai
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: test
num_bytes: 561336
num_examples: 1000
download_size: 92925
dataset_size: 561336
- config_name: udpos.Turkish
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 704405
num_examples: 3664
- name: validation
num_bytes: 186455
num_examples: 988
- name: test
num_bytes: 827382
num_examples: 4785
download_size: 581177
dataset_size: 1718242
- config_name: udpos.Urdu
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 2107362
num_examples: 4043
- name: validation
num_bytes: 284261
num_examples: 552
- name: test
num_bytes: 288553
num_examples: 535
download_size: 499594
dataset_size: 2680176
- config_name: udpos.Vietnamese
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 367335
num_examples: 1400
- name: validation
num_bytes: 206188
num_examples: 800
- name: test
num_bytes: 214063
num_examples: 800
download_size: 181239
dataset_size: 787586
- config_name: udpos.Yoruba
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: test
num_bytes: 44656
num_examples: 100
download_size: 10151
dataset_size: 44656
configs:
- config_name: MLQA.ar.ar
data_files:
- split: test
path: MLQA.ar.ar/test-*
- split: validation
path: MLQA.ar.ar/validation-*
- config_name: MLQA.ar.de
data_files:
- split: test
path: MLQA.ar.de/test-*
- split: validation
path: MLQA.ar.de/validation-*
- config_name: MLQA.ar.en
data_files:
- split: test
path: MLQA.ar.en/test-*
- split: validation
path: MLQA.ar.en/validation-*
- config_name: MLQA.ar.es
data_files:
- split: test
path: MLQA.ar.es/test-*
- split: validation
path: MLQA.ar.es/validation-*
- config_name: MLQA.ar.hi
data_files:
- split: test
path: MLQA.ar.hi/test-*
- split: validation
path: MLQA.ar.hi/validation-*
- config_name: MLQA.ar.vi
data_files:
- split: test
path: MLQA.ar.vi/test-*
- split: validation
path: MLQA.ar.vi/validation-*
- config_name: MLQA.ar.zh
data_files:
- split: test
path: MLQA.ar.zh/test-*
- split: validation
path: MLQA.ar.zh/validation-*
- config_name: MLQA.de.ar
data_files:
- split: test
path: MLQA.de.ar/test-*
- split: validation
path: MLQA.de.ar/validation-*
- config_name: MLQA.de.de
data_files:
- split: test
path: MLQA.de.de/test-*
- split: validation
path: MLQA.de.de/validation-*
- config_name: MLQA.de.en
data_files:
- split: test
path: MLQA.de.en/test-*
- split: validation
path: MLQA.de.en/validation-*
- config_name: MLQA.de.es
data_files:
- split: test
path: MLQA.de.es/test-*
- split: validation
path: MLQA.de.es/validation-*
- config_name: MLQA.de.hi
data_files:
- split: test
path: MLQA.de.hi/test-*
- split: validation
path: MLQA.de.hi/validation-*
- config_name: MLQA.de.vi
data_files:
- split: test
path: MLQA.de.vi/test-*
- split: validation
path: MLQA.de.vi/validation-*
- config_name: MLQA.de.zh
data_files:
- split: test
path: MLQA.de.zh/test-*
- split: validation
path: MLQA.de.zh/validation-*
- config_name: MLQA.en.ar
data_files:
- split: test
path: MLQA.en.ar/test-*
- split: validation
path: MLQA.en.ar/validation-*
- config_name: MLQA.en.de
data_files:
- split: test
path: MLQA.en.de/test-*
- split: validation
path: MLQA.en.de/validation-*
- config_name: MLQA.en.en
data_files:
- split: test
path: MLQA.en.en/test-*
- split: validation
path: MLQA.en.en/validation-*
- config_name: MLQA.en.es
data_files:
- split: test
path: MLQA.en.es/test-*
- split: validation
path: MLQA.en.es/validation-*
- config_name: MLQA.en.hi
data_files:
- split: test
path: MLQA.en.hi/test-*
- split: validation
path: MLQA.en.hi/validation-*
- config_name: MLQA.en.vi
data_files:
- split: test
path: MLQA.en.vi/test-*
- split: validation
path: MLQA.en.vi/validation-*
- config_name: MLQA.en.zh
data_files:
- split: test
path: MLQA.en.zh/test-*
- split: validation
path: MLQA.en.zh/validation-*
- config_name: MLQA.es.ar
data_files:
- split: test
path: MLQA.es.ar/test-*
- split: validation
path: MLQA.es.ar/validation-*
- config_name: MLQA.es.de
data_files:
- split: test
path: MLQA.es.de/test-*
- split: validation
path: MLQA.es.de/validation-*
- config_name: MLQA.es.en
data_files:
- split: test
path: MLQA.es.en/test-*
- split: validation
path: MLQA.es.en/validation-*
- config_name: MLQA.es.es
data_files:
- split: test
path: MLQA.es.es/test-*
- split: validation
path: MLQA.es.es/validation-*
- config_name: MLQA.es.hi
data_files:
- split: test
path: MLQA.es.hi/test-*
- split: validation
path: MLQA.es.hi/validation-*
- config_name: MLQA.es.vi
data_files:
- split: test
path: MLQA.es.vi/test-*
- split: validation
path: MLQA.es.vi/validation-*
- config_name: MLQA.es.zh
data_files:
- split: test
path: MLQA.es.zh/test-*
- split: validation
path: MLQA.es.zh/validation-*
- config_name: MLQA.hi.ar
data_files:
- split: test
path: MLQA.hi.ar/test-*
- split: validation
path: MLQA.hi.ar/validation-*
- config_name: MLQA.hi.de
data_files:
- split: test
path: MLQA.hi.de/test-*
- split: validation
path: MLQA.hi.de/validation-*
- config_name: MLQA.hi.en
data_files:
- split: test
path: MLQA.hi.en/test-*
- split: validation
path: MLQA.hi.en/validation-*
- config_name: MLQA.hi.es
data_files:
- split: test
path: MLQA.hi.es/test-*
- split: validation
path: MLQA.hi.es/validation-*
- config_name: MLQA.hi.hi
data_files:
- split: test
path: MLQA.hi.hi/test-*
- split: validation
path: MLQA.hi.hi/validation-*
- config_name: MLQA.hi.vi
data_files:
- split: test
path: MLQA.hi.vi/test-*
- split: validation
path: MLQA.hi.vi/validation-*
- config_name: MLQA.hi.zh
data_files:
- split: test
path: MLQA.hi.zh/test-*
- split: validation
path: MLQA.hi.zh/validation-*
- config_name: MLQA.vi.ar
data_files:
- split: test
path: MLQA.vi.ar/test-*
- split: validation
path: MLQA.vi.ar/validation-*
- config_name: MLQA.vi.de
data_files:
- split: test
path: MLQA.vi.de/test-*
- split: validation
path: MLQA.vi.de/validation-*
- config_name: MLQA.vi.en
data_files:
- split: test
path: MLQA.vi.en/test-*
- split: validation
path: MLQA.vi.en/validation-*
- config_name: MLQA.vi.es
data_files:
- split: test
path: MLQA.vi.es/test-*
- split: validation
path: MLQA.vi.es/validation-*
- config_name: MLQA.vi.hi
data_files:
- split: test
path: MLQA.vi.hi/test-*
- split: validation
path: MLQA.vi.hi/validation-*
- config_name: MLQA.vi.vi
data_files:
- split: test
path: MLQA.vi.vi/test-*
- split: validation
path: MLQA.vi.vi/validation-*
- config_name: MLQA.vi.zh
data_files:
- split: test
path: MLQA.vi.zh/test-*
- split: validation
path: MLQA.vi.zh/validation-*
- config_name: MLQA.zh.ar
data_files:
- split: test
path: MLQA.zh.ar/test-*
- split: validation
path: MLQA.zh.ar/validation-*
- config_name: MLQA.zh.de
data_files:
- split: test
path: MLQA.zh.de/test-*
- split: validation
path: MLQA.zh.de/validation-*
- config_name: MLQA.zh.en
data_files:
- split: test
path: MLQA.zh.en/test-*
- split: validation
path: MLQA.zh.en/validation-*
- config_name: MLQA.zh.es
data_files:
- split: test
path: MLQA.zh.es/test-*
- split: validation
path: MLQA.zh.es/validation-*
- config_name: MLQA.zh.hi
data_files:
- split: test
path: MLQA.zh.hi/test-*
- split: validation
path: MLQA.zh.hi/validation-*
- config_name: MLQA.zh.vi
data_files:
- split: test
path: MLQA.zh.vi/test-*
- split: validation
path: MLQA.zh.vi/validation-*
- config_name: MLQA.zh.zh
data_files:
- split: test
path: MLQA.zh.zh/test-*
- split: validation
path: MLQA.zh.zh/validation-*
- config_name: PAN-X.af
data_files:
- split: train
path: PAN-X.af/train-*
- split: validation
path: PAN-X.af/validation-*
- split: test
path: PAN-X.af/test-*
- config_name: PAN-X.ar
data_files:
- split: train
path: PAN-X.ar/train-*
- split: validation
path: PAN-X.ar/validation-*
- split: test
path: PAN-X.ar/test-*
- config_name: PAN-X.bg
data_files:
- split: train
path: PAN-X.bg/train-*
- split: validation
path: PAN-X.bg/validation-*
- split: test
path: PAN-X.bg/test-*
- config_name: PAN-X.bn
data_files:
- split: train
path: PAN-X.bn/train-*
- split: validation
path: PAN-X.bn/validation-*
- split: test
path: PAN-X.bn/test-*
- config_name: PAN-X.de
data_files:
- split: train
path: PAN-X.de/train-*
- split: validation
path: PAN-X.de/validation-*
- split: test
path: PAN-X.de/test-*
- config_name: PAN-X.el
data_files:
- split: train
path: PAN-X.el/train-*
- split: validation
path: PAN-X.el/validation-*
- split: test
path: PAN-X.el/test-*
- config_name: PAN-X.en
data_files:
- split: train
path: PAN-X.en/train-*
- split: validation
path: PAN-X.en/validation-*
- split: test
path: PAN-X.en/test-*
- config_name: PAN-X.es
data_files:
- split: train
path: PAN-X.es/train-*
- split: validation
path: PAN-X.es/validation-*
- split: test
path: PAN-X.es/test-*
- config_name: PAN-X.et
data_files:
- split: train
path: PAN-X.et/train-*
- split: validation
path: PAN-X.et/validation-*
- split: test
path: PAN-X.et/test-*
- config_name: PAN-X.eu
data_files:
- split: train
path: PAN-X.eu/train-*
- split: validation
path: PAN-X.eu/validation-*
- split: test
path: PAN-X.eu/test-*
- config_name: PAN-X.fa
data_files:
- split: train
path: PAN-X.fa/train-*
- split: validation
path: PAN-X.fa/validation-*
- split: test
path: PAN-X.fa/test-*
- config_name: PAN-X.fi
data_files:
- split: train
path: PAN-X.fi/train-*
- split: validation
path: PAN-X.fi/validation-*
- split: test
path: PAN-X.fi/test-*
- config_name: PAN-X.fr
data_files:
- split: train
path: PAN-X.fr/train-*
- split: validation
path: PAN-X.fr/validation-*
- split: test
path: PAN-X.fr/test-*
- config_name: PAN-X.he
data_files:
- split: train
path: PAN-X.he/train-*
- split: validation
path: PAN-X.he/validation-*
- split: test
path: PAN-X.he/test-*
- config_name: PAN-X.hi
data_files:
- split: train
path: PAN-X.hi/train-*
- split: validation
path: PAN-X.hi/validation-*
- split: test
path: PAN-X.hi/test-*
- config_name: PAN-X.hu
data_files:
- split: train
path: PAN-X.hu/train-*
- split: validation
path: PAN-X.hu/validation-*
- split: test
path: PAN-X.hu/test-*
- config_name: PAN-X.id
data_files:
- split: train
path: PAN-X.id/train-*
- split: validation
path: PAN-X.id/validation-*
- split: test
path: PAN-X.id/test-*
- config_name: PAN-X.it
data_files:
- split: train
path: PAN-X.it/train-*
- split: validation
path: PAN-X.it/validation-*
- split: test
path: PAN-X.it/test-*
- config_name: PAN-X.ja
data_files:
- split: train
path: PAN-X.ja/train-*
- split: validation
path: PAN-X.ja/validation-*
- split: test
path: PAN-X.ja/test-*
- config_name: PAN-X.jv
data_files:
- split: train
path: PAN-X.jv/train-*
- split: validation
path: PAN-X.jv/validation-*
- split: test
path: PAN-X.jv/test-*
- config_name: PAN-X.ka
data_files:
- split: train
path: PAN-X.ka/train-*
- split: validation
path: PAN-X.ka/validation-*
- split: test
path: PAN-X.ka/test-*
- config_name: PAN-X.kk
data_files:
- split: train
path: PAN-X.kk/train-*
- split: validation
path: PAN-X.kk/validation-*
- split: test
path: PAN-X.kk/test-*
- config_name: PAN-X.ko
data_files:
- split: train
path: PAN-X.ko/train-*
- split: validation
path: PAN-X.ko/validation-*
- split: test
path: PAN-X.ko/test-*
- config_name: PAN-X.ml
data_files:
- split: train
path: PAN-X.ml/train-*
- split: validation
path: PAN-X.ml/validation-*
- split: test
path: PAN-X.ml/test-*
- config_name: PAN-X.mr
data_files:
- split: train
path: PAN-X.mr/train-*
- split: validation
path: PAN-X.mr/validation-*
- split: test
path: PAN-X.mr/test-*
- config_name: PAN-X.ms
data_files:
- split: train
path: PAN-X.ms/train-*
- split: validation
path: PAN-X.ms/validation-*
- split: test
path: PAN-X.ms/test-*
- config_name: PAN-X.my
data_files:
- split: train
path: PAN-X.my/train-*
- split: validation
path: PAN-X.my/validation-*
- split: test
path: PAN-X.my/test-*
- config_name: PAN-X.nl
data_files:
- split: train
path: PAN-X.nl/train-*
- split: validation
path: PAN-X.nl/validation-*
- split: test
path: PAN-X.nl/test-*
- config_name: PAN-X.pt
data_files:
- split: train
path: PAN-X.pt/train-*
- split: validation
path: PAN-X.pt/validation-*
- split: test
path: PAN-X.pt/test-*
- config_name: PAN-X.ru
data_files:
- split: train
path: PAN-X.ru/train-*
- split: validation
path: PAN-X.ru/validation-*
- split: test
path: PAN-X.ru/test-*
- config_name: PAN-X.sw
data_files:
- split: train
path: PAN-X.sw/train-*
- split: validation
path: PAN-X.sw/validation-*
- split: test
path: PAN-X.sw/test-*
- config_name: PAN-X.ta
data_files:
- split: train
path: PAN-X.ta/train-*
- split: validation
path: PAN-X.ta/validation-*
- split: test
path: PAN-X.ta/test-*
- config_name: PAN-X.te
data_files:
- split: train
path: PAN-X.te/train-*
- split: validation
path: PAN-X.te/validation-*
- split: test
path: PAN-X.te/test-*
- config_name: PAN-X.th
data_files:
- split: train
path: PAN-X.th/train-*
- split: validation
path: PAN-X.th/validation-*
- split: test
path: PAN-X.th/test-*
- config_name: PAN-X.tl
data_files:
- split: train
path: PAN-X.tl/train-*
- split: validation
path: PAN-X.tl/validation-*
- split: test
path: PAN-X.tl/test-*
- config_name: PAN-X.tr
data_files:
- split: train
path: PAN-X.tr/train-*
- split: validation
path: PAN-X.tr/validation-*
- split: test
path: PAN-X.tr/test-*
- config_name: PAN-X.ur
data_files:
- split: train
path: PAN-X.ur/train-*
- split: validation
path: PAN-X.ur/validation-*
- split: test
path: PAN-X.ur/test-*
- config_name: PAN-X.vi
data_files:
- split: train
path: PAN-X.vi/train-*
- split: validation
path: PAN-X.vi/validation-*
- split: test
path: PAN-X.vi/test-*
- config_name: PAN-X.yo
data_files:
- split: train
path: PAN-X.yo/train-*
- split: validation
path: PAN-X.yo/validation-*
- split: test
path: PAN-X.yo/test-*
- config_name: PAN-X.zh
data_files:
- split: train
path: PAN-X.zh/train-*
- split: validation
path: PAN-X.zh/validation-*
- split: test
path: PAN-X.zh/test-*
- config_name: PAWS-X.de
data_files:
- split: train
path: PAWS-X.de/train-*
- split: validation
path: PAWS-X.de/validation-*
- split: test
path: PAWS-X.de/test-*
- config_name: PAWS-X.en
data_files:
- split: train
path: PAWS-X.en/train-*
- split: validation
path: PAWS-X.en/validation-*
- split: test
path: PAWS-X.en/test-*
- config_name: PAWS-X.es
data_files:
- split: train
path: PAWS-X.es/train-*
- split: validation
path: PAWS-X.es/validation-*
- split: test
path: PAWS-X.es/test-*
- config_name: PAWS-X.fr
data_files:
- split: train
path: PAWS-X.fr/train-*
- split: validation
path: PAWS-X.fr/validation-*
- split: test
path: PAWS-X.fr/test-*
- config_name: PAWS-X.ja
data_files:
- split: train
path: PAWS-X.ja/train-*
- split: validation
path: PAWS-X.ja/validation-*
- split: test
path: PAWS-X.ja/test-*
- config_name: PAWS-X.ko
data_files:
- split: train
path: PAWS-X.ko/train-*
- split: validation
path: PAWS-X.ko/validation-*
- split: test
path: PAWS-X.ko/test-*
- config_name: PAWS-X.zh
data_files:
- split: train
path: PAWS-X.zh/train-*
- split: validation
path: PAWS-X.zh/validation-*
- split: test
path: PAWS-X.zh/test-*
- config_name: SQuAD
data_files:
- split: train
path: SQuAD/train-*
- split: validation
path: SQuAD/validation-*
- config_name: XNLI
data_files:
- split: test
path: XNLI/test-*
- split: validation
path: XNLI/validation-*
- config_name: XQuAD.ar
data_files:
- split: validation
path: XQuAD.ar/validation-*
- config_name: XQuAD.de
data_files:
- split: validation
path: XQuAD.de/validation-*
- config_name: XQuAD.el
data_files:
- split: validation
path: XQuAD.el/validation-*
- config_name: XQuAD.en
data_files:
- split: validation
path: XQuAD.en/validation-*
- config_name: XQuAD.es
data_files:
- split: validation
path: XQuAD.es/validation-*
- config_name: XQuAD.hi
data_files:
- split: validation
path: XQuAD.hi/validation-*
- config_name: XQuAD.ru
data_files:
- split: validation
path: XQuAD.ru/validation-*
- config_name: XQuAD.th
data_files:
- split: validation
path: XQuAD.th/validation-*
- config_name: XQuAD.tr
data_files:
- split: validation
path: XQuAD.tr/validation-*
- config_name: XQuAD.vi
data_files:
- split: validation
path: XQuAD.vi/validation-*
- config_name: XQuAD.zh
data_files:
- split: validation
path: XQuAD.zh/validation-*
- config_name: bucc18.de
data_files:
- split: validation
path: bucc18.de/validation-*
- split: test
path: bucc18.de/test-*
- config_name: bucc18.fr
data_files:
- split: validation
path: bucc18.fr/validation-*
- split: test
path: bucc18.fr/test-*
- config_name: bucc18.ru
data_files:
- split: validation
path: bucc18.ru/validation-*
- split: test
path: bucc18.ru/test-*
- config_name: bucc18.zh
data_files:
- split: validation
path: bucc18.zh/validation-*
- split: test
path: bucc18.zh/test-*
- config_name: tatoeba.afr
data_files:
- split: validation
path: tatoeba.afr/validation-*
- config_name: tatoeba.ara
data_files:
- split: validation
path: tatoeba.ara/validation-*
- config_name: tatoeba.ben
data_files:
- split: validation
path: tatoeba.ben/validation-*
- config_name: tatoeba.bul
data_files:
- split: validation
path: tatoeba.bul/validation-*
- config_name: tatoeba.cmn
data_files:
- split: validation
path: tatoeba.cmn/validation-*
- config_name: tatoeba.deu
data_files:
- split: validation
path: tatoeba.deu/validation-*
- config_name: tatoeba.ell
data_files:
- split: validation
path: tatoeba.ell/validation-*
- config_name: tatoeba.est
data_files:
- split: validation
path: tatoeba.est/validation-*
- config_name: tatoeba.eus
data_files:
- split: validation
path: tatoeba.eus/validation-*
- config_name: tatoeba.fin
data_files:
- split: validation
path: tatoeba.fin/validation-*
- config_name: tatoeba.fra
data_files:
- split: validation
path: tatoeba.fra/validation-*
- config_name: tatoeba.heb
data_files:
- split: validation
path: tatoeba.heb/validation-*
- config_name: tatoeba.hin
data_files:
- split: validation
path: tatoeba.hin/validation-*
- config_name: tatoeba.hun
data_files:
- split: validation
path: tatoeba.hun/validation-*
- config_name: tatoeba.ind
data_files:
- split: validation
path: tatoeba.ind/validation-*
- config_name: tatoeba.ita
data_files:
- split: validation
path: tatoeba.ita/validation-*
- config_name: tatoeba.jav
data_files:
- split: validation
path: tatoeba.jav/validation-*
- config_name: tatoeba.jpn
data_files:
- split: validation
path: tatoeba.jpn/validation-*
- config_name: tatoeba.kat
data_files:
- split: validation
path: tatoeba.kat/validation-*
- config_name: tatoeba.kaz
data_files:
- split: validation
path: tatoeba.kaz/validation-*
- config_name: tatoeba.kor
data_files:
- split: validation
path: tatoeba.kor/validation-*
- config_name: tatoeba.mal
data_files:
- split: validation
path: tatoeba.mal/validation-*
- config_name: tatoeba.mar
data_files:
- split: validation
path: tatoeba.mar/validation-*
- config_name: tatoeba.nld
data_files:
- split: validation
path: tatoeba.nld/validation-*
- config_name: tatoeba.pes
data_files:
- split: validation
path: tatoeba.pes/validation-*
- config_name: tatoeba.por
data_files:
- split: validation
path: tatoeba.por/validation-*
- config_name: tatoeba.rus
data_files:
- split: validation
path: tatoeba.rus/validation-*
- config_name: tatoeba.spa
data_files:
- split: validation
path: tatoeba.spa/validation-*
- config_name: tatoeba.swh
data_files:
- split: validation
path: tatoeba.swh/validation-*
- config_name: tatoeba.tam
data_files:
- split: validation
path: tatoeba.tam/validation-*
- config_name: tatoeba.tel
data_files:
- split: validation
path: tatoeba.tel/validation-*
- config_name: tatoeba.tgl
data_files:
- split: validation
path: tatoeba.tgl/validation-*
- config_name: tatoeba.tha
data_files:
- split: validation
path: tatoeba.tha/validation-*
- config_name: tatoeba.tur
data_files:
- split: validation
path: tatoeba.tur/validation-*
- config_name: tatoeba.urd
data_files:
- split: validation
path: tatoeba.urd/validation-*
- config_name: tatoeba.vie
data_files:
- split: validation
path: tatoeba.vie/validation-*
- config_name: tydiqa
data_files:
- split: train
path: tydiqa/train-*
- split: validation
path: tydiqa/validation-*
- config_name: udpos.Afrikaans
data_files:
- split: train
path: udpos.Afrikaans/train-*
- split: validation
path: udpos.Afrikaans/validation-*
- split: test
path: udpos.Afrikaans/test-*
- config_name: udpos.Arabic
data_files:
- split: train
path: udpos.Arabic/train-*
- split: validation
path: udpos.Arabic/validation-*
- split: test
path: udpos.Arabic/test-*
- config_name: udpos.Basque
data_files:
- split: train
path: udpos.Basque/train-*
- split: validation
path: udpos.Basque/validation-*
- split: test
path: udpos.Basque/test-*
- config_name: udpos.Bulgarian
data_files:
- split: train
path: udpos.Bulgarian/train-*
- split: validation
path: udpos.Bulgarian/validation-*
- split: test
path: udpos.Bulgarian/test-*
- config_name: udpos.Chinese
data_files:
- split: train
path: udpos.Chinese/train-*
- split: validation
path: udpos.Chinese/validation-*
- split: test
path: udpos.Chinese/test-*
- config_name: udpos.Dutch
data_files:
- split: train
path: udpos.Dutch/train-*
- split: validation
path: udpos.Dutch/validation-*
- split: test
path: udpos.Dutch/test-*
- config_name: udpos.English
data_files:
- split: train
path: udpos.English/train-*
- split: validation
path: udpos.English/validation-*
- split: test
path: udpos.English/test-*
- config_name: udpos.Estonian
data_files:
- split: train
path: udpos.Estonian/train-*
- split: validation
path: udpos.Estonian/validation-*
- split: test
path: udpos.Estonian/test-*
- config_name: udpos.Finnish
data_files:
- split: train
path: udpos.Finnish/train-*
- split: validation
path: udpos.Finnish/validation-*
- split: test
path: udpos.Finnish/test-*
- config_name: udpos.French
data_files:
- split: train
path: udpos.French/train-*
- split: validation
path: udpos.French/validation-*
- split: test
path: udpos.French/test-*
- config_name: udpos.German
data_files:
- split: train
path: udpos.German/train-*
- split: validation
path: udpos.German/validation-*
- split: test
path: udpos.German/test-*
- config_name: udpos.Greek
data_files:
- split: train
path: udpos.Greek/train-*
- split: validation
path: udpos.Greek/validation-*
- split: test
path: udpos.Greek/test-*
- config_name: udpos.Hebrew
data_files:
- split: train
path: udpos.Hebrew/train-*
- split: validation
path: udpos.Hebrew/validation-*
- split: test
path: udpos.Hebrew/test-*
- config_name: udpos.Hindi
data_files:
- split: train
path: udpos.Hindi/train-*
- split: validation
path: udpos.Hindi/validation-*
- split: test
path: udpos.Hindi/test-*
- config_name: udpos.Hungarian
data_files:
- split: train
path: udpos.Hungarian/train-*
- split: validation
path: udpos.Hungarian/validation-*
- split: test
path: udpos.Hungarian/test-*
- config_name: udpos.Indonesian
data_files:
- split: train
path: udpos.Indonesian/train-*
- split: validation
path: udpos.Indonesian/validation-*
- split: test
path: udpos.Indonesian/test-*
- config_name: udpos.Italian
data_files:
- split: train
path: udpos.Italian/train-*
- split: validation
path: udpos.Italian/validation-*
- split: test
path: udpos.Italian/test-*
- config_name: udpos.Japanese
data_files:
- split: train
path: udpos.Japanese/train-*
- split: validation
path: udpos.Japanese/validation-*
- split: test
path: udpos.Japanese/test-*
- config_name: udpos.Kazakh
data_files:
- split: train
path: udpos.Kazakh/train-*
- split: test
path: udpos.Kazakh/test-*
- config_name: udpos.Korean
data_files:
- split: train
path: udpos.Korean/train-*
- split: validation
path: udpos.Korean/validation-*
- split: test
path: udpos.Korean/test-*
- config_name: udpos.Marathi
data_files:
- split: train
path: udpos.Marathi/train-*
- split: validation
path: udpos.Marathi/validation-*
- split: test
path: udpos.Marathi/test-*
- config_name: udpos.Persian
data_files:
- split: train
path: udpos.Persian/train-*
- split: validation
path: udpos.Persian/validation-*
- split: test
path: udpos.Persian/test-*
- config_name: udpos.Portuguese
data_files:
- split: train
path: udpos.Portuguese/train-*
- split: validation
path: udpos.Portuguese/validation-*
- split: test
path: udpos.Portuguese/test-*
- config_name: udpos.Russian
data_files:
- split: train
path: udpos.Russian/train-*
- split: validation
path: udpos.Russian/validation-*
- split: test
path: udpos.Russian/test-*
- config_name: udpos.Spanish
data_files:
- split: train
path: udpos.Spanish/train-*
- split: validation
path: udpos.Spanish/validation-*
- split: test
path: udpos.Spanish/test-*
- config_name: udpos.Tagalog
data_files:
- split: test
path: udpos.Tagalog/test-*
- config_name: udpos.Tamil
data_files:
- split: train
path: udpos.Tamil/train-*
- split: validation
path: udpos.Tamil/validation-*
- split: test
path: udpos.Tamil/test-*
- config_name: udpos.Telugu
data_files:
- split: train
path: udpos.Telugu/train-*
- split: validation
path: udpos.Telugu/validation-*
- split: test
path: udpos.Telugu/test-*
- config_name: udpos.Thai
data_files:
- split: test
path: udpos.Thai/test-*
- config_name: udpos.Turkish
data_files:
- split: train
path: udpos.Turkish/train-*
- split: validation
path: udpos.Turkish/validation-*
- split: test
path: udpos.Turkish/test-*
- config_name: udpos.Urdu
data_files:
- split: train
path: udpos.Urdu/train-*
- split: validation
path: udpos.Urdu/validation-*
- split: test
path: udpos.Urdu/test-*
- config_name: udpos.Vietnamese
data_files:
- split: train
path: udpos.Vietnamese/train-*
- split: validation
path: udpos.Vietnamese/validation-*
- split: test
path: udpos.Vietnamese/test-*
- config_name: udpos.Yoruba
data_files:
- split: test
path: udpos.Yoruba/test-*
---
# Dataset Card for "xtreme"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/google-research/xtreme](https://github.com/google-research/xtreme)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization](https://arxiv.org/abs/2003.11080)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 15.88 GB
- **Size of the generated dataset:** 1.08 GB
- **Total amount of disk used:** 16.96 GB
### Dataset Summary
The Cross-lingual Natural Language Inference (XNLI) corpus is a crowd-sourced collection of 5,000 test and
2,500 dev pairs for the MultiNLI corpus. The pairs are annotated with textual entailment and translated into
14 languages: French, Spanish, German, Greek, Bulgarian, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese,
Hindi, Swahili and Urdu. This results in 112.5k annotated pairs. Each premise can be paired with the
corresponding hypothesis in the 15 languages, summing up to more than 1.5M combinations. The corpus is designed to
evaluate how well models perform inference in any language (including low-resource ones like Swahili or Urdu) when only
English NLI data is available at training time. One solution is cross-lingual sentence encoding, for which XNLI
is an evaluation benchmark.
The Cross-lingual TRansfer Evaluation of Multilingual Encoders (XTREME) benchmark evaluates the cross-lingual
generalization ability of pre-trained multilingual models. It covers 40 typologically diverse languages
(spanning 12 language families) and includes nine tasks that collectively require reasoning about different levels of
syntax and semantics. The languages in XTREME are selected to maximize language diversity, coverage in existing tasks,
and availability of training data. Among these are many under-studied languages, such as the Dravidian languages Tamil
(spoken in southern India, Sri Lanka, and Singapore), Telugu and Malayalam (spoken mainly in southern India), and the
Niger-Congo languages Swahili and Yoruba, spoken in Africa.
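Each XTREME task is exposed as a separate configuration whose name matches the `config_name` entries in the metadata above. A minimal sketch of loading one configuration with the 🤗 `datasets` library:
```python
from datasets import load_dataset

# Any config_name listed in the metadata works here, e.g. "XNLI",
# "MLQA.ar.ar", "tatoeba.fra", or "udpos.English".
xnli = load_dataset("xtreme", "XNLI")
print(xnli["validation"][0])
```
Configurations differ in which splits they provide (for example, the XQuAD configs ship only a `validation` split), so check the split lists above before indexing.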
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### MLQA.ar.ar
- **Size of downloaded dataset files:** 75.72 MB
- **Size of the generated dataset:** 9.20 MB
- **Total amount of disk used:** 84.91 MB
An example of 'validation' looks as follows.
```
```
#### MLQA.ar.de
- **Size of downloaded dataset files:** 75.72 MB
- **Size of the generated dataset:** 2.55 MB
- **Total amount of disk used:** 78.27 MB
An example of 'validation' looks as follows.
```
```
#### MLQA.ar.en
- **Size of downloaded dataset files:** 75.72 MB
- **Size of the generated dataset:** 9.04 MB
- **Total amount of disk used:** 84.76 MB
An example of 'validation' looks as follows.
```
```
#### MLQA.ar.es
- **Size of downloaded dataset files:** 75.72 MB
- **Size of the generated dataset:** 3.27 MB
- **Total amount of disk used:** 78.99 MB
An example of 'validation' looks as follows.
```
```
#### MLQA.ar.hi
- **Size of downloaded dataset files:** 75.72 MB
- **Size of the generated dataset:** 3.32 MB
- **Total amount of disk used:** 79.04 MB
An example of 'validation' looks as follows.
```
```
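The blocks above are left empty in the auto-generated card. For orientation only, a hypothetical MLQA-style record would have the following shape; the field names follow the schema in the metadata, while the values are invented placeholders:
```python
# Hypothetical instance; values are placeholders, not real data.
{
    "id": "0123456789abcdef",
    "title": "...",
    "context": "...",
    "question": "...",
    "answers": {"answer_start": [42], "text": ["..."]},
}
```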
### Data Fields
The data fields are the same across all splits.
#### MLQA.ar.ar
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
  - `answer_start`: an `int32` feature.
- `text`: a `string` feature.
#### MLQA.ar.de
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
  - `answer_start`: an `int32` feature.
- `text`: a `string` feature.
#### MLQA.ar.en
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
  - `answer_start`: an `int32` feature.
- `text`: a `string` feature.
#### MLQA.ar.es
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `answer_start`: a `int32` feature.
- `text`: a `string` feature.
#### MLQA.ar.hi
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `answer_start`: a `int32` feature.
- `text`: a `string` feature.
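All configs share this schema, so a single access pattern covers them. The sketch below (same `xtreme` identifier
assumption as above) shows how the nested `answers` feature is indexed:
```python
from datasets import load_dataset

# Inspect the answer span of the first validation example of one config.
ds = load_dataset("xtreme", "MLQA.ar.en", split="validation")
example = ds[0]
start = example["answers"]["answer_start"][0]  # int32 character offset into `context`
text = example["answers"]["text"][0]           # the answer string itself
# Sanity check: the offset should locate the answer inside the context.
print(example["context"][start:start + len(text)] == text)
print(example["question"], "->", text)
```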
### Data Splits
| name |validation|test|
|----------|---------:|---:|
|MLQA.ar.ar| 517|5335|
|MLQA.ar.de| 207|1649|
|MLQA.ar.en| 517|5335|
|MLQA.ar.es| 161|1978|
|MLQA.ar.hi| 186|1831|
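The counts above can be reproduced by iterating over the Arabic configs; again a sketch under the same `xtreme`
identifier assumption:
```python
from datasets import load_dataset

# Recompute the split sizes shown in the table above.
for cfg in ["MLQA.ar.ar", "MLQA.ar.de", "MLQA.ar.en", "MLQA.ar.es", "MLQA.ar.hi"]:
    ds = load_dataset("xtreme", cfg)
    print(cfg, len(ds["validation"]), len(ds["test"]))
```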
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{conneau2018xnli,
author = {Conneau, Alexis
and Rinott, Ruty
and Lample, Guillaume
and Williams, Adina
and Bowman, Samuel R.
and Schwenk, Holger
and Stoyanov, Veselin},
title = {XNLI: Evaluating Cross-lingual Sentence Representations},
booktitle = {Proceedings of the 2018 Conference on Empirical Methods
in Natural Language Processing},
year = {2018},
publisher = {Association for Computational Linguistics},
location = {Brussels, Belgium},
}
@article{hu2020xtreme,
author = {Junjie Hu and Sebastian Ruder and Aditya Siddhant and Graham Neubig and Orhan Firat and Melvin Johnson},
title = {XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization},
journal = {CoRR},
volume = {abs/2003.11080},
year = {2020},
archivePrefix = {arXiv},
eprint = {2003.11080}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@jplu](https://github.com/jplu), [@lewtun](https://github.com/lewtun), [@lvwerra](https://github.com/lvwerra), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. |
unimelb-nlp/wikiann | unimelb-nlp | 2024-02-22T14:32:02Z | 109,759 | 106 | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:original",
"language:ace",
"language:af",
"language:als",
"language:am",
"language:an",
"language:ang",
"language:ar",
"language:arc",
"language:arz",
"language:as",
"language:ast",
"language:ay",
"language:az",
"language:ba",
"language:bar",
"language:be",
"language:bg",
"language:bh",
"language:bn",
"language:bo",
"language:br",
"language:bs",
"language:ca",
"language:cbk",
"language:cdo",
"language:ce",
"language:ceb",
"language:ckb",
"language:co",
"language:crh",
"language:cs",
"language:csb",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:diq",
"language:dv",
"language:el",
"language:eml",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:ext",
"language:fa",
"language:fi",
"language:fo",
"language:fr",
"language:frr",
"language:fur",
"language:fy",
"language:ga",
"language:gan",
"language:gd",
"language:gl",
"language:gn",
"language:gu",
"language:hak",
"language:he",
"language:hi",
"language:hr",
"language:hsb",
"language:hu",
"language:hy",
"language:ia",
"language:id",
"language:ig",
"language:ilo",
"language:io",
"language:is",
"language:it",
"language:ja",
"language:jbo",
"language:jv",
"language:ka",
"language:kk",
"language:km",
"language:kn",
"language:ko",
"language:ksh",
"language:ku",
"language:ky",
"language:la",
"language:lb",
"language:li",
"language:lij",
"language:lmo",
"language:ln",
"language:lt",
"language:lv",
"language:lzh",
"language:mg",
"language:mhr",
"language:mi",
"language:min",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:ms",
"language:mt",
"language:mwl",
"language:my",
"language:mzn",
"language:nan",
"language:nap",
"language:nds",
"language:ne",
"language:nl",
"language:nn",
"language:no",
"language:nov",
"language:oc",
"language:or",
"language:os",
"language:pa",
"language:pdc",
"language:pl",
"language:pms",
"language:pnb",
"language:ps",
"language:pt",
"language:qu",
"language:rm",
"language:ro",
"language:ru",
"language:rw",
"language:sa",
"language:sah",
"language:scn",
"language:sco",
"language:sd",
"language:sgs",
"language:sh",
"language:si",
"language:sk",
"language:sl",
"language:so",
"language:sq",
"language:sr",
"language:su",
"language:sv",
"language:sw",
"language:szl",
"language:ta",
"language:te",
"language:tg",
"language:th",
"language:tk",
"language:tl",
"language:tr",
"language:tt",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:vec",
"language:vep",
"language:vi",
"language:vls",
"language:vo",
"language:vro",
"language:wa",
"language:war",
"language:wuu",
"language:xmf",
"language:yi",
"language:yo",
"language:yue",
"language:zea",
"language:zh",
"license:unknown",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1902.00193",
"region:us"
] | [
"token-classification"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
language:
- ace
- af
- als
- am
- an
- ang
- ar
- arc
- arz
- as
- ast
- ay
- az
- ba
- bar
- be
- bg
- bh
- bn
- bo
- br
- bs
- ca
- cbk
- cdo
- ce
- ceb
- ckb
- co
- crh
- cs
- csb
- cv
- cy
- da
- de
- diq
- dv
- el
- eml
- en
- eo
- es
- et
- eu
- ext
- fa
- fi
- fo
- fr
- frr
- fur
- fy
- ga
- gan
- gd
- gl
- gn
- gu
- hak
- he
- hi
- hr
- hsb
- hu
- hy
- ia
- id
- ig
- ilo
- io
- is
- it
- ja
- jbo
- jv
- ka
- kk
- km
- kn
- ko
- ksh
- ku
- ky
- la
- lb
- li
- lij
- lmo
- ln
- lt
- lv
- lzh
- mg
- mhr
- mi
- min
- mk
- ml
- mn
- mr
- ms
- mt
- mwl
- my
- mzn
- nan
- nap
- nds
- ne
- nl
- nn
- 'no'
- nov
- oc
- or
- os
- pa
- pdc
- pl
- pms
- pnb
- ps
- pt
- qu
- rm
- ro
- ru
- rw
- sa
- sah
- scn
- sco
- sd
- sgs
- sh
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- szl
- ta
- te
- tg
- th
- tk
- tl
- tr
- tt
- ug
- uk
- ur
- uz
- vec
- vep
- vi
- vls
- vo
- vro
- wa
- war
- wuu
- xmf
- yi
- yo
- yue
- zea
- zh
license:
- unknown
multilinguality:
- multilingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: wikiann-1
pretty_name: WikiANN
config_names:
- 'no'
- ace
- af
- als
- am
- an
- ang
- ar
- arc
- arz
- as
- ast
- ay
- az
- ba
- bar
- be
- bg
- bh
- bn
- bo
- br
- bs
- ca
- cdo
- ce
- ceb
- ckb
- co
- crh
- cs
- csb
- cv
- cy
- da
- de
- diq
- dv
- el
- en
- eo
- es
- et
- eu
- ext
- fa
- fi
- fo
- fr
- frr
- fur
- fy
- ga
- gan
- gd
- gl
- gn
- gu
- hak
- he
- hi
- hr
- hsb
- hu
- hy
- ia
- id
- ig
- ilo
- io
- is
- it
- ja
- jbo
- jv
- ka
- kk
- km
- kn
- ko
- ksh
- ku
- ky
- la
- lb
- li
- lij
- lmo
- ln
- lt
- lv
- mg
- mhr
- mi
- min
- mk
- ml
- mn
- mr
- ms
- mt
- mwl
- my
- mzn
- nap
- nds
- ne
- nl
- nn
- nov
- oc
- or
- os
- other-bat-smg
- other-be-x-old
- other-cbk-zam
- other-eml
- other-fiu-vro
- other-map-bms
- other-simple
- other-zh-classical
- other-zh-min-nan
- other-zh-yue
- pa
- pdc
- pl
- pms
- pnb
- ps
- pt
- qu
- rm
- ro
- ru
- rw
- sa
- sah
- scn
- sco
- sd
- sh
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- szl
- ta
- te
- tg
- th
- tk
- tl
- tr
- tt
- ug
- uk
- ur
- uz
- vec
- vep
- vi
- vls
- vo
- wa
- war
- wuu
- xmf
- yi
- yo
- zea
- zh
language_bcp47:
- be-tarask
- en-basiceng
- jv-x-bms
dataset_info:
- config_name: ace
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 22425
num_examples: 100
- name: test
num_bytes: 25724
num_examples: 100
- name: train
num_bytes: 23203
num_examples: 100
download_size: 27835
dataset_size: 71352
- config_name: af
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 299109
num_examples: 1000
- name: test
num_bytes: 295821
num_examples: 1000
- name: train
num_bytes: 1521576
num_examples: 5000
download_size: 528580
dataset_size: 2116506
- config_name: als
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 34290
num_examples: 100
- name: test
num_bytes: 36317
num_examples: 100
- name: train
num_bytes: 34940
num_examples: 100
download_size: 40186
dataset_size: 105547
- config_name: am
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 21401
num_examples: 100
- name: test
num_bytes: 23783
num_examples: 100
- name: train
num_bytes: 22186
num_examples: 100
download_size: 30287
dataset_size: 67370
- config_name: an
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 180581
num_examples: 1000
- name: test
num_bytes: 174964
num_examples: 1000
- name: train
num_bytes: 180939
num_examples: 1000
download_size: 128283
dataset_size: 536484
- config_name: ang
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 21897
num_examples: 100
- name: test
num_bytes: 24495
num_examples: 100
- name: train
num_bytes: 23268
num_examples: 100
download_size: 30667
dataset_size: 69660
- config_name: ar
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2325660
num_examples: 10000
- name: test
num_bytes: 2334636
num_examples: 10000
- name: train
num_bytes: 4671613
num_examples: 20000
download_size: 2582112
dataset_size: 9331909
- config_name: arc
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 15698
num_examples: 100
- name: test
num_bytes: 16613
num_examples: 100
- name: train
num_bytes: 18508
num_examples: 100
download_size: 22858
dataset_size: 50819
- config_name: arz
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 26581
num_examples: 100
- name: test
num_bytes: 25635
num_examples: 100
- name: train
num_bytes: 26347
num_examples: 100
download_size: 32301
dataset_size: 78563
- config_name: as
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 25708
num_examples: 100
- name: test
num_bytes: 23322
num_examples: 100
- name: train
num_bytes: 24956
num_examples: 100
download_size: 30404
dataset_size: 73986
- config_name: ast
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 217449
num_examples: 1000
- name: test
num_bytes: 220846
num_examples: 1000
- name: train
num_bytes: 228210
num_examples: 1000
download_size: 157002
dataset_size: 666505
- config_name: ay
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 11656
num_examples: 100
- name: test
num_bytes: 13351
num_examples: 100
- name: train
num_bytes: 12568
num_examples: 100
download_size: 16901
dataset_size: 37575
- config_name: az
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 272038
num_examples: 1000
- name: test
num_bytes: 267907
num_examples: 1000
- name: train
num_bytes: 2645524
num_examples: 10000
download_size: 931014
dataset_size: 3185469
- config_name: ba
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 29234
num_examples: 100
- name: test
num_bytes: 30474
num_examples: 100
- name: train
num_bytes: 31095
num_examples: 100
download_size: 36848
dataset_size: 90803
- config_name: bar
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 17346
num_examples: 100
- name: test
num_bytes: 17811
num_examples: 100
- name: train
num_bytes: 16768
num_examples: 100
download_size: 21987
dataset_size: 51925
- config_name: bat-smg
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 26468
num_examples: 100
- name: test
num_bytes: 26065
num_examples: 100
- name: train
num_bytes: 24649
num_examples: 100
download_size: 31533
dataset_size: 77182
- config_name: be
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 262014
num_examples: 1000
- name: test
num_bytes: 266076
num_examples: 1000
- name: train
num_bytes: 3983266
num_examples: 15000
download_size: 1283568
dataset_size: 4511356
- config_name: be-x-old
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 342626
num_examples: 1000
- name: test
num_bytes: 337571
num_examples: 1000
- name: train
num_bytes: 1704228
num_examples: 5000
download_size: 586037
dataset_size: 2384425
- config_name: bg
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2840879
num_examples: 10000
- name: test
num_bytes: 2830185
num_examples: 10000
- name: train
num_bytes: 5665007
num_examples: 20000
download_size: 3010319
dataset_size: 11336071
- config_name: bh
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 33654
num_examples: 100
- name: test
num_bytes: 30664
num_examples: 100
- name: train
num_bytes: 36346
num_examples: 100
download_size: 34563
dataset_size: 100664
- config_name: bn
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 238418
num_examples: 1000
- name: test
num_bytes: 237190
num_examples: 1000
- name: train
num_bytes: 2351563
num_examples: 10000
download_size: 667399
dataset_size: 2827171
- config_name: bo
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 22660
num_examples: 100
- name: test
num_bytes: 15409
num_examples: 100
- name: train
num_bytes: 14057
num_examples: 100
download_size: 26274
dataset_size: 52126
- config_name: br
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 206811
num_examples: 1000
- name: test
num_bytes: 222055
num_examples: 1000
- name: train
num_bytes: 221467
num_examples: 1000
download_size: 193001
dataset_size: 650333
- config_name: bs
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 246350
num_examples: 1000
- name: test
num_bytes: 247303
num_examples: 1000
- name: train
num_bytes: 3669290
num_examples: 15000
download_size: 1145992
dataset_size: 4162943
- config_name: ca
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 1836291
num_examples: 10000
- name: test
num_bytes: 1847718
num_examples: 10000
- name: train
num_bytes: 3689286
num_examples: 20000
download_size: 2392551
dataset_size: 7373295
- config_name: cbk-zam
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 47032
num_examples: 100
- name: test
num_bytes: 47249
num_examples: 100
- name: train
num_bytes: 52517
num_examples: 100
download_size: 37209
dataset_size: 146798
- config_name: cdo
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 37451
num_examples: 100
- name: test
num_bytes: 34291
num_examples: 100
- name: train
num_bytes: 36176
num_examples: 100
download_size: 34997
dataset_size: 107918
- config_name: ce
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 40275
num_examples: 100
- name: test
num_bytes: 38612
num_examples: 100
- name: train
num_bytes: 38256
num_examples: 100
download_size: 34386
dataset_size: 117143
- config_name: ceb
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 22761
num_examples: 100
- name: test
num_bytes: 23922
num_examples: 100
- name: train
num_bytes: 21337
num_examples: 100
download_size: 27030
dataset_size: 68020
- config_name: ckb
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 214203
num_examples: 1000
- name: test
num_bytes: 211960
num_examples: 1000
- name: train
num_bytes: 217038
num_examples: 1000
download_size: 148534
dataset_size: 643201
- config_name: co
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 15940
num_examples: 100
- name: test
num_bytes: 15852
num_examples: 100
- name: train
num_bytes: 18004
num_examples: 100
download_size: 25539
dataset_size: 49796
- config_name: crh
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 20202
num_examples: 100
- name: test
num_bytes: 23851
num_examples: 100
- name: train
num_bytes: 23308
num_examples: 100
download_size: 29468
dataset_size: 67361
- config_name: cs
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2456626
num_examples: 10000
- name: test
num_bytes: 2458127
num_examples: 10000
- name: train
num_bytes: 4944702
num_examples: 20000
download_size: 3028120
dataset_size: 9859455
- config_name: csb
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 28813
num_examples: 100
- name: test
num_bytes: 27812
num_examples: 100
- name: train
num_bytes: 31612
num_examples: 100
download_size: 35313
dataset_size: 88237
- config_name: cv
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 24759
num_examples: 100
- name: test
num_bytes: 26375
num_examples: 100
- name: train
num_bytes: 26928
num_examples: 100
download_size: 32018
dataset_size: 78062
- config_name: cy
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 228558
num_examples: 1000
- name: test
num_bytes: 233841
num_examples: 1000
- name: train
num_bytes: 2337088
num_examples: 10000
download_size: 630636
dataset_size: 2799487
- config_name: da
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2422948
num_examples: 10000
- name: test
num_bytes: 2432296
num_examples: 10000
- name: train
num_bytes: 4882166
num_examples: 20000
download_size: 2903455
dataset_size: 9737410
- config_name: de
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2754522
num_examples: 10000
- name: test
num_bytes: 2750968
num_examples: 10000
- name: train
num_bytes: 5510585
num_examples: 20000
download_size: 3340116
dataset_size: 11016075
- config_name: diq
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 24119
num_examples: 100
- name: test
num_bytes: 22448
num_examples: 100
- name: train
num_bytes: 24103
num_examples: 100
download_size: 29511
dataset_size: 70670
- config_name: dv
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 30294
num_examples: 100
- name: test
num_bytes: 27251
num_examples: 100
- name: train
num_bytes: 31005
num_examples: 100
download_size: 36181
dataset_size: 88550
- config_name: el
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 3027934
num_examples: 10000
- name: test
num_bytes: 3034301
num_examples: 10000
- name: train
num_bytes: 6046582
num_examples: 20000
download_size: 3212871
dataset_size: 12108817
- config_name: eml
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 30022
num_examples: 100
- name: test
num_bytes: 35852
num_examples: 100
- name: train
num_bytes: 30764
num_examples: 100
download_size: 35629
dataset_size: 96638
- config_name: en
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2336325
num_examples: 10000
- name: test
num_bytes: 2330217
num_examples: 10000
- name: train
num_bytes: 4649545
num_examples: 20000
download_size: 2990984
dataset_size: 9316087
- config_name: eo
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 1968662
num_examples: 10000
- name: test
num_bytes: 1961458
num_examples: 10000
- name: train
num_bytes: 2952554
num_examples: 15000
download_size: 2147812
dataset_size: 6882674
- config_name: es
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 1976907
num_examples: 10000
- name: test
num_bytes: 1986636
num_examples: 10000
- name: train
num_bytes: 3972236
num_examples: 20000
download_size: 2431958
dataset_size: 7935779
- config_name: et
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2403333
num_examples: 10000
- name: test
num_bytes: 2392396
num_examples: 10000
- name: train
num_bytes: 3579208
num_examples: 15000
download_size: 2678718
dataset_size: 8374937
- config_name: eu
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2677008
num_examples: 10000
- name: test
num_bytes: 2628923
num_examples: 10000
- name: train
num_bytes: 2672325
num_examples: 10000
download_size: 1985966
dataset_size: 7978256
- config_name: ext
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 30793
num_examples: 100
- name: test
num_bytes: 29455
num_examples: 100
- name: train
num_bytes: 23082
num_examples: 100
download_size: 32111
dataset_size: 83330
- config_name: fa
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2328612
num_examples: 10000
- name: test
num_bytes: 2314659
num_examples: 10000
- name: train
num_bytes: 4618042
num_examples: 20000
download_size: 2385463
dataset_size: 9261313
- config_name: fi
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2500558
num_examples: 10000
- name: test
num_bytes: 2505133
num_examples: 10000
- name: train
num_bytes: 5020599
num_examples: 20000
download_size: 3407283
dataset_size: 10026290
- config_name: fiu-vro
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 27644
num_examples: 100
- name: test
num_bytes: 27700
num_examples: 100
- name: train
num_bytes: 28661
num_examples: 100
download_size: 31399
dataset_size: 84005
- config_name: fo
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 26066
num_examples: 100
- name: test
num_bytes: 23503
num_examples: 100
- name: train
num_bytes: 26150
num_examples: 100
download_size: 33699
dataset_size: 75719
- config_name: fr
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2057976
num_examples: 10000
- name: test
num_bytes: 2073565
num_examples: 10000
- name: train
num_bytes: 4123939
num_examples: 20000
download_size: 2694633
dataset_size: 8255480
- config_name: frr
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 15855
num_examples: 100
- name: test
num_bytes: 15708
num_examples: 100
- name: train
num_bytes: 16626
num_examples: 100
download_size: 25130
dataset_size: 48189
- config_name: fur
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 25236
num_examples: 100
- name: test
num_bytes: 30534
num_examples: 100
- name: train
num_bytes: 33626
num_examples: 100
download_size: 32754
dataset_size: 89396
- config_name: fy
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 226408
num_examples: 1000
- name: test
num_bytes: 229672
num_examples: 1000
- name: train
num_bytes: 222985
num_examples: 1000
download_size: 182402
dataset_size: 679065
- config_name: ga
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 234064
num_examples: 1000
- name: test
num_bytes: 235055
num_examples: 1000
- name: train
num_bytes: 238019
num_examples: 1000
download_size: 198615
dataset_size: 707138
- config_name: gan
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 17505
num_examples: 100
- name: test
num_bytes: 13851
num_examples: 100
- name: train
num_bytes: 14370
num_examples: 100
download_size: 28600
dataset_size: 45726
- config_name: gd
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 23202
num_examples: 100
- name: test
num_bytes: 20280
num_examples: 100
- name: train
num_bytes: 20126
num_examples: 100
download_size: 29305
dataset_size: 63608
- config_name: gl
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2029655
num_examples: 10000
- name: test
num_bytes: 2031122
num_examples: 10000
- name: train
num_bytes: 3030937
num_examples: 15000
download_size: 2045672
dataset_size: 7091714
- config_name: gn
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 29104
num_examples: 100
- name: test
num_bytes: 24235
num_examples: 100
- name: train
num_bytes: 28192
num_examples: 100
download_size: 35600
dataset_size: 81531
- config_name: gu
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 47981
num_examples: 100
- name: test
num_bytes: 45389
num_examples: 100
- name: train
num_bytes: 42597
num_examples: 100
download_size: 44658
dataset_size: 135967
- config_name: hak
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 17949
num_examples: 100
- name: test
num_bytes: 18127
num_examples: 100
- name: train
num_bytes: 16180
num_examples: 100
download_size: 27841
dataset_size: 52256
- config_name: he
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2801364
num_examples: 10000
- name: test
num_bytes: 2785446
num_examples: 10000
- name: train
num_bytes: 5600432
num_examples: 20000
download_size: 3112250
dataset_size: 11187242
- config_name: hi
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 261179
num_examples: 1000
- name: test
num_bytes: 267227
num_examples: 1000
- name: train
num_bytes: 1315801
num_examples: 5000
download_size: 441664
dataset_size: 1844207
- config_name: hr
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2417422
num_examples: 10000
- name: test
num_bytes: 2430412
num_examples: 10000
- name: train
num_bytes: 4877275
num_examples: 20000
download_size: 2965267
dataset_size: 9725109
- config_name: hsb
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 24667
num_examples: 100
- name: test
num_bytes: 24320
num_examples: 100
- name: train
num_bytes: 24200
num_examples: 100
download_size: 31799
dataset_size: 73187
- config_name: hu
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2590088
num_examples: 10000
- name: test
num_bytes: 2626743
num_examples: 10000
- name: train
num_bytes: 5263066
num_examples: 20000
download_size: 3333477
dataset_size: 10479897
- config_name: hy
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 237532
num_examples: 1000
- name: test
num_bytes: 237093
num_examples: 1000
- name: train
num_bytes: 3634009
num_examples: 15000
download_size: 1179988
dataset_size: 4108634
- config_name: ia
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 32036
num_examples: 100
- name: test
num_bytes: 37589
num_examples: 100
- name: train
num_bytes: 32900
num_examples: 100
download_size: 38484
dataset_size: 102525
- config_name: id
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 1901597
num_examples: 10000
- name: test
num_bytes: 1902704
num_examples: 10000
- name: train
num_bytes: 3813991
num_examples: 20000
download_size: 2199732
dataset_size: 7618292
- config_name: ig
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 17693
num_examples: 100
- name: test
num_bytes: 18404
num_examples: 100
- name: train
num_bytes: 15960
num_examples: 100
download_size: 22605
dataset_size: 52057
- config_name: ilo
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 16647
num_examples: 100
- name: test
num_bytes: 17217
num_examples: 100
- name: train
num_bytes: 17124
num_examples: 100
download_size: 23906
dataset_size: 50988
- config_name: io
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 18998
num_examples: 100
- name: test
num_bytes: 17203
num_examples: 100
- name: train
num_bytes: 20753
num_examples: 100
download_size: 27554
dataset_size: 56954
- config_name: is
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 243639
num_examples: 1000
- name: test
num_bytes: 235918
num_examples: 1000
- name: train
num_bytes: 243437
num_examples: 1000
download_size: 210731
dataset_size: 722994
- config_name: it
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2282919
num_examples: 10000
- name: test
num_bytes: 2307590
num_examples: 10000
- name: train
num_bytes: 4633519
num_examples: 20000
download_size: 2818124
dataset_size: 9224028
- config_name: ja
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 6775580
num_examples: 10000
- name: test
num_bytes: 6898510
num_examples: 10000
- name: train
num_bytes: 13578269
num_examples: 20000
download_size: 3415775
dataset_size: 27252359
- config_name: jbo
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 15590
num_examples: 100
- name: test
num_bytes: 19558
num_examples: 100
- name: train
num_bytes: 15042
num_examples: 100
download_size: 22634
dataset_size: 50190
- config_name: jv
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 17663
num_examples: 100
- name: test
num_bytes: 20175
num_examples: 100
- name: train
num_bytes: 19381
num_examples: 100
download_size: 28541
dataset_size: 57219
- config_name: ka
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 3454353
num_examples: 10000
- name: test
num_bytes: 3480842
num_examples: 10000
- name: train
num_bytes: 3427980
num_examples: 10000
download_size: 2588715
dataset_size: 10363175
- config_name: kk
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 286474
num_examples: 1000
- name: test
num_bytes: 284475
num_examples: 1000
- name: train
num_bytes: 287924
num_examples: 1000
download_size: 217890
dataset_size: 858873
- config_name: km
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 29282
num_examples: 100
- name: test
num_bytes: 36073
num_examples: 100
- name: train
num_bytes: 31910
num_examples: 100
download_size: 43075
dataset_size: 97265
- config_name: kn
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 36825
num_examples: 100
- name: test
num_bytes: 32250
num_examples: 100
- name: train
num_bytes: 34318
num_examples: 100
download_size: 43835
dataset_size: 103393
- config_name: ko
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2553040
num_examples: 10000
- name: test
num_bytes: 2547772
num_examples: 10000
- name: train
num_bytes: 5107034
num_examples: 20000
download_size: 3536508
dataset_size: 10207846
- config_name: ksh
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 26310
num_examples: 100
- name: test
num_bytes: 25221
num_examples: 100
- name: train
num_bytes: 25913
num_examples: 100
download_size: 33350
dataset_size: 77444
- config_name: ku
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 22569
num_examples: 100
- name: test
num_bytes: 20767
num_examples: 100
- name: train
num_bytes: 22641
num_examples: 100
download_size: 30470
dataset_size: 65977
- config_name: ky
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 30982
num_examples: 100
- name: test
num_bytes: 31868
num_examples: 100
- name: train
num_bytes: 32740
num_examples: 100
download_size: 41036
dataset_size: 95590
- config_name: la
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 207177
num_examples: 1000
- name: test
num_bytes: 198882
num_examples: 1000
- name: train
num_bytes: 999022
num_examples: 5000
download_size: 367324
dataset_size: 1405081
- config_name: lb
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 253746
num_examples: 1000
- name: test
num_bytes: 249961
num_examples: 1000
- name: train
num_bytes: 1260911
num_examples: 5000
download_size: 477151
dataset_size: 1764618
- config_name: li
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 20173
num_examples: 100
- name: test
num_bytes: 18789
num_examples: 100
- name: train
num_bytes: 20183
num_examples: 100
download_size: 28842
dataset_size: 59145
- config_name: lij
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 27977
num_examples: 100
- name: test
num_bytes: 27854
num_examples: 100
- name: train
num_bytes: 30553
num_examples: 100
download_size: 33981
dataset_size: 86384
- config_name: lmo
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 26547
num_examples: 100
- name: test
num_bytes: 29425
num_examples: 100
- name: train
num_bytes: 24133
num_examples: 100
download_size: 32492
dataset_size: 80105
- config_name: ln
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 21681
num_examples: 100
- name: test
num_bytes: 26975
num_examples: 100
- name: train
num_bytes: 22199
num_examples: 100
download_size: 28691
dataset_size: 70855
- config_name: lt
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2192846
num_examples: 10000
- name: test
num_bytes: 2191241
num_examples: 10000
- name: train
num_bytes: 2199918
num_examples: 10000
download_size: 2138545
dataset_size: 6584005
- config_name: lv
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2173392
num_examples: 10000
- name: test
num_bytes: 2190430
num_examples: 10000
- name: train
num_bytes: 2206915
num_examples: 10000
download_size: 2012494
dataset_size: 6570737
- config_name: map-bms
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 19752
num_examples: 100
- name: test
num_bytes: 20530
num_examples: 100
- name: train
num_bytes: 21611
num_examples: 100
download_size: 25217
dataset_size: 61893
- config_name: mg
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 24833
num_examples: 100
- name: test
num_bytes: 22542
num_examples: 100
- name: train
num_bytes: 25711
num_examples: 100
download_size: 26980
dataset_size: 73086
- config_name: mhr
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 23235
num_examples: 100
- name: test
num_bytes: 23611
num_examples: 100
- name: train
num_bytes: 18620
num_examples: 100
download_size: 29844
dataset_size: 65466
- config_name: mi
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 39371
num_examples: 100
- name: test
num_bytes: 40119
num_examples: 100
- name: train
num_bytes: 37868
num_examples: 100
download_size: 24626
dataset_size: 117358
- config_name: min
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 28691
num_examples: 100
- name: test
num_bytes: 24713
num_examples: 100
- name: train
num_bytes: 26592
num_examples: 100
download_size: 31058
dataset_size: 79996
- config_name: mk
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 333165
num_examples: 1000
- name: test
num_bytes: 337729
num_examples: 1000
- name: train
num_bytes: 3355908
num_examples: 10000
download_size: 825847
dataset_size: 4026802
- config_name: ml
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 362980
num_examples: 1000
- name: test
num_bytes: 349355
num_examples: 1000
- name: train
num_bytes: 3582038
num_examples: 10000
download_size: 1190172
dataset_size: 4294373
- config_name: mn
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 21978
num_examples: 100
- name: test
num_bytes: 23510
num_examples: 100
- name: train
num_bytes: 23216
num_examples: 100
download_size: 32990
dataset_size: 68704
- config_name: mr
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 314830
num_examples: 1000
- name: test
num_bytes: 326262
num_examples: 1000
- name: train
num_bytes: 1598776
num_examples: 5000
download_size: 524029
dataset_size: 2239868
- config_name: ms
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 183916
num_examples: 1000
- name: test
num_bytes: 183511
num_examples: 1000
- name: train
num_bytes: 3699182
num_examples: 20000
download_size: 1077180
dataset_size: 4066609
- config_name: mt
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 24543
num_examples: 100
- name: test
num_bytes: 24634
num_examples: 100
- name: train
num_bytes: 24928
num_examples: 100
download_size: 33526
dataset_size: 74105
- config_name: mwl
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 51959
num_examples: 100
- name: test
num_bytes: 42980
num_examples: 100
- name: train
num_bytes: 44577
num_examples: 100
download_size: 44197
dataset_size: 139516
- config_name: my
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 48925
num_examples: 100
- name: test
num_bytes: 45928
num_examples: 100
- name: train
num_bytes: 41343
num_examples: 100
download_size: 51490
dataset_size: 136196
- config_name: mzn
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 25276
num_examples: 100
- name: test
num_bytes: 25919
num_examples: 100
- name: train
num_bytes: 24813
num_examples: 100
download_size: 29895
dataset_size: 76008
- config_name: nap
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 21518
num_examples: 100
- name: test
num_bytes: 24166
num_examples: 100
- name: train
num_bytes: 26568
num_examples: 100
download_size: 30764
dataset_size: 72252
- config_name: nds
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 28360
num_examples: 100
- name: test
num_bytes: 26543
num_examples: 100
- name: train
num_bytes: 24651
num_examples: 100
download_size: 33734
dataset_size: 79554
- config_name: ne
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 33904
num_examples: 100
- name: test
num_bytes: 33199
num_examples: 100
- name: train
num_bytes: 36145
num_examples: 100
download_size: 37920
dataset_size: 103248
- config_name: nl
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2378052
num_examples: 10000
- name: test
num_bytes: 2403048
num_examples: 10000
- name: train
num_bytes: 4784233
num_examples: 20000
download_size: 2867129
dataset_size: 9565333
- config_name: nn
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 274112
num_examples: 1000
- name: test
num_bytes: 269603
num_examples: 1000
- name: train
num_bytes: 5436129
num_examples: 20000
download_size: 1644504
dataset_size: 5979844
- config_name: 'no'
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2576641
num_examples: 10000
- name: test
num_bytes: 2563531
num_examples: 10000
- name: train
num_bytes: 5139492
num_examples: 20000
download_size: 3063453
dataset_size: 10279664
- config_name: nov
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 14828
num_examples: 100
- name: test
num_bytes: 14802
num_examples: 100
- name: train
num_bytes: 17242
num_examples: 100
download_size: 20235
dataset_size: 46872
- config_name: oc
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 20400
num_examples: 100
- name: test
num_bytes: 18572
num_examples: 100
- name: train
num_bytes: 19291
num_examples: 100
download_size: 29284
dataset_size: 58263
- config_name: or
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 32103
num_examples: 100
- name: test
num_bytes: 29480
num_examples: 100
- name: train
num_bytes: 27794
num_examples: 100
download_size: 31116
dataset_size: 89377
- config_name: os
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 26751
num_examples: 100
- name: test
num_bytes: 25967
num_examples: 100
- name: train
num_bytes: 26005
num_examples: 100
download_size: 32948
dataset_size: 78723
- config_name: pa
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 25202
num_examples: 100
- name: test
num_bytes: 23680
num_examples: 100
- name: train
num_bytes: 24143
num_examples: 100
download_size: 31528
dataset_size: 73025
- config_name: pdc
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 24391
num_examples: 100
- name: test
num_bytes: 24646
num_examples: 100
- name: train
num_bytes: 23963
num_examples: 100
download_size: 28409
dataset_size: 73000
- config_name: pl
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2448296
num_examples: 10000
- name: test
num_bytes: 2463755
num_examples: 10000
- name: train
num_bytes: 4851471
num_examples: 20000
download_size: 3300030
dataset_size: 9763522
- config_name: pms
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 28341
num_examples: 100
- name: test
num_bytes: 23987
num_examples: 100
- name: train
num_bytes: 27401
num_examples: 100
download_size: 34986
dataset_size: 79729
- config_name: pnb
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 19042
num_examples: 100
- name: test
num_bytes: 21178
num_examples: 100
- name: train
num_bytes: 19476
num_examples: 100
download_size: 25001
dataset_size: 59696
- config_name: ps
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 49873
num_examples: 100
- name: test
num_bytes: 43593
num_examples: 100
- name: train
num_bytes: 63473
num_examples: 100
download_size: 45676
dataset_size: 156939
- config_name: pt
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 1962117
num_examples: 10000
- name: test
num_bytes: 1946701
num_examples: 10000
- name: train
num_bytes: 3917397
num_examples: 20000
download_size: 2523476
dataset_size: 7826215
- config_name: qu
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 18203
num_examples: 100
- name: test
num_bytes: 17647
num_examples: 100
- name: train
num_bytes: 16961
num_examples: 100
download_size: 26577
dataset_size: 52811
- config_name: rm
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 32748
num_examples: 100
- name: test
num_bytes: 35852
num_examples: 100
- name: train
num_bytes: 30461
num_examples: 100
download_size: 38504
dataset_size: 99061
- config_name: ro
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2063832
num_examples: 10000
- name: test
num_bytes: 2060905
num_examples: 10000
- name: train
num_bytes: 4179813
num_examples: 20000
download_size: 2533230
dataset_size: 8304550
- config_name: ru
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2574518
num_examples: 10000
- name: test
num_bytes: 2597220
num_examples: 10000
- name: train
num_bytes: 5175609
num_examples: 20000
download_size: 3250185
dataset_size: 10347347
- config_name: rw
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 17971
num_examples: 100
- name: test
num_bytes: 14417
num_examples: 100
- name: train
num_bytes: 16750
num_examples: 100
download_size: 25845
dataset_size: 49138
- config_name: sa
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 45693
num_examples: 100
- name: test
num_bytes: 49181
num_examples: 100
- name: train
num_bytes: 52476
num_examples: 100
download_size: 50112
dataset_size: 147350
- config_name: sah
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 27847
num_examples: 100
- name: test
num_bytes: 26825
num_examples: 100
- name: train
num_bytes: 27013
num_examples: 100
download_size: 34322
dataset_size: 81685
- config_name: scn
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 20077
num_examples: 100
- name: test
num_bytes: 17356
num_examples: 100
- name: train
num_bytes: 21004
num_examples: 100
download_size: 28158
dataset_size: 58437
- config_name: sco
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 22187
num_examples: 100
- name: test
num_bytes: 21561
num_examples: 100
- name: train
num_bytes: 20280
num_examples: 100
download_size: 30781
dataset_size: 64028
- config_name: sd
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 51527
num_examples: 100
- name: test
num_bytes: 38506
num_examples: 100
- name: train
num_bytes: 56897
num_examples: 100
download_size: 44883
dataset_size: 146930
- config_name: sh
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 1789890
num_examples: 10000
- name: test
num_bytes: 1791463
num_examples: 10000
- name: train
num_bytes: 3583577
num_examples: 20000
download_size: 2027654
dataset_size: 7164930
- config_name: si
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 30817
num_examples: 100
- name: test
num_bytes: 29313
num_examples: 100
- name: train
num_bytes: 31227
num_examples: 100
download_size: 33979
dataset_size: 91357
- config_name: simple
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 247119
num_examples: 1000
- name: test
num_bytes: 245330
num_examples: 1000
- name: train
num_bytes: 4921860
num_examples: 20000
download_size: 1301730
dataset_size: 5414309
- config_name: sk
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2342033
num_examples: 10000
- name: test
num_bytes: 2334981
num_examples: 10000
- name: train
num_bytes: 4701497
num_examples: 20000
download_size: 2944919
dataset_size: 9378511
- config_name: sl
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2090219
num_examples: 10000
- name: test
num_bytes: 2133463
num_examples: 10000
- name: train
num_bytes: 3158620
num_examples: 15000
download_size: 2146455
dataset_size: 7382302
- config_name: so
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 21836
num_examples: 100
- name: test
num_bytes: 17191
num_examples: 100
- name: train
num_bytes: 23752
num_examples: 100
download_size: 27097
dataset_size: 62779
- config_name: sq
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 210860
num_examples: 1000
- name: test
num_bytes: 209796
num_examples: 1000
- name: train
num_bytes: 1052359
num_examples: 5000
download_size: 366247
dataset_size: 1473015
- config_name: sr
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2548362
num_examples: 10000
- name: test
num_bytes: 2564803
num_examples: 10000
- name: train
num_bytes: 5105513
num_examples: 20000
download_size: 2932854
dataset_size: 10218678
- config_name: su
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 22577
num_examples: 100
- name: test
num_bytes: 21833
num_examples: 100
- name: train
num_bytes: 20811
num_examples: 100
download_size: 30722
dataset_size: 65221
- config_name: sv
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2678644
num_examples: 10000
- name: test
num_bytes: 2719049
num_examples: 10000
- name: train
num_bytes: 5395666
num_examples: 20000
download_size: 2565949
dataset_size: 10793359
- config_name: sw
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 168791
num_examples: 1000
- name: test
num_bytes: 172665
num_examples: 1000
- name: train
num_bytes: 168721
num_examples: 1000
download_size: 135814
dataset_size: 510177
- config_name: szl
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 19369
num_examples: 100
- name: test
num_bytes: 18939
num_examples: 100
- name: train
num_bytes: 17618
num_examples: 100
download_size: 27450
dataset_size: 55926
- config_name: ta
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 354929
num_examples: 1000
- name: test
num_bytes: 357639
num_examples: 1000
- name: train
num_bytes: 5275703
num_examples: 15000
download_size: 1527540
dataset_size: 5988271
- config_name: te
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 356161
num_examples: 1000
- name: test
num_bytes: 359752
num_examples: 1000
- name: train
num_bytes: 358764
num_examples: 1000
download_size: 260846
dataset_size: 1074677
- config_name: tg
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 27102
num_examples: 100
- name: test
num_bytes: 28793
num_examples: 100
- name: train
num_bytes: 27172
num_examples: 100
download_size: 33712
dataset_size: 83067
- config_name: th
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 14189715
num_examples: 10000
- name: test
num_bytes: 14505026
num_examples: 10000
- name: train
num_bytes: 28968860
num_examples: 20000
download_size: 3962089
dataset_size: 57663601
- config_name: tk
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 21583
num_examples: 100
- name: test
num_bytes: 20274
num_examples: 100
- name: train
num_bytes: 19493
num_examples: 100
download_size: 30395
dataset_size: 61350
- config_name: tl
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 148654
num_examples: 1000
- name: test
num_bytes: 152936
num_examples: 1000
- name: train
num_bytes: 1518756
num_examples: 10000
download_size: 521471
dataset_size: 1820346
- config_name: tr
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2280489
num_examples: 10000
- name: test
num_bytes: 2276892
num_examples: 10000
- name: train
num_bytes: 4501856
num_examples: 20000
download_size: 2907624
dataset_size: 9059237
- config_name: tt
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 282507
num_examples: 1000
- name: test
num_bytes: 282663
num_examples: 1000
- name: train
num_bytes: 283364
num_examples: 1000
download_size: 174234
dataset_size: 848534
- config_name: ug
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 35191
num_examples: 100
- name: test
num_bytes: 31101
num_examples: 100
- name: train
num_bytes: 26592
num_examples: 100
download_size: 38383
dataset_size: 92884
- config_name: uk
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 2934869
num_examples: 10000
- name: test
num_bytes: 2928172
num_examples: 10000
- name: train
num_bytes: 5927970
num_examples: 20000
download_size: 3214083
dataset_size: 11791011
- config_name: ur
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 203719
num_examples: 1000
- name: test
num_bytes: 203110
num_examples: 1000
- name: train
num_bytes: 4108651
num_examples: 20000
download_size: 1140630
dataset_size: 4515480
- config_name: uz
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 184597
num_examples: 1000
- name: test
num_bytes: 184685
num_examples: 1000
- name: train
num_bytes: 186077
num_examples: 1000
download_size: 121267
dataset_size: 555359
- config_name: vec
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 19307
num_examples: 100
- name: test
num_bytes: 20226
num_examples: 100
- name: train
num_bytes: 20409
num_examples: 100
download_size: 27538
dataset_size: 59942
- config_name: vep
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 22278
num_examples: 100
- name: test
num_bytes: 21343
num_examples: 100
- name: train
num_bytes: 21359
num_examples: 100
download_size: 29630
dataset_size: 64980
- config_name: vi
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 1944828
num_examples: 10000
- name: test
num_bytes: 1959996
num_examples: 10000
- name: train
num_bytes: 3915888
num_examples: 20000
download_size: 2283112
dataset_size: 7820712
- config_name: vls
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 27867
num_examples: 100
- name: test
num_bytes: 26750
num_examples: 100
- name: train
num_bytes: 26155
num_examples: 100
download_size: 33972
dataset_size: 80772
- config_name: vo
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 14357
num_examples: 100
- name: test
num_bytes: 13973
num_examples: 100
- name: train
num_bytes: 14414
num_examples: 100
download_size: 20368
dataset_size: 42744
- config_name: wa
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 22465
num_examples: 100
- name: test
num_bytes: 21553
num_examples: 100
- name: train
num_bytes: 23044
num_examples: 100
download_size: 28716
dataset_size: 67062
- config_name: war
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 16806
num_examples: 100
- name: test
num_bytes: 19884
num_examples: 100
- name: train
num_bytes: 18801
num_examples: 100
download_size: 26342
dataset_size: 55491
- config_name: wuu
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 15095
num_examples: 100
- name: test
num_bytes: 15039
num_examples: 100
- name: train
num_bytes: 16988
num_examples: 100
download_size: 34843
dataset_size: 47122
- config_name: xmf
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 39951
num_examples: 100
- name: test
num_bytes: 36053
num_examples: 100
- name: train
num_bytes: 31768
num_examples: 100
download_size: 38339
dataset_size: 107772
- config_name: yi
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 25241
num_examples: 100
- name: test
num_bytes: 24977
num_examples: 100
- name: train
num_bytes: 27275
num_examples: 100
download_size: 30693
dataset_size: 77493
- config_name: yo
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 17710
num_examples: 100
- name: test
num_bytes: 17968
num_examples: 100
- name: train
num_bytes: 18956
num_examples: 100
download_size: 26565
dataset_size: 54634
- config_name: zea
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 24888
num_examples: 100
- name: test
num_bytes: 22969
num_examples: 100
- name: train
num_bytes: 21224
num_examples: 100
download_size: 28533
dataset_size: 69081
- config_name: zh
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 4839700
num_examples: 10000
- name: test
num_bytes: 4709430
num_examples: 10000
- name: train
num_bytes: 9524925
num_examples: 20000
download_size: 2896220
dataset_size: 19074055
- config_name: zh-classical
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 59952
num_examples: 100
- name: test
num_bytes: 65857
num_examples: 100
- name: train
num_bytes: 56210
num_examples: 100
download_size: 31946
dataset_size: 182019
- config_name: zh-min-nan
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 24505
num_examples: 100
- name: test
num_bytes: 24298
num_examples: 100
- name: train
num_bytes: 19330
num_examples: 100
download_size: 26515
dataset_size: 68133
- config_name: zh-yue
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: validation
num_bytes: 4934130
num_examples: 10000
- name: test
num_bytes: 4964001
num_examples: 10000
- name: train
num_bytes: 9950573
num_examples: 20000
download_size: 2342825
dataset_size: 19848704
configs:
- config_name: ace
data_files:
- split: validation
path: ace/validation-*
- split: test
path: ace/test-*
- split: train
path: ace/train-*
- config_name: af
data_files:
- split: validation
path: af/validation-*
- split: test
path: af/test-*
- split: train
path: af/train-*
- config_name: als
data_files:
- split: validation
path: als/validation-*
- split: test
path: als/test-*
- split: train
path: als/train-*
- config_name: am
data_files:
- split: validation
path: am/validation-*
- split: test
path: am/test-*
- split: train
path: am/train-*
- config_name: an
data_files:
- split: validation
path: an/validation-*
- split: test
path: an/test-*
- split: train
path: an/train-*
- config_name: ang
data_files:
- split: validation
path: ang/validation-*
- split: test
path: ang/test-*
- split: train
path: ang/train-*
- config_name: ar
data_files:
- split: validation
path: ar/validation-*
- split: test
path: ar/test-*
- split: train
path: ar/train-*
- config_name: arc
data_files:
- split: validation
path: arc/validation-*
- split: test
path: arc/test-*
- split: train
path: arc/train-*
- config_name: arz
data_files:
- split: validation
path: arz/validation-*
- split: test
path: arz/test-*
- split: train
path: arz/train-*
- config_name: as
data_files:
- split: validation
path: as/validation-*
- split: test
path: as/test-*
- split: train
path: as/train-*
- config_name: ast
data_files:
- split: validation
path: ast/validation-*
- split: test
path: ast/test-*
- split: train
path: ast/train-*
- config_name: ay
data_files:
- split: validation
path: ay/validation-*
- split: test
path: ay/test-*
- split: train
path: ay/train-*
- config_name: az
data_files:
- split: validation
path: az/validation-*
- split: test
path: az/test-*
- split: train
path: az/train-*
- config_name: ba
data_files:
- split: validation
path: ba/validation-*
- split: test
path: ba/test-*
- split: train
path: ba/train-*
- config_name: bar
data_files:
- split: validation
path: bar/validation-*
- split: test
path: bar/test-*
- split: train
path: bar/train-*
- config_name: bat-smg
data_files:
- split: validation
path: bat-smg/validation-*
- split: test
path: bat-smg/test-*
- split: train
path: bat-smg/train-*
- config_name: be
data_files:
- split: validation
path: be/validation-*
- split: test
path: be/test-*
- split: train
path: be/train-*
- config_name: be-x-old
data_files:
- split: validation
path: be-x-old/validation-*
- split: test
path: be-x-old/test-*
- split: train
path: be-x-old/train-*
- config_name: bg
data_files:
- split: validation
path: bg/validation-*
- split: test
path: bg/test-*
- split: train
path: bg/train-*
- config_name: bh
data_files:
- split: validation
path: bh/validation-*
- split: test
path: bh/test-*
- split: train
path: bh/train-*
- config_name: bn
data_files:
- split: validation
path: bn/validation-*
- split: test
path: bn/test-*
- split: train
path: bn/train-*
- config_name: bo
data_files:
- split: validation
path: bo/validation-*
- split: test
path: bo/test-*
- split: train
path: bo/train-*
- config_name: br
data_files:
- split: validation
path: br/validation-*
- split: test
path: br/test-*
- split: train
path: br/train-*
- config_name: bs
data_files:
- split: validation
path: bs/validation-*
- split: test
path: bs/test-*
- split: train
path: bs/train-*
- config_name: ca
data_files:
- split: validation
path: ca/validation-*
- split: test
path: ca/test-*
- split: train
path: ca/train-*
- config_name: cbk-zam
data_files:
- split: validation
path: cbk-zam/validation-*
- split: test
path: cbk-zam/test-*
- split: train
path: cbk-zam/train-*
- config_name: cdo
data_files:
- split: validation
path: cdo/validation-*
- split: test
path: cdo/test-*
- split: train
path: cdo/train-*
- config_name: ce
data_files:
- split: validation
path: ce/validation-*
- split: test
path: ce/test-*
- split: train
path: ce/train-*
- config_name: ceb
data_files:
- split: validation
path: ceb/validation-*
- split: test
path: ceb/test-*
- split: train
path: ceb/train-*
- config_name: ckb
data_files:
- split: validation
path: ckb/validation-*
- split: test
path: ckb/test-*
- split: train
path: ckb/train-*
- config_name: co
data_files:
- split: validation
path: co/validation-*
- split: test
path: co/test-*
- split: train
path: co/train-*
- config_name: crh
data_files:
- split: validation
path: crh/validation-*
- split: test
path: crh/test-*
- split: train
path: crh/train-*
- config_name: cs
data_files:
- split: validation
path: cs/validation-*
- split: test
path: cs/test-*
- split: train
path: cs/train-*
- config_name: csb
data_files:
- split: validation
path: csb/validation-*
- split: test
path: csb/test-*
- split: train
path: csb/train-*
- config_name: cv
data_files:
- split: validation
path: cv/validation-*
- split: test
path: cv/test-*
- split: train
path: cv/train-*
- config_name: cy
data_files:
- split: validation
path: cy/validation-*
- split: test
path: cy/test-*
- split: train
path: cy/train-*
- config_name: da
data_files:
- split: validation
path: da/validation-*
- split: test
path: da/test-*
- split: train
path: da/train-*
- config_name: de
data_files:
- split: validation
path: de/validation-*
- split: test
path: de/test-*
- split: train
path: de/train-*
- config_name: diq
data_files:
- split: validation
path: diq/validation-*
- split: test
path: diq/test-*
- split: train
path: diq/train-*
- config_name: dv
data_files:
- split: validation
path: dv/validation-*
- split: test
path: dv/test-*
- split: train
path: dv/train-*
- config_name: el
data_files:
- split: validation
path: el/validation-*
- split: test
path: el/test-*
- split: train
path: el/train-*
- config_name: eml
data_files:
- split: validation
path: eml/validation-*
- split: test
path: eml/test-*
- split: train
path: eml/train-*
- config_name: en
data_files:
- split: validation
path: en/validation-*
- split: test
path: en/test-*
- split: train
path: en/train-*
- config_name: eo
data_files:
- split: validation
path: eo/validation-*
- split: test
path: eo/test-*
- split: train
path: eo/train-*
- config_name: es
data_files:
- split: validation
path: es/validation-*
- split: test
path: es/test-*
- split: train
path: es/train-*
- config_name: et
data_files:
- split: validation
path: et/validation-*
- split: test
path: et/test-*
- split: train
path: et/train-*
- config_name: eu
data_files:
- split: validation
path: eu/validation-*
- split: test
path: eu/test-*
- split: train
path: eu/train-*
- config_name: ext
data_files:
- split: validation
path: ext/validation-*
- split: test
path: ext/test-*
- split: train
path: ext/train-*
- config_name: fa
data_files:
- split: validation
path: fa/validation-*
- split: test
path: fa/test-*
- split: train
path: fa/train-*
- config_name: fi
data_files:
- split: validation
path: fi/validation-*
- split: test
path: fi/test-*
- split: train
path: fi/train-*
- config_name: fiu-vro
data_files:
- split: validation
path: fiu-vro/validation-*
- split: test
path: fiu-vro/test-*
- split: train
path: fiu-vro/train-*
- config_name: fo
data_files:
- split: validation
path: fo/validation-*
- split: test
path: fo/test-*
- split: train
path: fo/train-*
- config_name: fr
data_files:
- split: validation
path: fr/validation-*
- split: test
path: fr/test-*
- split: train
path: fr/train-*
- config_name: frr
data_files:
- split: validation
path: frr/validation-*
- split: test
path: frr/test-*
- split: train
path: frr/train-*
- config_name: fur
data_files:
- split: validation
path: fur/validation-*
- split: test
path: fur/test-*
- split: train
path: fur/train-*
- config_name: fy
data_files:
- split: validation
path: fy/validation-*
- split: test
path: fy/test-*
- split: train
path: fy/train-*
- config_name: ga
data_files:
- split: validation
path: ga/validation-*
- split: test
path: ga/test-*
- split: train
path: ga/train-*
- config_name: gan
data_files:
- split: validation
path: gan/validation-*
- split: test
path: gan/test-*
- split: train
path: gan/train-*
- config_name: gd
data_files:
- split: validation
path: gd/validation-*
- split: test
path: gd/test-*
- split: train
path: gd/train-*
- config_name: gl
data_files:
- split: validation
path: gl/validation-*
- split: test
path: gl/test-*
- split: train
path: gl/train-*
- config_name: gn
data_files:
- split: validation
path: gn/validation-*
- split: test
path: gn/test-*
- split: train
path: gn/train-*
- config_name: gu
data_files:
- split: validation
path: gu/validation-*
- split: test
path: gu/test-*
- split: train
path: gu/train-*
- config_name: hak
data_files:
- split: validation
path: hak/validation-*
- split: test
path: hak/test-*
- split: train
path: hak/train-*
- config_name: he
data_files:
- split: validation
path: he/validation-*
- split: test
path: he/test-*
- split: train
path: he/train-*
- config_name: hi
data_files:
- split: validation
path: hi/validation-*
- split: test
path: hi/test-*
- split: train
path: hi/train-*
- config_name: hr
data_files:
- split: validation
path: hr/validation-*
- split: test
path: hr/test-*
- split: train
path: hr/train-*
- config_name: hsb
data_files:
- split: validation
path: hsb/validation-*
- split: test
path: hsb/test-*
- split: train
path: hsb/train-*
- config_name: hu
data_files:
- split: validation
path: hu/validation-*
- split: test
path: hu/test-*
- split: train
path: hu/train-*
- config_name: hy
data_files:
- split: validation
path: hy/validation-*
- split: test
path: hy/test-*
- split: train
path: hy/train-*
- config_name: ia
data_files:
- split: validation
path: ia/validation-*
- split: test
path: ia/test-*
- split: train
path: ia/train-*
- config_name: id
data_files:
- split: validation
path: id/validation-*
- split: test
path: id/test-*
- split: train
path: id/train-*
- config_name: ig
data_files:
- split: validation
path: ig/validation-*
- split: test
path: ig/test-*
- split: train
path: ig/train-*
- config_name: ilo
data_files:
- split: validation
path: ilo/validation-*
- split: test
path: ilo/test-*
- split: train
path: ilo/train-*
- config_name: io
data_files:
- split: validation
path: io/validation-*
- split: test
path: io/test-*
- split: train
path: io/train-*
- config_name: is
data_files:
- split: validation
path: is/validation-*
- split: test
path: is/test-*
- split: train
path: is/train-*
- config_name: it
data_files:
- split: validation
path: it/validation-*
- split: test
path: it/test-*
- split: train
path: it/train-*
- config_name: ja
data_files:
- split: validation
path: ja/validation-*
- split: test
path: ja/test-*
- split: train
path: ja/train-*
- config_name: jbo
data_files:
- split: validation
path: jbo/validation-*
- split: test
path: jbo/test-*
- split: train
path: jbo/train-*
- config_name: jv
data_files:
- split: validation
path: jv/validation-*
- split: test
path: jv/test-*
- split: train
path: jv/train-*
- config_name: ka
data_files:
- split: validation
path: ka/validation-*
- split: test
path: ka/test-*
- split: train
path: ka/train-*
- config_name: kk
data_files:
- split: validation
path: kk/validation-*
- split: test
path: kk/test-*
- split: train
path: kk/train-*
- config_name: km
data_files:
- split: validation
path: km/validation-*
- split: test
path: km/test-*
- split: train
path: km/train-*
- config_name: kn
data_files:
- split: validation
path: kn/validation-*
- split: test
path: kn/test-*
- split: train
path: kn/train-*
- config_name: ko
data_files:
- split: validation
path: ko/validation-*
- split: test
path: ko/test-*
- split: train
path: ko/train-*
- config_name: ksh
data_files:
- split: validation
path: ksh/validation-*
- split: test
path: ksh/test-*
- split: train
path: ksh/train-*
- config_name: ku
data_files:
- split: validation
path: ku/validation-*
- split: test
path: ku/test-*
- split: train
path: ku/train-*
- config_name: ky
data_files:
- split: validation
path: ky/validation-*
- split: test
path: ky/test-*
- split: train
path: ky/train-*
- config_name: la
data_files:
- split: validation
path: la/validation-*
- split: test
path: la/test-*
- split: train
path: la/train-*
- config_name: lb
data_files:
- split: validation
path: lb/validation-*
- split: test
path: lb/test-*
- split: train
path: lb/train-*
- config_name: li
data_files:
- split: validation
path: li/validation-*
- split: test
path: li/test-*
- split: train
path: li/train-*
- config_name: lij
data_files:
- split: validation
path: lij/validation-*
- split: test
path: lij/test-*
- split: train
path: lij/train-*
- config_name: lmo
data_files:
- split: validation
path: lmo/validation-*
- split: test
path: lmo/test-*
- split: train
path: lmo/train-*
- config_name: ln
data_files:
- split: validation
path: ln/validation-*
- split: test
path: ln/test-*
- split: train
path: ln/train-*
- config_name: lt
data_files:
- split: validation
path: lt/validation-*
- split: test
path: lt/test-*
- split: train
path: lt/train-*
- config_name: lv
data_files:
- split: validation
path: lv/validation-*
- split: test
path: lv/test-*
- split: train
path: lv/train-*
- config_name: map-bms
data_files:
- split: validation
path: map-bms/validation-*
- split: test
path: map-bms/test-*
- split: train
path: map-bms/train-*
- config_name: mg
data_files:
- split: validation
path: mg/validation-*
- split: test
path: mg/test-*
- split: train
path: mg/train-*
- config_name: mhr
data_files:
- split: validation
path: mhr/validation-*
- split: test
path: mhr/test-*
- split: train
path: mhr/train-*
- config_name: mi
data_files:
- split: validation
path: mi/validation-*
- split: test
path: mi/test-*
- split: train
path: mi/train-*
- config_name: min
data_files:
- split: validation
path: min/validation-*
- split: test
path: min/test-*
- split: train
path: min/train-*
- config_name: mk
data_files:
- split: validation
path: mk/validation-*
- split: test
path: mk/test-*
- split: train
path: mk/train-*
- config_name: ml
data_files:
- split: validation
path: ml/validation-*
- split: test
path: ml/test-*
- split: train
path: ml/train-*
- config_name: mn
data_files:
- split: validation
path: mn/validation-*
- split: test
path: mn/test-*
- split: train
path: mn/train-*
- config_name: mr
data_files:
- split: validation
path: mr/validation-*
- split: test
path: mr/test-*
- split: train
path: mr/train-*
- config_name: ms
data_files:
- split: validation
path: ms/validation-*
- split: test
path: ms/test-*
- split: train
path: ms/train-*
- config_name: mt
data_files:
- split: validation
path: mt/validation-*
- split: test
path: mt/test-*
- split: train
path: mt/train-*
- config_name: mwl
data_files:
- split: validation
path: mwl/validation-*
- split: test
path: mwl/test-*
- split: train
path: mwl/train-*
- config_name: my
data_files:
- split: validation
path: my/validation-*
- split: test
path: my/test-*
- split: train
path: my/train-*
- config_name: mzn
data_files:
- split: validation
path: mzn/validation-*
- split: test
path: mzn/test-*
- split: train
path: mzn/train-*
- config_name: nap
data_files:
- split: validation
path: nap/validation-*
- split: test
path: nap/test-*
- split: train
path: nap/train-*
- config_name: nds
data_files:
- split: validation
path: nds/validation-*
- split: test
path: nds/test-*
- split: train
path: nds/train-*
- config_name: ne
data_files:
- split: validation
path: ne/validation-*
- split: test
path: ne/test-*
- split: train
path: ne/train-*
- config_name: nl
data_files:
- split: validation
path: nl/validation-*
- split: test
path: nl/test-*
- split: train
path: nl/train-*
- config_name: nn
data_files:
- split: validation
path: nn/validation-*
- split: test
path: nn/test-*
- split: train
path: nn/train-*
- config_name: 'no'
data_files:
- split: validation
path: no/validation-*
- split: test
path: no/test-*
- split: train
path: no/train-*
- config_name: nov
data_files:
- split: validation
path: nov/validation-*
- split: test
path: nov/test-*
- split: train
path: nov/train-*
- config_name: oc
data_files:
- split: validation
path: oc/validation-*
- split: test
path: oc/test-*
- split: train
path: oc/train-*
- config_name: or
data_files:
- split: validation
path: or/validation-*
- split: test
path: or/test-*
- split: train
path: or/train-*
- config_name: os
data_files:
- split: validation
path: os/validation-*
- split: test
path: os/test-*
- split: train
path: os/train-*
- config_name: pa
data_files:
- split: validation
path: pa/validation-*
- split: test
path: pa/test-*
- split: train
path: pa/train-*
- config_name: pdc
data_files:
- split: validation
path: pdc/validation-*
- split: test
path: pdc/test-*
- split: train
path: pdc/train-*
- config_name: pl
data_files:
- split: validation
path: pl/validation-*
- split: test
path: pl/test-*
- split: train
path: pl/train-*
- config_name: pms
data_files:
- split: validation
path: pms/validation-*
- split: test
path: pms/test-*
- split: train
path: pms/train-*
- config_name: pnb
data_files:
- split: validation
path: pnb/validation-*
- split: test
path: pnb/test-*
- split: train
path: pnb/train-*
- config_name: ps
data_files:
- split: validation
path: ps/validation-*
- split: test
path: ps/test-*
- split: train
path: ps/train-*
- config_name: pt
data_files:
- split: validation
path: pt/validation-*
- split: test
path: pt/test-*
- split: train
path: pt/train-*
- config_name: qu
data_files:
- split: validation
path: qu/validation-*
- split: test
path: qu/test-*
- split: train
path: qu/train-*
- config_name: rm
data_files:
- split: validation
path: rm/validation-*
- split: test
path: rm/test-*
- split: train
path: rm/train-*
- config_name: ro
data_files:
- split: validation
path: ro/validation-*
- split: test
path: ro/test-*
- split: train
path: ro/train-*
- config_name: ru
data_files:
- split: validation
path: ru/validation-*
- split: test
path: ru/test-*
- split: train
path: ru/train-*
- config_name: rw
data_files:
- split: validation
path: rw/validation-*
- split: test
path: rw/test-*
- split: train
path: rw/train-*
- config_name: sa
data_files:
- split: validation
path: sa/validation-*
- split: test
path: sa/test-*
- split: train
path: sa/train-*
- config_name: sah
data_files:
- split: validation
path: sah/validation-*
- split: test
path: sah/test-*
- split: train
path: sah/train-*
- config_name: scn
data_files:
- split: validation
path: scn/validation-*
- split: test
path: scn/test-*
- split: train
path: scn/train-*
- config_name: sco
data_files:
- split: validation
path: sco/validation-*
- split: test
path: sco/test-*
- split: train
path: sco/train-*
- config_name: sd
data_files:
- split: validation
path: sd/validation-*
- split: test
path: sd/test-*
- split: train
path: sd/train-*
- config_name: sh
data_files:
- split: validation
path: sh/validation-*
- split: test
path: sh/test-*
- split: train
path: sh/train-*
- config_name: si
data_files:
- split: validation
path: si/validation-*
- split: test
path: si/test-*
- split: train
path: si/train-*
- config_name: simple
data_files:
- split: validation
path: simple/validation-*
- split: test
path: simple/test-*
- split: train
path: simple/train-*
- config_name: sk
data_files:
- split: validation
path: sk/validation-*
- split: test
path: sk/test-*
- split: train
path: sk/train-*
- config_name: sl
data_files:
- split: validation
path: sl/validation-*
- split: test
path: sl/test-*
- split: train
path: sl/train-*
- config_name: so
data_files:
- split: validation
path: so/validation-*
- split: test
path: so/test-*
- split: train
path: so/train-*
- config_name: sq
data_files:
- split: validation
path: sq/validation-*
- split: test
path: sq/test-*
- split: train
path: sq/train-*
- config_name: sr
data_files:
- split: validation
path: sr/validation-*
- split: test
path: sr/test-*
- split: train
path: sr/train-*
- config_name: su
data_files:
- split: validation
path: su/validation-*
- split: test
path: su/test-*
- split: train
path: su/train-*
- config_name: sv
data_files:
- split: validation
path: sv/validation-*
- split: test
path: sv/test-*
- split: train
path: sv/train-*
- config_name: sw
data_files:
- split: validation
path: sw/validation-*
- split: test
path: sw/test-*
- split: train
path: sw/train-*
- config_name: szl
data_files:
- split: validation
path: szl/validation-*
- split: test
path: szl/test-*
- split: train
path: szl/train-*
- config_name: ta
data_files:
- split: validation
path: ta/validation-*
- split: test
path: ta/test-*
- split: train
path: ta/train-*
- config_name: te
data_files:
- split: validation
path: te/validation-*
- split: test
path: te/test-*
- split: train
path: te/train-*
- config_name: tg
data_files:
- split: validation
path: tg/validation-*
- split: test
path: tg/test-*
- split: train
path: tg/train-*
- config_name: th
data_files:
- split: validation
path: th/validation-*
- split: test
path: th/test-*
- split: train
path: th/train-*
- config_name: tk
data_files:
- split: validation
path: tk/validation-*
- split: test
path: tk/test-*
- split: train
path: tk/train-*
- config_name: tl
data_files:
- split: validation
path: tl/validation-*
- split: test
path: tl/test-*
- split: train
path: tl/train-*
- config_name: tr
data_files:
- split: validation
path: tr/validation-*
- split: test
path: tr/test-*
- split: train
path: tr/train-*
- config_name: tt
data_files:
- split: validation
path: tt/validation-*
- split: test
path: tt/test-*
- split: train
path: tt/train-*
- config_name: ug
data_files:
- split: validation
path: ug/validation-*
- split: test
path: ug/test-*
- split: train
path: ug/train-*
- config_name: uk
data_files:
- split: validation
path: uk/validation-*
- split: test
path: uk/test-*
- split: train
path: uk/train-*
- config_name: ur
data_files:
- split: validation
path: ur/validation-*
- split: test
path: ur/test-*
- split: train
path: ur/train-*
- config_name: uz
data_files:
- split: validation
path: uz/validation-*
- split: test
path: uz/test-*
- split: train
path: uz/train-*
- config_name: vec
data_files:
- split: validation
path: vec/validation-*
- split: test
path: vec/test-*
- split: train
path: vec/train-*
- config_name: vep
data_files:
- split: validation
path: vep/validation-*
- split: test
path: vep/test-*
- split: train
path: vep/train-*
- config_name: vi
data_files:
- split: validation
path: vi/validation-*
- split: test
path: vi/test-*
- split: train
path: vi/train-*
- config_name: vls
data_files:
- split: validation
path: vls/validation-*
- split: test
path: vls/test-*
- split: train
path: vls/train-*
- config_name: vo
data_files:
- split: validation
path: vo/validation-*
- split: test
path: vo/test-*
- split: train
path: vo/train-*
- config_name: wa
data_files:
- split: validation
path: wa/validation-*
- split: test
path: wa/test-*
- split: train
path: wa/train-*
- config_name: war
data_files:
- split: validation
path: war/validation-*
- split: test
path: war/test-*
- split: train
path: war/train-*
- config_name: wuu
data_files:
- split: validation
path: wuu/validation-*
- split: test
path: wuu/test-*
- split: train
path: wuu/train-*
- config_name: xmf
data_files:
- split: validation
path: xmf/validation-*
- split: test
path: xmf/test-*
- split: train
path: xmf/train-*
- config_name: yi
data_files:
- split: validation
path: yi/validation-*
- split: test
path: yi/test-*
- split: train
path: yi/train-*
- config_name: yo
data_files:
- split: validation
path: yo/validation-*
- split: test
path: yo/test-*
- split: train
path: yo/train-*
- config_name: zea
data_files:
- split: validation
path: zea/validation-*
- split: test
path: zea/test-*
- split: train
path: zea/train-*
- config_name: zh
data_files:
- split: validation
path: zh/validation-*
- split: test
path: zh/test-*
- split: train
path: zh/train-*
- config_name: zh-classical
data_files:
- split: validation
path: zh-classical/validation-*
- split: test
path: zh-classical/test-*
- split: train
path: zh-classical/train-*
- config_name: zh-min-nan
data_files:
- split: validation
path: zh-min-nan/validation-*
- split: test
path: zh-min-nan/test-*
- split: train
path: zh-min-nan/train-*
- config_name: zh-yue
data_files:
- split: validation
path: zh-yue/validation-*
- split: test
path: zh-yue/test-*
- split: train
path: zh-yue/train-*
---
# Dataset Card for WikiANN
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Massively Multilingual Transfer for NER](https://github.com/afshinrahimi/mmner)
- **Repository:** [Massively Multilingual Transfer for NER](https://github.com/afshinrahimi/mmner)
- **Paper:** The original datasets come from the _Cross-lingual name tagging and linking for 282 languages_ [paper](https://www.aclweb.org/anthology/P17-1178/) by Xiaoman Pan et al. (2017). This version corresponds to the balanced train, dev, and test splits of the original data from the _Massively Multilingual Transfer for NER_ [paper](https://arxiv.org/abs/1902.00193) by Afshin Rahimi et al. (2019).
- **Leaderboard:**
- **Point of Contact:** [Afshin Rahimi](mailto:[email protected]) or [Lewis Tunstall](mailto:[email protected]) or [Albert Villanova del Moral](mailto:[email protected])
### Dataset Summary
WikiANN (sometimes called PAN-X) is a multilingual named entity recognition dataset consisting of Wikipedia articles annotated with LOC (location), PER (person), and ORG (organisation) tags in the IOB2 format. This version corresponds to the balanced train, dev, and test splits of Rahimi et al. (2019), which supports 176 of the 282 languages from the original WikiANN corpus.
### Supported Tasks and Leaderboards
- `named-entity-recognition`: The dataset can be used to train a model for named entity recognition in many languages, or evaluate the zero-shot cross-lingual capabilities of multilingual models.
### Languages
The dataset contains 176 languages, one in each of the configuration subsets. The corresponding BCP 47 language tags
are:
| | Language tag |
|:-------------------|:---------------|
| ace | ace |
| af | af |
| als | als |
| am | am |
| an | an |
| ang | ang |
| ar | ar |
| arc | arc |
| arz | arz |
| as | as |
| ast | ast |
| ay | ay |
| az | az |
| ba | ba |
| bar | bar |
| be | be |
| bg | bg |
| bh | bh |
| bn | bn |
| bo | bo |
| br | br |
| bs | bs |
| ca | ca |
| cdo | cdo |
| ce | ce |
| ceb | ceb |
| ckb | ckb |
| co | co |
| crh | crh |
| cs | cs |
| csb | csb |
| cv | cv |
| cy | cy |
| da | da |
| de | de |
| diq | diq |
| dv | dv |
| el | el |
| en | en |
| eo | eo |
| es | es |
| et | et |
| eu | eu |
| ext | ext |
| fa | fa |
| fi | fi |
| fo | fo |
| fr | fr |
| frr | frr |
| fur | fur |
| fy | fy |
| ga | ga |
| gan | gan |
| gd | gd |
| gl | gl |
| gn | gn |
| gu | gu |
| hak | hak |
| he | he |
| hi | hi |
| hr | hr |
| hsb | hsb |
| hu | hu |
| hy | hy |
| ia | ia |
| id | id |
| ig | ig |
| ilo | ilo |
| io | io |
| is | is |
| it | it |
| ja | ja |
| jbo | jbo |
| jv | jv |
| ka | ka |
| kk | kk |
| km | km |
| kn | kn |
| ko | ko |
| ksh | ksh |
| ku | ku |
| ky | ky |
| la | la |
| lb | lb |
| li | li |
| lij | lij |
| lmo | lmo |
| ln | ln |
| lt | lt |
| lv | lv |
| mg | mg |
| mhr | mhr |
| mi | mi |
| min | min |
| mk | mk |
| ml | ml |
| mn | mn |
| mr | mr |
| ms | ms |
| mt | mt |
| mwl | mwl |
| my | my |
| mzn | mzn |
| nap | nap |
| nds | nds |
| ne | ne |
| nl | nl |
| nn | nn |
| no | no |
| nov | nov |
| oc | oc |
| or | or |
| os | os |
| other-bat-smg | sgs |
| other-be-x-old | be-tarask |
| other-cbk-zam | cbk |
| other-eml | eml |
| other-fiu-vro | vro |
| other-map-bms | jv-x-bms |
| other-simple | en-basiceng |
| other-zh-classical | lzh |
| other-zh-min-nan | nan |
| other-zh-yue | yue |
| pa | pa |
| pdc | pdc |
| pl | pl |
| pms | pms |
| pnb | pnb |
| ps | ps |
| pt | pt |
| qu | qu |
| rm | rm |
| ro | ro |
| ru | ru |
| rw | rw |
| sa | sa |
| sah | sah |
| scn | scn |
| sco | sco |
| sd | sd |
| sh | sh |
| si | si |
| sk | sk |
| sl | sl |
| so | so |
| sq | sq |
| sr | sr |
| su | su |
| sv | sv |
| sw | sw |
| szl | szl |
| ta | ta |
| te | te |
| tg | tg |
| th | th |
| tk | tk |
| tl | tl |
| tr | tr |
| tt | tt |
| ug | ug |
| uk | uk |
| ur | ur |
| uz | uz |
| vec | vec |
| vep | vep |
| vi | vi |
| vls | vls |
| vo | vo |
| wa | wa |
| war | war |
| wuu | wuu |
| xmf | xmf |
| yi | yi |
| yo | yo |
| zea | zea |
| zh | zh |
## Dataset Structure
### Data Instances
This is an example in the "train" split of the "af" (Afrikaans language) configuration subset:
```python
{
'tokens': ['Sy', 'ander', 'seun', ',', 'Swjatopolk', ',', 'was', 'die', 'resultaat', 'van', '’n', 'buite-egtelike', 'verhouding', '.'],
'ner_tags': [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'langs': ['af', 'af', 'af', 'af', 'af', 'af', 'af', 'af', 'af', 'af', 'af', 'af', 'af', 'af'],
'spans': ['PER: Swjatopolk']
}
```
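A minimal loading sketch that decodes the integer tags back to their string labels (the dataset id `wikiann` on the Hub is an assumption):
```python
from datasets import load_dataset

# Load the Afrikaans subset; "wikiann" is the assumed Hub dataset id.
ds = load_dataset("wikiann", "af")

# `ner_tags` is a sequence of ClassLabel values, so the label names
# (O, B-PER, I-PER, ...) can be recovered from the features.
label_names = ds["train"].features["ner_tags"].feature.names
example = ds["train"][0]
for token, tag in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{label_names[tag]}")
```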
### Data Fields
- `tokens`: a `list` of `string` features.
- `langs`: a `list` of `string` features that correspond to the language of each token.
- `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-PER` (1), `I-PER` (2), `B-ORG` (3), `I-ORG` (4), `B-LOC` (5), `I-LOC` (6).
- `spans`: a `list` of `string` features giving the named entities in the input text, formatted as ``<TAG>: <mention>``
### Data Splits
For each configuration subset, the data is split into "train", "validation" and "test" sets, each containing the
following number of examples:
| | Train | Validation | Test |
|:-------------|--------:|-------------:|-------:|
| ace | 100 | 100 | 100 |
| af | 5000 | 1000 | 1000 |
| als | 100 | 100 | 100 |
| am | 100 | 100 | 100 |
| an | 1000 | 1000 | 1000 |
| ang | 100 | 100 | 100 |
| ar | 20000 | 10000 | 10000 |
| arc | 100 | 100 | 100 |
| arz | 100 | 100 | 100 |
| as | 100 | 100 | 100 |
| ast | 1000 | 1000 | 1000 |
| ay | 100 | 100 | 100 |
| az | 10000 | 1000 | 1000 |
| ba | 100 | 100 | 100 |
| bar | 100 | 100 | 100 |
| bat-smg | 100 | 100 | 100 |
| be | 15000 | 1000 | 1000 |
| be-x-old | 5000 | 1000 | 1000 |
| bg | 20000 | 10000 | 10000 |
| bh | 100 | 100 | 100 |
| bn | 10000 | 1000 | 1000 |
| bo | 100 | 100 | 100 |
| br | 1000 | 1000 | 1000 |
| bs | 15000 | 1000 | 1000 |
| ca | 20000 | 10000 | 10000 |
| cbk-zam | 100 | 100 | 100 |
| cdo | 100 | 100 | 100 |
| ce | 100 | 100 | 100 |
| ceb | 100 | 100 | 100 |
| ckb | 1000 | 1000 | 1000 |
| co | 100 | 100 | 100 |
| crh | 100 | 100 | 100 |
| cs | 20000 | 10000 | 10000 |
| csb | 100 | 100 | 100 |
| cv | 100 | 100 | 100 |
| cy | 10000 | 1000 | 1000 |
| da | 20000 | 10000 | 10000 |
| de | 20000 | 10000 | 10000 |
| diq | 100 | 100 | 100 |
| dv | 100 | 100 | 100 |
| el | 20000 | 10000 | 10000 |
| eml | 100 | 100 | 100 |
| en | 20000 | 10000 | 10000 |
| eo | 15000 | 10000 | 10000 |
| es | 20000 | 10000 | 10000 |
| et | 15000 | 10000 | 10000 |
| eu | 10000 | 10000 | 10000 |
| ext | 100 | 100 | 100 |
| fa | 20000 | 10000 | 10000 |
| fi | 20000 | 10000 | 10000 |
| fiu-vro | 100 | 100 | 100 |
| fo | 100 | 100 | 100 |
| fr | 20000 | 10000 | 10000 |
| frr | 100 | 100 | 100 |
| fur | 100 | 100 | 100 |
| fy | 1000 | 1000 | 1000 |
| ga | 1000 | 1000 | 1000 |
| gan | 100 | 100 | 100 |
| gd | 100 | 100 | 100 |
| gl | 15000 | 10000 | 10000 |
| gn | 100 | 100 | 100 |
| gu | 100 | 100 | 100 |
| hak | 100 | 100 | 100 |
| he | 20000 | 10000 | 10000 |
| hi | 5000 | 1000 | 1000 |
| hr | 20000 | 10000 | 10000 |
| hsb | 100 | 100 | 100 |
| hu | 20000 | 10000 | 10000 |
| hy | 15000 | 1000 | 1000 |
| ia | 100 | 100 | 100 |
| id | 20000 | 10000 | 10000 |
| ig | 100 | 100 | 100 |
| ilo | 100 | 100 | 100 |
| io | 100 | 100 | 100 |
| is | 1000 | 1000 | 1000 |
| it | 20000 | 10000 | 10000 |
| ja | 20000 | 10000 | 10000 |
| jbo | 100 | 100 | 100 |
| jv | 100 | 100 | 100 |
| ka | 10000 | 10000 | 10000 |
| kk | 1000 | 1000 | 1000 |
| km | 100 | 100 | 100 |
| kn | 100 | 100 | 100 |
| ko | 20000 | 10000 | 10000 |
| ksh | 100 | 100 | 100 |
| ku | 100 | 100 | 100 |
| ky | 100 | 100 | 100 |
| la | 5000 | 1000 | 1000 |
| lb | 5000 | 1000 | 1000 |
| li | 100 | 100 | 100 |
| lij | 100 | 100 | 100 |
| lmo | 100 | 100 | 100 |
| ln | 100 | 100 | 100 |
| lt | 10000 | 10000 | 10000 |
| lv | 10000 | 10000 | 10000 |
| map-bms | 100 | 100 | 100 |
| mg | 100 | 100 | 100 |
| mhr | 100 | 100 | 100 |
| mi | 100 | 100 | 100 |
| min | 100 | 100 | 100 |
| mk | 10000 | 1000 | 1000 |
| ml | 10000 | 1000 | 1000 |
| mn | 100 | 100 | 100 |
| mr | 5000 | 1000 | 1000 |
| ms | 20000 | 1000 | 1000 |
| mt | 100 | 100 | 100 |
| mwl | 100 | 100 | 100 |
| my | 100 | 100 | 100 |
| mzn | 100 | 100 | 100 |
| nap | 100 | 100 | 100 |
| nds | 100 | 100 | 100 |
| ne | 100 | 100 | 100 |
| nl | 20000 | 10000 | 10000 |
| nn | 20000 | 1000 | 1000 |
| no | 20000 | 10000 | 10000 |
| nov | 100 | 100 | 100 |
| oc | 100 | 100 | 100 |
| or | 100 | 100 | 100 |
| os | 100 | 100 | 100 |
| pa | 100 | 100 | 100 |
| pdc | 100 | 100 | 100 |
| pl | 20000 | 10000 | 10000 |
| pms | 100 | 100 | 100 |
| pnb | 100 | 100 | 100 |
| ps | 100 | 100 | 100 |
| pt | 20000 | 10000 | 10000 |
| qu | 100 | 100 | 100 |
| rm | 100 | 100 | 100 |
| ro | 20000 | 10000 | 10000 |
| ru | 20000 | 10000 | 10000 |
| rw | 100 | 100 | 100 |
| sa | 100 | 100 | 100 |
| sah | 100 | 100 | 100 |
| scn | 100 | 100 | 100 |
| sco | 100 | 100 | 100 |
| sd | 100 | 100 | 100 |
| sh | 20000 | 10000 | 10000 |
| si | 100 | 100 | 100 |
| simple | 20000 | 1000 | 1000 |
| sk | 20000 | 10000 | 10000 |
| sl | 15000 | 10000 | 10000 |
| so | 100 | 100 | 100 |
| sq | 5000 | 1000 | 1000 |
| sr | 20000 | 10000 | 10000 |
| su | 100 | 100 | 100 |
| sv | 20000 | 10000 | 10000 |
| sw | 1000 | 1000 | 1000 |
| szl | 100 | 100 | 100 |
| ta | 15000 | 1000 | 1000 |
| te | 1000 | 1000 | 1000 |
| tg | 100 | 100 | 100 |
| th | 20000 | 10000 | 10000 |
| tk | 100 | 100 | 100 |
| tl | 10000 | 1000 | 1000 |
| tr | 20000 | 10000 | 10000 |
| tt | 1000 | 1000 | 1000 |
| ug | 100 | 100 | 100 |
| uk | 20000 | 10000 | 10000 |
| ur | 20000 | 1000 | 1000 |
| uz | 1000 | 1000 | 1000 |
| vec | 100 | 100 | 100 |
| vep | 100 | 100 | 100 |
| vi | 20000 | 10000 | 10000 |
| vls | 100 | 100 | 100 |
| vo | 100 | 100 | 100 |
| wa | 100 | 100 | 100 |
| war | 100 | 100 | 100 |
| wuu | 100 | 100 | 100 |
| xmf | 100 | 100 | 100 |
| yi | 100 | 100 | 100 |
| yo | 100 | 100 | 100 |
| zea | 100 | 100 | 100 |
| zh | 20000 | 10000 | 10000 |
| zh-classical | 100 | 100 | 100 |
| zh-min-nan | 100 | 100 | 100 |
| zh-yue | 20000 | 10000 | 10000 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
The original 282 datasets are associated with this article
```
@inproceedings{pan-etal-2017-cross,
title = "Cross-lingual Name Tagging and Linking for 282 Languages",
author = "Pan, Xiaoman and
Zhang, Boliang and
May, Jonathan and
Nothman, Joel and
Knight, Kevin and
Ji, Heng",
booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2017",
address = "Vancouver, Canada",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/P17-1178",
doi = "10.18653/v1/P17-1178",
pages = "1946--1958",
abstract = "The ambitious goal of this work is to develop a cross-lingual name tagging and linking framework for 282 languages that exist in Wikipedia. Given a document in any of these languages, our framework is able to identify name mentions, assign a coarse-grained or fine-grained type to each mention, and link it to an English Knowledge Base (KB) if it is linkable. We achieve this goal by performing a series of new KB mining methods: generating {``}silver-standard{''} annotations by transferring annotations from English to other languages through cross-lingual links and KB properties, refining annotations through self-training and topic selection, deriving language-specific morphology features from anchor links, and mining word translation pairs from cross-lingual links. Both name tagging and linking results for 282 languages are promising on Wikipedia data and on-Wikipedia data.",
}
```
while the 176 languages supported in this version are associated with the following article
```
@inproceedings{rahimi-etal-2019-massively,
title = "Massively Multilingual Transfer for {NER}",
author = "Rahimi, Afshin and
Li, Yuan and
Cohn, Trevor",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/P19-1015",
pages = "151--164",
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun) and [@rabeehk](https://github.com/rabeehk) for adding this dataset. |
stingning/ultrachat | stingning | 2024-02-22T02:26:29Z | 2,265 | 438 | [
"task_categories:text-generation",
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"conversational",
"text-generation"
] | 2023-04-20T15:15:28Z | null | ---
license: mit
task_categories:
- conversational
- text-generation
language:
- en
size_categories:
- 1M<n<10M
pretty_name: UltraChat
---
# Dataset Card for Dataset Name
## Dataset Description
An open-source, large-scale, multi-round dialogue dataset powered by Turbo APIs. In consideration of factors such as safeguarding privacy, **we do not directly use any data available on the Internet as prompts**.
To ensure generation quality, two separate ChatGPT Turbo APIs are adopted in generation, where one plays the role of the user to generate queries and the other generates the response.
We instruct the user model with carefully designed prompts to mimic human user behavior and call the two APIs iteratively. The generated dialogues undergo further post-processing and filtering.
UltraChat is composed of three sectors:
- 🌏 **Questions about the World**: The dialogue data in this sector is derived from a wide range of inquiries related to concepts, entities, and objects from the real world. The topics covered are extensive, spanning areas such as technology, art, and entrepreneurship.
- ✍🏻 **Writing and Creation**: The dialogue data in this sector is driven by the demands for writing/creation from scratch, and encompasses any tasks that an AI assistant may aid within the creative process, spanning from email composition to crafting narratives and plays, and beyond.
- 📋 **Assistance on Existent Materials**: The dialogue data in this sector is generated based on existing materials, including but not limited to rewriting, continuation, summarization, and inference, covering a diverse range of topics.
- Repository: [UltraChat](https://github.com/thunlp/UltraChat)
- Explorer: [plain-explorer](http://39.101.77.220/), [Nomic-AI-Atlas-Explorer](https://atlas.nomic.ai/map/0ce65783-c3a9-40b5-895d-384933f50081/a7b46301-022f-45d8-bbf4-98107eabdbac)
## Dataset Structure
Each line in the downloaded data file is a JSON dict containing the data id and the dialogue data as a list. Below is an example line.
```
{
"id": "0",
"data": [
"How can cross training benefit groups like runners, swimmers, or weightlifters?",
"Cross training can benefit groups like runners, swimmers, or weightlifters in the following ways: ...",
"That makes sense. I've been wanting to improve my running time, but I never thought about incorporating strength training. Do you have any recommendations for specific exercises?",
"Sure, here are some strength training exercises that can benefit runners: ...",
"Hmm, I'm not really a fan of weightlifting though. Can I incorporate other forms of exercise into my routine to improve my running time?",
"Yes, absolutely! ...",
"..."
]
}
```
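A minimal sketch for iterating over these dialogues with the `datasets` library; the even/odd role convention follows the two-API generation setup described above, and the `id`/`data` field names come from the example line:
```python
from datasets import load_dataset

# Load the dialogues; each record holds an "id" and an alternating
# user/assistant "data" list, as in the example above.
ds = load_dataset("stingning/ultrachat", split="train")

dialogue = ds[0]["data"]
for i, utterance in enumerate(dialogue):
    # Even indices are user turns, odd indices are assistant turns.
    role = "user" if i % 2 == 0 else "assistant"
    print(f"{role}: {utterance[:80]}")
```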
### Citation Information
```bibtex
@article{ding2023enhancing,
title={Enhancing Chat Language Models by Scaling High-quality Instructional Conversations},
author={Ding, Ning and Chen, Yulin and Xu, Bokai and Qin, Yujia and Zheng, Zhi and Hu, Shengding and Liu, Zhiyuan and Sun, Maosong and Zhou, Bowen},
journal={arXiv preprint arXiv:2305.14233},
year={2023}
}
``` |
neural-bridge/rag-dataset-12000 | neural-bridge | 2024-02-05T18:25:13Z | 1,441 | 136 | [
"task_categories:question-answering",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"retrieval-augmented-generation"
] | [
"question-answering"
] | 2023-10-02T17:18:39Z | null | ---
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_examples: 9600
- name: test
num_examples: 2400
task_categories:
- question-answering
language:
- en
size_categories:
- 10K<n<100K
license: apache-2.0
tags:
- retrieval-augmented-generation
---
# **Retrieval-Augmented Generation (RAG) Dataset 12000**
**Retrieval-Augmented Generation (RAG) Dataset 12000 is an English dataset designed for RAG-optimized models, built by [Neural Bridge AI](https://www.neuralbridge.ai/), and released under [Apache license 2.0](https://www.apache.org/licenses/LICENSE-2.0.html).**
## **Dataset Description**
#### Dataset Summary
Retrieval-Augmented Generation (RAG) enhances large language models (LLMs) by allowing them to consult an external authoritative knowledge base before generating responses. This approach significantly boosts the models' ability to produce relevant, accurate, and context-specific output by extending their capabilities to specialized domains or an organization's internal data, without the need for retraining. RAG offers a cost-effective method to leverage the vast data processing power of LLMs, equipped with billions of parameters, for tasks such as question-answering, language translation, and sentence completion, ensuring that the output is always up-to-date and applicable to various contexts.
RAG's importance lies in its potential to address the inherent challenges of LLMs, such as unpredictability in responses, reliance on static and potentially outdated training data, and the risk of disseminating incorrect or non-authoritative information. These issues can negatively affect user trust in AI-powered applications, making RAG's ability to guide LLMs toward authoritative sources for information retrieval invaluable.
RAG has multiple benefits, including cost-effective implementation and maintenance, access to current information, improved user trust through accurate information and source attribution, and greater control for developers over the information retrieval process. This approach allows for the dynamic updating of LLMs with the latest research, statistics, or news, directly addressing the challenges of maintaining relevancy and accuracy in rapidly changing knowledge landscapes. Additionally, it empowers organizations to deploy generative AI more confidently across a wider range of applications, enhancing both the user experience and the reliability of AI-driven interactions.
Retrieval-Augmented Generation (RAG) Dataset 12000 dataset is a triple-feature collection, with each entry containing a "context", "question", and "answer" fields, designed to help build RAG-optimized models. This data consists of 12000 entries, and the context data is from [Falcon RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb).
```python
from datasets import load_dataset
rag_dataset = load_dataset("neural-bridge/rag-dataset-12000")
```
#### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## **Dataset Structure**
#### Data Instances
A typical data point comprises a context, a question about the context, and an answer for the question. The context is obtained from [Falcon RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), and the question and answer for each data point are generated by GPT-4.
An example from the dataset looks like the following:
```
{
context: ...
question: ...
answer: ...
}
```
#### Data Fields
- `context`: A string consisting of a range of tokens.
- `question`: A string consisting of a question related to the context.
- `answer`: A string consisting of an answer for the question.
#### Data Splits
The data is split into training and test sets. The split sizes are as follows:
| | Train | Test |
| ----- | ------ | ---- |
| RAG Dataset 12000 | 9600 | 2400 |
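As an illustration of how the three fields fit together, here is a minimal sketch that assembles one example into a retrieval-augmented prompt; the prompt template itself is an assumption, not part of the dataset:
```python
from datasets import load_dataset

rag_dataset = load_dataset("neural-bridge/rag-dataset-12000")
example = rag_dataset["train"][0]

# Illustrative prompt template; any RAG-style template would do.
prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{example['context']}\n\n"
    f"Question: {example['question']}\n"
    "Answer:"
)
print(prompt)
print("Reference answer:", example["answer"])
```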
## Source Data
The data points in the dataset are from the [Falcon RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) dataset.
## **Neural Bridge AI RAG Datasets Index**
| Dataset | Link |
| ----- | ------ |
| RAG Full 20000 | [link](https://huggingface.co/datasets/neural-bridge/rag-full-20000) |
| RAG Dataset 12000 | [link](https://huggingface.co/datasets/neural-bridge/rag-dataset-12000) |
| RAG Dataset 1200 | [link](https://huggingface.co/datasets/neural-bridge/rag-dataset-1200) |
| RAG Hallucination Dataset 1000 | [link](https://huggingface.co/datasets/neural-bridge/rag-hallucination-dataset-1000) |
## **License**
This public extract is made available under [Apache license 2.0](https://www.apache.org/licenses/LICENSE-2.0.html). Users should also abide by the [Falcon RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) ToU. |
xingyaoww/code-act | xingyaoww | 2024-02-05T05:23:24Z | 324 | 65 | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2402.01030",
"region:us",
"llm-agent",
"llm",
"instruction-tuning"
] | [
"text-generation"
] | 2024-01-15T08:59:02Z | 2 | ---
configs:
- config_name: default
data_files:
- split: codeact
path: data/codeact-*
- split: general
path: data/general-*
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: codeact
num_bytes: 34936511
num_examples: 7139
- name: general
num_bytes: 250817144
num_examples: 71246
download_size: 123084833
dataset_size: 285753655
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- llm-agent
- llm
- instruction-tuning
size_categories:
- 1K<n<10K
---
<h1 align="center"> Executable Code Actions Elicit Better LLM Agents </h1>
<p align="center">
<a href="https://github.com/xingyaoww/code-act">💻 Code</a>
•
<a href="https://arxiv.org/abs/2402.01030">📃 Paper</a>
•
<a href="https://huggingface.co/datasets/xingyaoww/code-act" >🤗 Data (CodeActInstruct)</a>
•
<a href="https://huggingface.co/xingyaoww/CodeActAgent-Mistral-7b-v0.1" >🤗 Model (CodeActAgent-Mistral-7b-v0.1)</a>
•
<a href="https://chat.xwang.dev/">🤖 Chat with CodeActAgent!</a>
</p>
We propose to use executable Python **code** to consolidate LLM agents’ **act**ions into a unified action space (**CodeAct**).
Integrated with a Python interpreter, CodeAct can execute code actions and dynamically revise prior actions or emit new actions upon new observations (e.g., code execution results) through multi-turn interactions.

## Why CodeAct?
Our extensive analysis of 17 LLMs on API-Bank and a newly curated benchmark [M<sup>3</sup>ToolEval](docs/EVALUATION.md) shows that CodeAct outperforms widely used alternatives like Text and JSON (up to 20% higher success rate). Please check our paper for more detailed analysis!

*Comparison between CodeAct and Text / JSON as action.*

*Quantitative results comparing CodeAct and {Text, JSON} on M<sup>3</sup>ToolEval.*
## 📁 CodeActInstruct
We collect an instruction-tuning dataset, CodeActInstruct, that consists of 7k multi-turn interactions using CodeAct. The dataset is released at [huggingface dataset 🤗](https://huggingface.co/datasets/xingyaoww/code-act). Please refer to the paper and [this section](#-data-generation-optional) for details of data collection.

*Dataset Statistics. Token statistics are computed using Llama-2 tokenizer.*
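A minimal sketch for inspecting the two splits (the split and field names follow the dataset configuration above):
```python
from datasets import load_dataset

ds = load_dataset("xingyaoww/code-act")
codeact = ds["codeact"]    # 7,139 multi-turn agent interactions
general = ds["general"]    # 71,246 general conversation examples

# Each example holds an "id" and a list of {"role", "content"} turns.
for turn in codeact[0]["conversations"]:
    print(turn["role"], "->", turn["content"][:80])
```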
## 🪄 CodeActAgent
Trained on **CodeActInstruct** and general conversations, **CodeActAgent** excels at out-of-domain agent tasks compared to open-source models of the same size, while not sacrificing generic performance (e.g., knowledge, dialog). We release two variants of CodeActAgent:
- **CodeActAgent-Mistral-7b-v0.1** (recommended, [model link](https://huggingface.co/xingyaoww/CodeActAgent-Mistral-7b-v0.1)): using Mistral-7b-v0.1 as the base model with 32k context window.
- **CodeActAgent-Llama-7b** ([model link](https://huggingface.co/xingyaoww/CodeActAgent-Llama-2-7b)): using Llama-2-7b as the base model with 4k context window.

*Evaluation results for CodeActAgent. ID and OD stand for in-domain and out-of-domain evaluation correspondingly. Overall averaged performance normalizes the MT-Bench score to be consistent with other tasks and excludes in-domain tasks for fair comparison.*
Please check out [our paper](https://arxiv.org/abs/2402.01030) and [code](https://github.com/xingyaoww/code-act) for more details about data collection, model training, and evaluation.
## 📚 Citation
```bibtex
@misc{wang2024executable,
title={Executable Code Actions Elicit Better LLM Agents},
author={Xingyao Wang and Yangyi Chen and Lifan Yuan and Yizhe Zhang and Yunzhu Li and Hao Peng and Heng Ji},
year={2024},
eprint={2402.01030},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Codec-SUPERB/fluent_speech_commands_synth | Codec-SUPERB | 2024-02-01T15:29:11Z | 11,629 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-12-27T05:51:14Z | null | ---
configs:
- config_name: default
data_files:
- split: original
path: data/original-*
- split: academicodec_hifi_16k_320d
path: data/academicodec_hifi_16k_320d-*
- split: academicodec_hifi_16k_320d_large_uni
path: data/academicodec_hifi_16k_320d_large_uni-*
- split: academicodec_hifi_24k_320d
path: data/academicodec_hifi_24k_320d-*
- split: audiodec_24k_320d
path: data/audiodec_24k_320d-*
- split: dac_16k
path: data/dac_16k-*
- split: dac_24k
path: data/dac_24k-*
- split: dac_44k
path: data/dac_44k-*
- split: encodec_24k_12bps
path: data/encodec_24k_12bps-*
- split: encodec_24k_1_5bps
path: data/encodec_24k_1_5bps-*
- split: encodec_24k_24bps
path: data/encodec_24k_24bps-*
- split: encodec_24k_3bps
path: data/encodec_24k_3bps-*
- split: encodec_24k_6bps
path: data/encodec_24k_6bps-*
- split: funcodec_en_libritts_16k_gr1nq32ds320
path: data/funcodec_en_libritts_16k_gr1nq32ds320-*
- split: funcodec_en_libritts_16k_gr8nq32ds320
path: data/funcodec_en_libritts_16k_gr8nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds320
path: data/funcodec_en_libritts_16k_nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds640
path: data/funcodec_en_libritts_16k_nq32ds640-*
- split: funcodec_zh_en_16k_nq32ds320
path: data/funcodec_zh_en_16k_nq32ds320-*
- split: funcodec_zh_en_16k_nq32ds640
path: data/funcodec_zh_en_16k_nq32ds640-*
- split: speech_tokenizer_16k
path: data/speech_tokenizer_16k-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: id
dtype: string
splits:
- name: original
num_bytes: 2220326464.0
num_examples: 30043
- name: academicodec_hifi_16k_320d
num_bytes: 2212154504.0
num_examples: 30043
- name: academicodec_hifi_16k_320d_large_uni
num_bytes: 2212154504.0
num_examples: 30043
- name: academicodec_hifi_24k_320d
num_bytes: 3322180744.0
num_examples: 30043
- name: audiodec_24k_320d
num_bytes: 3338935944.0
num_examples: 30043
- name: dac_16k
num_bytes: 2221347926.0
num_examples: 30043
- name: dac_24k
num_bytes: 3329678726.0
num_examples: 30043
- name: dac_44k
num_bytes: 6114326168.0
num_examples: 30043
- name: encodec_24k_12bps
num_bytes: 3329678726.0
num_examples: 30043
- name: encodec_24k_1_5bps
num_bytes: 3329678726.0
num_examples: 30043
- name: encodec_24k_24bps
num_bytes: 3329678726.0
num_examples: 30043
- name: encodec_24k_3bps
num_bytes: 3329678726.0
num_examples: 30043
- name: encodec_24k_6bps
num_bytes: 3329678726.0
num_examples: 30043
- name: funcodec_en_libritts_16k_gr1nq32ds320
num_bytes: 2219150286.0
num_examples: 30043
- name: funcodec_en_libritts_16k_gr8nq32ds320
num_bytes: 2219150286.0
num_examples: 30043
- name: funcodec_en_libritts_16k_nq32ds320
num_bytes: 2221347926.0
num_examples: 30043
- name: funcodec_en_libritts_16k_nq32ds640
num_bytes: 2221347926.0
num_examples: 30043
- name: funcodec_zh_en_16k_nq32ds320
num_bytes: 2221347926.0
num_examples: 30043
- name: funcodec_zh_en_16k_nq32ds640
num_bytes: 2221347926.0
num_examples: 30043
- name: speech_tokenizer_16k
num_bytes: 2230445064.0
num_examples: 30043
download_size: 21108462066
dataset_size: 57173635950.0
---
# Dataset Card for "fluent_speech_commands_synth"
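Every split contains the same 30,043 examples: the `original` split holds the source recordings and, going by the split names, each other split holds resyntheses from one codec, so utterances can be compared pairwise. A minimal sketch (split names follow the configuration above):
```python
from datasets import load_dataset

ds = load_dataset("Codec-SUPERB/fluent_speech_commands_synth")

original = ds["original"][0]
resynth = ds["encodec_24k_6bps"][0]

# Entries share an "id", so utterances can be aligned across codecs.
print(original["id"], original["audio"]["sampling_rate"])
print(resynth["id"], resynth["audio"]["sampling_rate"])
```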
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
awacke1/ICD10-Clinical-Terminology | awacke1 | 2024-01-31T01:34:01Z | 121 | 15 | [
"license:mit",
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-02-28T12:20:41Z | 1 | ---
license: mit
---
ICD10-Clinical-Terminology
A PyArrow fast-search demonstration for context AI MMoE. |
biglam/hmd_newspapers | biglam | 2024-01-30T12:06:17Z | 3,123 | 9 | [
"task_categories:text-generation",
"language:en",
"license:cc0-1.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"newspapers"
] | [
"text-generation"
] | 2022-03-02T23:29:22Z | 1 | ---
dataset_info:
features:
- name: source
dtype: string
- name: title
dtype: string
- name: location
dtype: string
- name: date
dtype: timestamp[s]
- name: item_type
dtype: string
- name: word_count
dtype: int32
- name: ocr_quality_mean
dtype: float64
- name: ocr_quality_sd
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 14304741164
num_examples: 3065408
download_size: 9682476047
dataset_size: 14304741164
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc0-1.0
task_categories:
- text-generation
language:
- en
tags:
- newspapers
pretty_name: Heritage Made Digital Newspapers
size_categories:
- 1M<n<10M
---
# Dataset Card for Heritage Made Digital Newspapers
## Table of Contents
- [Dataset Card for Heritage Made Digital Newspapers](#dataset-card-for-heritage-made-digital-newspapers)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://bl.iro.bl.uk/?locale=en
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains text extracted at the article level from historic digitised newspapers from the [Heritage Made Digital](https://bl.iro.bl.uk/collections/9a6a4cdd-2bfe-47bb-8c14-c0a5d100501f?locale=en) newspaper digitisation program at the [British Library](https://www.bl.uk/). The newspapers in the dataset were published between 1800 and 1896. This dataset contains ~2.5 billion tokens and 3,065,408 articles.
The dataset contains text generated from Optical Character Recognition software on digitised newspaper pages. This dataset includes the plain text from the OCR alongside some minimal metadata associated with the newspaper from which the text is derived and OCR confidence score information generated from the OCR software.
### Supported Tasks and Leaderboards
This dataset can be used for:
- historical research and digital humanities research
- training language models
- training historic language models
Whilst this dataset can be used for all of these tasks, it is important to understand that the dataset was not constructed in a representative way, so it contains biases in terms of the newspapers and articles that are included (more on this below).
### Languages
The text in this dataset is in English, as recognised by the OCR software. The OCR software used is generic commercial OCR software that has not been trained on historic newspapers, so there are many errors in the text. Some of the OCR in this text will be of such poor quality that it is incomprehensible to a human reader.
## Dataset Structure
### Data Instances
Each row in the dataset is an article from a newspaper as recognised by an OLR (Optical Layout Recognition) step in the digitisation process.
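A minimal sketch showing how the metadata fields can be used to subset the corpus, for example by OCR quality:
```python
from datasets import load_dataset

ds = load_dataset("biglam/hmd_newspapers", split="train")

# Keep only articles whose mean OCR confidence is above 0.9.
good_ocr = ds.filter(
    lambda x: x["ocr_quality_mean"] is not None and x["ocr_quality_mean"] > 0.9
)
print(f"{len(good_ocr)} of {len(ds)} articles pass the quality threshold")
```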
### Data Splits
There is one split in this dataset, the training split.
## Dataset Creation
### Curation Rationale
This dataset consists of public-domain newspapers published in the UK during the 19th Century. The majority of newspapers digitised in the UK are not freely available (even if they are out of copyright). The newspapers in this dataset were digitised specifically to be freely available but also to meet preservation goals for newspapers in poor condition. As a result, the newspapers chosen for digitisation are biased toward poor quality physical newspapers. This may in turn result in worse OCR.
### Source Data
The source data for this dataset is the digitised newspapers from the [Heritage Made Digital](https://bl.iro.bl.uk/collections/9a6a4cdd-2bfe-47bb-8c14-c0a5d100501f?locale=en) newspaper digitisation program. The newspapers in the dataset were published between 1800 and 1870.
### Dataset Curators
The original digitisation was carried out by the British Library. The dataset was created by the British Library in partnership with Findmypast.
This dataset was created by [@davanstrien](https://huggingface.co/davanstrien).
### Licensing Information
The newspapers in this dataset are in the public domain. The dataset is licensed under a [Creative Commons Zero v1.0 Universal](https://creativecommons.org/publicdomain/zero/1.0/) license.
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.
|
biglam/blbooks-parquet | biglam | 2024-01-30T09:41:53Z | 19,514 | 6 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:other",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"multilinguality:multilingual",
"source_datasets:blbooks",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:it",
"language:nl",
"license:cc0-1.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"digital-humanities-research"
] | [
"text-generation",
"fill-mask",
"other"
] | 2023-07-13T12:48:12Z | null | ---
annotations_creators:
- no-annotation
language_creators:
- machine-generated
language:
- de
- en
- es
- fr
- it
- nl
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- blbooks
task_categories:
- text-generation
- fill-mask
- other
task_ids:
- language-modeling
- masked-language-modeling
pretty_name: British Library Books
tags:
- digital-humanities-research
dataset_info:
- config_name: '1500_1899'
features:
- name: record_id
dtype: string
- name: date
dtype: timestamp[s]
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 30452067039
num_examples: 14011953
download_size: 10486035662
dataset_size: 30452067039
- config_name: '1510_1699'
features:
- name: record_id
dtype: string
- name: date
dtype: timestamp[s]
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 107667469
num_examples: 51982
download_size: 42320165
dataset_size: 107667469
- config_name: '1700_1799'
features:
- name: record_id
dtype: string
- name: date
dtype: timestamp[s]
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 267117831
num_examples: 178224
download_size: 95137895
dataset_size: 267117831
- config_name: 1700s
features:
- name: record_id
dtype: string
- name: date
dtype: int32
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 266382657
num_examples: 178224
download_size: 95137895
dataset_size: 266382657
- config_name: '1800_1899'
features:
- name: record_id
dtype: string
- name: date
dtype: timestamp[s]
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 30077284377
num_examples: 13781747
download_size: 10348577602
dataset_size: 30077284377
- config_name: 1800s
features:
- name: record_id
dtype: string
- name: date
dtype: int32
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 30020434670
num_examples: 13781747
download_size: 10348577602
dataset_size: 30020434670
- config_name: all
features:
- name: record_id
dtype: string
- name: date
dtype: int32
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 30394267732
num_examples: 14011953
download_size: 10486035662
dataset_size: 30394267732
- config_name: default
features:
- name: record_id
dtype: string
- name: date
dtype: timestamp[s]
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 30451982919
num_examples: 14011953
download_size: 16402835443
dataset_size: 30451982919
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for British Library Books
This dataset is the same as https://huggingface.co/datasets/TheBritishLibrary/blbooks; however, this version is stored as Parquet to avoid the need to run a dataset loading script. This also makes loading this dataset much quicker.
## Table of Contents
- [Dataset Card for British Library Books](#dataset-card-for-British-Library-Books)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Language model training](#language-model-training)
- [Supervised tasks](#supervised-tasks)
- [Languages](#languages)
- [Language change](#language-change)
- [Optical Character Recognition](#optical-character-recognition)
- [OCR word confidence](#ocr-word-confidence)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Date normalization](#date-normalization)
- [Metadata included](#metadata-included)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Colonialism](#colonialism)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.bl.uk/collection-guides/digitised-printed-books
- **Repository:** https://doi.org/10.21250/db14
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [email protected]
### Dataset Summary
This dataset consists of books digitised by the British Library in partnership with Microsoft. The dataset includes ~25 million pages of out-of-copyright texts. The majority of the texts were published in the 18th and 19th centuries, but the collection also consists of a smaller number of books from earlier periods. Items within this collection cover a wide range of subject areas, including geography, philosophy, history, poetry and literature, and are published in various languages. The dataset consists of around *7.67 billion words*.
While the books are predominantly from the 18th and 19th centuries, a smaller number date from earlier periods. The number of pages in the corpus by decade:
| | page count |
| ---- | ---------- |
| 1510 | 94 |
| 1520 | 32 |
| 1540 | 184 |
| 1550 | 16 |
| 1580 | 276 |
| 1590 | 540 |
| 1600 | 1117 |
| 1610 | 1132 |
| 1620 | 1856 |
| 1630 | 9274 |
| 1640 | 4232 |
| 1650 | 2944 |
| 1660 | 5858 |
| 1670 | 11415 |
| 1680 | 8348 |
| 1690 | 13756 |
| 1700 | 10160 |
| 1710 | 9556 |
| 1720 | 10314 |
| 1730 | 13282 |
| 1740 | 10778 |
| 1750 | 12001 |
| 1760 | 21415 |
| 1770 | 28490 |
| 1780 | 32676 |
| 1790 | 50014 |
| 1800 | 307806 |
| 1810 | 478008 |
| 1820 | 589419 |
| 1830 | 681212 |
| 1840 | 1113473 |
| 1850 | 1726108 |
| 1860 | 1725407 |
| 1870 | 2069089 |
| 1880 | 2585159 |
| 1890 | 3365031 |
[More Information Needed]
### Supported Tasks and Leaderboards
This collection has been previously used across various digital history and humanities projects since being published.
The dataset consists of text and a range of metadata associated with this text. This metadata includes:
- date of publication
- place of publication
- country of publication
- language
- OCR quality
- physical description of the original physical item
#### Language model training
As a relatively large dataset, `blbooks` provides a source dataset for training language models. The presence of this metadata also offers interesting opportunities to use this dataset as a source for training language models based on:
- specific time-periods
- specific languages
- certain OCR quality thresholds
The above is not an exhaustive list but offers some suggestions of how the dataset can be used to explore topics such as the impact of OCR quality on language models, the ‘transferability’ of language models across time or the impact of training multilingual language models on historical languages.
#### Supervised tasks
Whilst this dataset does not have annotations for a specific NLP task, such as Named Entity Recognition, it does include a wide variety of metadata. This metadata has the potential to be used for training and/or evaluating a variety of supervised tasks predicting this metadata.
### Languages
This dataset consists of books published in several languages. The breakdown of the languages included (at the page level) is:
| Language | Pages |
| --------------------- | -------- |
| English | 10039463 |
| French | 1442929 |
| German | 1172793 |
| Spanish | 286778 |
| Italian | 214255 |
| Dutch | 204759 |
| Russian | 193347 |
| Danish | 93366 |
| Hungarian | 88094 |
| Swedish | 76225 |
| Polish | 58901 |
| Greek, Modern (1453-) | 26104 |
| Latin | 25611 |
| Portuguese | 25410 |
| Czech | 20160 |
| Bulgarian | 7891 |
| Finnish | 5677 |
| Irish | 2743 |
| Serbian | 1975 |
| Romanian | 1544 |
| Norwegian Nynorsk | 1398 |
| Croatian | 1306 |
| Norwegian | 1227 |
| Icelandic | 902 |
| Slovak | 840 |
| Lithuanian | 714 |
| Welsh | 580 |
| Slovenian | 545 |
| Indonesian | 418 |
| Cornish | 223 |
This breakdown was derived from the first language in the associated metadata field. Some books include multiple languages. Some of the language codes for this data were also derived using computational methods. Therefore, the language fields in the dataset should be treated with some caution (discussed in more detail below).
#### Language change
The publication dates of books in the data cover a broad period of time (1500-1900). For languages in the dataset with broad temporal coverage, significant [language change](https://en.wikipedia.org/wiki/Language_change) might be found. The ability to study this change by taking reasonably large samples of languages covering different time periods is one of the opportunities offered by this dataset. The fact that the text in this dataset was produced via Optical Character Recognition (OCR) causes some challenges for this type of research (see below).
#### Optical Character Recognition
The digitised books in this collection were transformed into machine-readable text using Optical Character Recognition (OCR) software. The text produced via OCR software will usually include some errors. These errors include mistakes at the character level (for example, an `i` mistaken for an `l`), at the word level, or across significant passages of text.
The books in this dataset can pose some additional challenges for OCR software. OCR errors can stem from:
- the quality of the original printing: printing was still a developing technology during the period covered by this corpus; some of the original book text will include misprints, or blurred or faded ink that is hard to read
- damage to the page: some of the books will have become damaged over time, this can obscure all or parts of the text on a page
- poor quality scans: scanning books can be challenging; for example, if the book has tight bindings, it can be hard to capture text that has fallen into the [gutter](https://www.abaa.org/glossary/entry/gutter) of the book.
- the language used in the books may differ from the languages OCR software is predominantly trained to recognise.
##### OCR word confidence
Many OCR engines produce some form of confidence score alongside the predicted text. These confidence scores are usually at the character or word level. A word confidence score was given for each word in the original ALTO XML versions of the text in this dataset. The OCR confidence scores should be treated with some scepticism. For historical text, or in a lower-resource language, a low confidence score may simply flag words not included in a modern dictionary that are nevertheless accurate transcriptions of the original text. With that said, the confidence scores do give some sense of the OCR quality.
An example of text with a high (over 90% mean word confidence score):
```
8 direction to the Conduit, round which is a wide open space, and a good broad pavement called the Parade. It commands a pleasant peep of the slopes and terrace throughout its entire length. The street continuing from the Conduit, in the same general direction, was known anciently as Lodborne Lane, and is now named South Street. From the Conduit two other streets, at right angles to these, are Long Street, leading Eastwards, and Half-Moon Street (formerly Lodborne), leading to Westbury, Trendle Street, and the Horsecastles Road.
```
An example of text with a score below 40%:
```
Hannover. Schrift und Druck von Fr. CultniTmn,',
"LeMNs'utluirui.",
'ü 8u«llim» M^äalßwi 01de!lop 1<M.',
'p^dnalmw vom Xr^u/e, lpiti>»**Kmm lie« !»^2!M kleine lii!<! (,«>* ttünee!<»e^ v»n tndzt Lievclum, 1872,
```
The quality of OCR - as measured by mean OCR confidence for a page - across the dataset correlates with other features. A groupby of publication decade and mean word confidence:
| decade | mean_wc_ocr |
| ------ | ----------- |
| 1510 | 0.499151 |
| 1520 | 0.544818 |
| 1540 | 0.511589 |
| 1550 | 0.4505 |
| 1580 | 0.321858 |
| 1590 | 0.461282 |
| 1600 | 0.467318 |
| 1610 | 0.495895 |
| 1620 | 0.501257 |
| 1630 | 0.49766 |
| 1640 | 0.512095 |
| 1650 | 0.528534 |
| 1660 | 0.521014 |
| 1670 | 0.592575 |
| 1680 | 0.583901 |
| 1690 | 0.567202 |
| 1700 | 0.575175 |
| 1710 | 0.61436 |
| 1720 | 0.627725 |
| 1730 | 0.658534 |
| 1740 | 0.64214 |
| 1750 | 0.657357 |
| 1760 | 0.6389 |
| 1770 | 0.651883 |
| 1780 | 0.632326 |
| 1790 | 0.664279 |
| 1800 | 0.682338 |
| 1810 | 0.708915 |
| 1820 | 0.730015 |
| 1830 | 0.730973 |
| 1840 | 0.713886 |
| 1850 | 0.697106 |
| 1860 | 0.696701 |
| 1870 | 0.717233 |
| 1880 | 0.733331 |
| 1890 | 0.762364 |
As might be expected, the earlier periods have lower mean word confidence scores. Again, all of this should be treated with some scepticism, especially as the size of the data grows over time.
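A minimal sketch for reproducing this per-decade summary with pandas (assuming the default configuration, where `date` is a timestamp):
```python
import pandas as pd
from datasets import load_dataset

ds = load_dataset("biglam/blbooks-parquet", split="train")

# Pull just the two columns needed and bucket publication dates by decade.
df = pd.DataFrame({"date": ds["date"], "mean_wc_ocr": ds["mean_wc_ocr"]})
df["decade"] = df["date"].dt.year // 10 * 10
print(df.groupby("decade")["mean_wc_ocr"].mean())
```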
As with time, the mean word confidence of the OCR software varies across languages:
| Language_1 | mean_wc_ocr |
| --------------------- | ----------- |
| Croatian | 0.755565 |
| Welsh | 0.7528 |
| Norwegian Nynorsk | 0.751648 |
| Slovenian | 0.746007 |
| French | 0.740772 |
| Finnish | 0.738032 |
| Czech | 0.737849 |
| Hungarian | 0.736076 |
| Dutch | 0.734977 |
| Cornish | 0.733682 |
| Danish | 0.733106 |
| English | 0.733037 |
| Irish | 0.732658 |
| Portuguese | 0.727746 |
| Spanish | 0.725111 |
| Icelandic | 0.724427 |
| Italian | 0.715839 |
| Swedish | 0.715633 |
| Polish | 0.715133 |
| Lithuanian | 0.700003 |
| Bulgarian | 0.694657 |
| Romanian | 0.692957 |
| Latin | 0.689022 |
| Russian | 0.685847 |
| Serbian | 0.674329 |
| Slovak | 0.66739 |
| Greek, Modern (1453-) | 0.632195 |
| German | 0.631457 |
| Indonesian | 0.6155 |
| Norwegian | 0.597987 |
Again, these numbers should be treated sceptically since some languages appear very infrequently. For example, the above table suggests the mean word confidence for Welsh is relatively high. However, there isn’t much Welsh in the dataset. Therefore, it is unlikely that this data will be particularly useful for training (historic) Welsh language models.
[More Information Needed]
## Dataset Structure
The dataset has a number of configurations relating to the different dates of publication in the underlying data:
- `1500_1899`: this configuration covers all years
- `1800_1899`: this configuration covers the years between 1800 and 1899
- `1700_1799`: this configuration covers the years between 1700 and 1799
- `1510_1699`: this configuration covers the years between 1510 and 1699
### Configuration option
All of the configurations have an optional keyword argument `skip_empty_pages`, which is set to `True` by default. The underlying dataset includes some pages with no text; this could be because the underlying book page didn't have any text or because the OCR software failed to detect it.
For many uses of this dataset it doesn't make sense to include empty pages so these are skipped by default. However, for some uses you may prefer to retain a representation of the data that includes these empty pages. Passing `skip_empty_pages=False` when loading the dataset will enable this option.
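If the `skip_empty_pages` keyword is unavailable when loading this Parquet version, the same effect can be reproduced with an explicit filter on the `empty_pg` field — a minimal sketch:
```python
from datasets import load_dataset

ds = load_dataset("biglam/blbooks-parquet", split="train")

# Drop pages flagged as empty, mirroring skip_empty_pages=True.
non_empty = ds.filter(lambda x: not x["empty_pg"])
print(f"{len(non_empty)} non-empty pages out of {len(ds)}")
```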
### Data Instances
An example data instance:
```python
{'Country of publication 1': 'England',
'Language_1': 'English',
'Language_2': None,
'Language_3': None,
'Language_4': None,
'Physical description': None,
'Publisher': None,
'all Countries of publication': 'England',
'all names': 'Settle, Elkanah [person]',
'date': 1689,
'empty_pg': True,
'mean_wc_ocr': 0.0,
'multi_language': False,
'name': 'Settle, Elkanah',
'pg': 1,
'place': 'London',
'raw_date': '1689',
'record_id': '001876770',
'std_wc_ocr': 0.0,
'text': None,
'title': 'The Female Prelate: being the history and the life and death of Pope Joan. A tragedy [in five acts and in verse] . Written by a Person of Quality [i.e. Elkanah Settle]'}
```
Each instance in the dataset represents a single page from an original digitised book.
### Data Fields
Included in this dataset are:
| Field | Data Type | Description |
| ---------------------------- | --------- | ------------------------------------------------------------------------------------------------------------- |
| record_id | string | British Library ID for the item |
| date | int | parsed/normalised year for the item. i.e. 1850 |
| raw_date | string | the original raw date for an item i.e. 1850- |
| title | string | title of the book |
| place | string | Place of publication, i.e. London |
| empty_pg | bool | whether page contains text |
| text | string | OCR generated text for a page |
| pg | int | page in original book the instance refers to |
| mean_wc_ocr | float | mean word confidence values for the page |
| std_wc_ocr | float | standard deviation of the word confidence values for the page |
| name | string | name associated with the item (usually author) |
| all names | string | all names associated with a publication |
| Publisher | string | publisher of the book |
| Country of publication 1 | string | first country associated with publication |
| all Countries of publication | string | all countries associated with a publication |
| Physical description | string | physical description of the item (size). This requires some normalisation before use and isn’t always present |
| Language_1                   | string    | first language associated with the book; this is usually present                                                |
| Language_2                   | string    | second language associated with the book, if any                                                                |
| Language_3                   | string    | third language associated with the book, if any                                                                 |
| Language_4                   | string    | fourth language associated with the book, if any                                                                |
| multi_language               | bool      | whether more than one language is associated with the book                                                      |
Some of these fields are not populated a large proportion of the time. You can get some sense of this from this [Pandas Profiling](https://github.com/pandas-profiling/pandas-profiling) [report](https://davanstrien.github.io/BL-datasets-pandas-profile-reports/pandas_profile_report_MS_digitised_books_2021-01-09.html).
The majority of these fields relate to metadata about the books. Most of these fields were created by staff working for the British Library. The notable exception is the “Languages” fields, which have sometimes been determined using computational methods. This work is reported in more detail in [Automated Language Identification of Bibliographic Resources](https://doi.org/10.1080/01639374.2019.1700201). It is important to note that metadata is neither perfect nor static. The metadata associated with these books was generated based on an export from the British Library catalogue in 2021.
[More Information Needed]
### Data Splits
This dataset contains a single split `train`.
## Dataset Creation
**Note** this section is a work in progress.
### Curation Rationale
The books in this collection were digitised as part of a project partnership between the British Library and Microsoft. [Mass digitisation](https://en.wikipedia.org/wiki/Category:Mass_digitization), i.e. projects that aim to digitise large volumes of material quickly, shapes the selection of materials in several ways. Considerations often involved in the decision of whether to include items for digitisation include (but are not limited to):
- copyright status
- preservation needs
- the size of an item: very large and very small items are often hard to digitise quickly
These criteria can have knock-on effects on the makeup of a collection. For example, systematically excluding large books may result in some types of book content not being digitised. Since the size of a volume is likely to be correlated with its content to at least some extent, excluding large volumes from digitisation will mean that the content they carry is underrepresented. Similarly, copyright status is often (but not only) determined by publication date. This can often lead to a rapid fall in the number of items in a collection after a certain cut-off date.
All of the above is largely to make clear that this collection was not curated to create a representative sample of the British Library’s holdings. Some material will be over-represented, and others under-represented. Similarly, the collection should not be considered a representative sample of what was published across the period covered by the dataset (nor that the relative proportions of the data for each time period represent a proportional sample of publications from that period). Finally, and this probably does not need stating, the language included in the text should not be considered representative of either written or spoken language(s) from that time period.
[More Information Needed]
### Source Data
The source data (physical items) includes a variety of resources (predominantly monographs) held by the [British Library](https://bl.uk/). The British Library is a [Legal Deposit](https://www.bl.uk/legal-deposit/about-legal-deposit) library. “Legal deposit requires publishers to provide a copy of every work they publish in the UK to the British Library. It’s existed in English law since 1662.” [source](https://www.bl.uk/legal-deposit/about-legal-deposit).
The source data for this version of the data is derived from the original ALTO XML files and a recent metadata export #TODO add links
[More Information Needed]
#### Initial Data Collection and Normalization
This version of the dataset was created using the original ALTO XML files and, where a match was found, by updating the metadata associated with an item using a more recent export from the British Library catalogue. The process of creating this new dataset is documented here #TODO add link.
There are a few decisions made in the above processing steps worth highlighting in particular:
##### Date normalization
The metadata around the date of publication for an item is not always exact. It is often represented as a date range, e.g. `1850-1860`. The `date` field above takes steps to normalise this date to a single integer value. In most cases, this is done by taking the mean of the values associated with the item. The `raw_date` field includes the unprocessed date string.
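A sketch of this normalisation step (an illustration; the actual pipeline may handle edge cases differently):
```python
import re

def normalise_date(raw_date: str) -> int:
    """Collapse a raw catalogue date such as '1850-1860' or '1850-' to a single year."""
    years = [int(y) for y in re.findall(r"\d{4}", raw_date)]
    # Take the mean of the years found; a single year maps to itself.
    return round(sum(years) / len(years))

assert normalise_date("1850-1860") == 1855
assert normalise_date("1689") == 1689
```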
##### Metadata included
The metadata associated with each item includes most of the fields available via the ALTO XML. However, the data doesn’t include some metadata fields from the metadata export file. The reason these fields were excluded is that they are frequently not populated. A cut-off of 50% was chosen, i.e. metadata fields with missing values more than 50% of the time were not included. This is slightly arbitrary, but since the aim of this version of the data was to support computational research using the collection, it was felt that fields with frequent missing values would be less valuable.
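A sketch of how such a cut-off could be applied with pandas (an illustration, not the exact code used):
```python
import pandas as pd

def drop_sparse_columns(df: pd.DataFrame, threshold: float = 0.5) -> pd.DataFrame:
    """Drop columns whose fraction of missing values exceeds `threshold`."""
    missing_fraction = df.isna().mean()
    keep = missing_fraction[missing_fraction <= threshold].index
    return df[keep]
```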
#### Who are the source language producers?
[More Information Needed]
### Annotations
This dataset does not include annotations as usually understood in the context of NLP. The data does include metadata associated with the books.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
There are a range of considerations around using the data. These include the representativeness of the dataset, the OCR quality and the language used. Depending on your use case, these may be more or less important. For example, the impact of OCR quality on downstream tasks will depend on the target task. It may also be possible to mitigate the negative impact of OCR errors through tokenizer choice, language model training objectives, oversampling high-quality OCR, etc.
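As one possible mitigation sketch, pages can be filtered on the `mean_wc_ocr` field to oversample higher-quality OCR (the 0.8 threshold is an arbitrary illustration; `ds` is the dataset loaded as in the earlier sketch):
```python
# Keep only non-empty pages whose mean OCR word confidence is reasonably high.
high_quality = ds["train"].filter(
    lambda page: page["text"] is not None and page["mean_wc_ocr"] >= 0.8
)
```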
[More Information Needed]
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
The text in this collection is derived from historical texts. As a result, it will reflect the social beliefs and attitudes of the period in which those texts were written. The collection includes both fiction and non-fiction books.
Examples of book titles that appear in the data (these are randomly sampled from all titles):
- ‘Rhymes and Dreams, Legends of Pendle Forest, and other poems’,
- “Précis of Information concerning the Zulu Country, with a map. Prepared in the Intelligence Branch of the Quarter-Master-General’s Department, Horse Guards, War Office, etc”,
- ‘The fan. A poem’,
- ‘Grif; a story of Australian Life’,
- ‘Calypso; a masque: in three acts, etc’,
- ‘Tales Uncle told [With illustrative woodcuts.]’,
- 'Questings',
- 'Home Life on an Ostrich Farm. With ... illustrations’,
- ‘Bulgarya i Bulgarowie’,
- 'Εἰς τα βαθη της Ἀφρικης [In darkest Africa.] ... Μεταφρασις Γεωρ. Σ. Βουτσινα, etc',
- ‘The Corsair, a tale’,
- ‘Poems ... With notes [With a portrait.]’,
- ‘Report of the Librarian for the year 1898 (1899, 1901, 1909)’,
- “The World of Thought. A novel. By the author of ‘Before I began to speak.’”,
- 'Amleto; tragedia ... recata in versi italiani da M. Leoni, etc'
While titles alone are insufficient to investigate bias in this collection, they give some insight into the topics covered by the books. Further, the titles highlight some particular types of bias we might find in the collection. This should in no way be considered an exhaustive list.
#### Colonialism
Even in the above random sample of titles, we can see examples of colonial attitudes. We can try to interrogate this further by searching for the names of places that were part of the British Empire when many of these books were published.
Searching for the string `India` in the titles and randomly sampling 10 titles returns:
- “Travels in India in the Seventeenth Century: by Sir Thomas Roe and Dr. John Fryer. Reprinted from the ‘Calcutta Weekly Englishman.’”,
- ‘A Winter in India and Malaysia among the Methodist Missions’,
- “The Tourist’s Guide to all the principal stations on the railways of Northern India [By W. W.] ... Fifth edition”,
- ‘Records of Sport and Military Life in Western India ... With an introduction by ... G. B. Malleson’,
- "Lakhmi, the Rájpút's Bride. A tale of Gujarát in Western India [A poem.]”,
- ‘The West India Commonplace Book: compiled from parliamentary and official documents; shewing the interest of Great Britain in its Sugar Colonies’,
- “From Tonkin to India : by the sources of the Irawadi, January’ 95-January ’96”,
- ‘Case of the Ameers of Sinde : speeches of Mr. John Sullivan, and Captain William Eastwick, at a special court held at the India House, ... 26th January, 1844’,
- ‘The Andaman Islands; their colonisation, etc. A correspondence addressed to the India Office’,
- ‘Ancient India as described by Ptolemy; being a translation of the chapters which describe India and Eastern Asia in the treatise on Geography written by Klaudios Ptolemaios ... with introduction, commentary, map of India according to Ptolemy, and ... index, by J. W. McCrindle’]
Searching for the string `Africa` in the titles and randomly sampling 10 titles returns:
- 'De Benguella ás Terras de Iácca. Descripção de uma viagem na Africa Central e Occidental ... Expedição organisada nos annos de 1877-1880. Edição illustrada',
- ‘To the New Geographical Society of Edinburgh [An address on Africa by H. M. Stanley.]’,
- ‘Diamonds and Gold in South Africa ... With maps, etc’,
- ‘Missionary Travels and Researches in South Africa ... With notes by F. S. Arnot. With map and illustrations. New edition’,
- ‘A Narrative of a Visit to the Mauritius and South Africa ... Illustrated by two maps, sixteen etchings and twenty-eight wood-cuts’,
- ‘Side Lights on South Africa ... With a map, etc’,
- ‘My Second Journey through Equatorial Africa ... in ... 1886 and 1887 ... Translated ... by M. J. A. Bergmann. With a map ... and ... illustrations, etc’,
- ‘Missionary Travels and Researches in South Africa ... With portrait and fullpage illustrations’,
- ‘[African sketches.] Narrative of a residence in South Africa ... A new edition. To which is prefixed a biographical sketch of the author by J. Conder’,
- ‘Lake Ngami; or, Explorations and discoveries during four years wandering in the wilds of South Western Africa ... With a map, and numerous illustrations, etc’
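Such searches are straightforward to reproduce; a sketch using pandas, assuming the pages have been loaded into a DataFrame `df` with the `title` field described above:
```python
import pandas as pd

# df is assumed to hold one row per page with the metadata fields described above.
titles = df["title"].dropna().drop_duplicates()
india_titles = titles[titles.str.contains("India")]
print(india_titles.sample(10, random_state=0))
```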
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The books are licensed under the [CC Public Domain Mark 1.0](https://creativecommons.org/publicdomain/mark/1.0/) license.
### Citation Information
```bibtex
@misc{bBritishLibraryBooks2021,
author = {British Library Labs},
title = {Digitised Books. c. 1510 - c. 1900. JSONL (OCR derived text + metadata)},
year = {2021},
publisher = {British Library},
howpublished = {https://doi.org/10.23636/r7w6-zy15}
}
```
### Contributions
Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset. |
corto-ai/handwritten-text | corto-ai | 2024-01-29T00:25:32Z | 141 | 14 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-01-29T00:25:16Z | 2 | ---
dataset_info:
features:
- name: text
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 167800178.75
num_examples: 6482
- name: valid
num_bytes: 24887435.0
num_examples: 976
- name: test
num_bytes: 73857843.625
num_examples: 2915
download_size: 265569932
dataset_size: 266545457.375
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
---
|
google/code_x_glue_ct_code_to_text | google | 2024-01-24T15:09:09Z | 1,709 | 69 | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:other-programming-languages",
"source_datasets:original",
"language:code",
"language:en",
"license:c-uda",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"code-to-text"
] | [
"translation"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- found
language_creators:
- found
language:
- code
- en
license:
- c-uda
multilinguality:
- other-programming-languages
size_categories:
- 100K<n<1M
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
pretty_name: CodeXGlueCtCodeToText
config_names:
- go
- java
- javascript
- php
- python
- ruby
tags:
- code-to-text
dataset_info:
- config_name: go
features:
- name: id
dtype: int32
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 342243143
num_examples: 167288
- name: validation
num_bytes: 13721860
num_examples: 7325
- name: test
num_bytes: 16328406
num_examples: 8122
download_size: 121341698
dataset_size: 372293409
- config_name: java
features:
- name: id
dtype: int32
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 452553835
num_examples: 164923
- name: validation
num_bytes: 13366344
num_examples: 5183
- name: test
num_bytes: 29080753
num_examples: 10955
download_size: 154701399
dataset_size: 495000932
- config_name: javascript
features:
- name: id
dtype: int32
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 160860431
num_examples: 58025
- name: validation
num_bytes: 10337344
num_examples: 3885
- name: test
num_bytes: 10190713
num_examples: 3291
download_size: 65788314
dataset_size: 181388488
- config_name: php
features:
- name: id
dtype: int32
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 614654499
num_examples: 241241
- name: validation
num_bytes: 33283045
num_examples: 12982
- name: test
num_bytes: 35374993
num_examples: 14014
download_size: 219692158
dataset_size: 683312537
- config_name: python
features:
- name: id
dtype: int32
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 813663148
num_examples: 251820
- name: validation
num_bytes: 46888564
num_examples: 13914
- name: test
num_bytes: 50659688
num_examples: 14918
download_size: 325551862
dataset_size: 911211400
- config_name: ruby
features:
- name: id
dtype: int32
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 51956439
num_examples: 24927
- name: validation
num_bytes: 2821037
num_examples: 1400
- name: test
num_bytes: 2671551
num_examples: 1261
download_size: 21921316
dataset_size: 57449027
configs:
- config_name: go
data_files:
- split: train
path: go/train-*
- split: validation
path: go/validation-*
- split: test
path: go/test-*
- config_name: java
data_files:
- split: train
path: java/train-*
- split: validation
path: java/validation-*
- split: test
path: java/test-*
- config_name: javascript
data_files:
- split: train
path: javascript/train-*
- split: validation
path: javascript/validation-*
- split: test
path: javascript/test-*
- config_name: php
data_files:
- split: train
path: php/train-*
- split: validation
path: php/validation-*
- split: test
path: php/test-*
- config_name: python
data_files:
- split: train
path: python/train-*
- split: validation
path: python/validation-*
- split: test
path: python/test-*
- config_name: ruby
data_files:
- split: train
path: ruby/train-*
- split: validation
path: ruby/validation-*
- split: test
path: ruby/test-*
---
# Dataset Card for "code_x_glue_ct_code_to_text"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text
### Dataset Summary
CodeXGLUE code-to-text dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text
The dataset we use comes from CodeSearchNet and is filtered as follows:
- Remove examples whose code cannot be parsed into an abstract syntax tree.
- Remove examples whose documentation is shorter than 3 tokens or longer than 256 tokens.
- Remove examples whose documentation contains special tokens (e.g. `<img ...>` or `https:...`).
- Remove examples whose documentation is not written in English.
### Supported Tasks and Leaderboards
- `machine-translation`: The dataset can be used to train a model for automatically generating **English** docstrings for code.
### Languages
- Go **programming** language
- Java **programming** language
- Javascript **programming** language
- PHP **programming** language
- Python **programming** language
- Ruby **programming** language
- English **natural** language
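Each programming language has its own configuration, which can be loaded individually; for example:
```python
from datasets import load_dataset

# Config names correspond to languages: go, java, javascript, php, python, ruby.
ds = load_dataset("google/code_x_glue_ct_code_to_text", "python")
print(ds["train"][0]["docstring"])
```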
## Dataset Structure
### Data Instances
#### go
An example of 'test' looks as follows.
```
{
"code": "func NewSTM(c *v3.Client, apply func(STM) error, so ...stmOption) (*v3.TxnResponse, error) {\n\topts := &stmOptions{ctx: c.Ctx()}\n\tfor _, f := range so {\n\t\tf(opts)\n\t}\n\tif len(opts.prefetch) != 0 {\n\t\tf := apply\n\t\tapply = func(s STM) error {\n\t\t\ts.Get(opts.prefetch...)\n\t\t\treturn f(s)\n\t\t}\n\t}\n\treturn runSTM(mkSTM(c, opts), apply)\n}",
"code_tokens": ["func", "NewSTM", "(", "c", "*", "v3", ".", "Client", ",", "apply", "func", "(", "STM", ")", "error", ",", "so", "...", "stmOption", ")", "(", "*", "v3", ".", "TxnResponse", ",", "error", ")", "{", "opts", ":=", "&", "stmOptions", "{", "ctx", ":", "c", ".", "Ctx", "(", ")", "}", "\n", "for", "_", ",", "f", ":=", "range", "so", "{", "f", "(", "opts", ")", "\n", "}", "\n", "if", "len", "(", "opts", ".", "prefetch", ")", "!=", "0", "{", "f", ":=", "apply", "\n", "apply", "=", "func", "(", "s", "STM", ")", "error", "{", "s", ".", "Get", "(", "opts", ".", "prefetch", "...", ")", "\n", "return", "f", "(", "s", ")", "\n", "}", "\n", "}", "\n", "return", "runSTM", "(", "mkSTM", "(", "c", ",", "opts", ")", ",", "apply", ")", "\n", "}"],
"docstring": "// NewSTM initiates a new STM instance, using serializable snapshot isolation by default.",
"docstring_tokens": ["NewSTM", "initiates", "a", "new", "STM", "instance", "using", "serializable", "snapshot", "isolation", "by", "default", "."],
"func_name": "NewSTM",
"id": 0,
"language": "go",
"original_string": "func NewSTM(c *v3.Client, apply func(STM) error, so ...stmOption) (*v3.TxnResponse, error) {\n\topts := &stmOptions{ctx: c.Ctx()}\n\tfor _, f := range so {\n\t\tf(opts)\n\t}\n\tif len(opts.prefetch) != 0 {\n\t\tf := apply\n\t\tapply = func(s STM) error {\n\t\t\ts.Get(opts.prefetch...)\n\t\t\treturn f(s)\n\t\t}\n\t}\n\treturn runSTM(mkSTM(c, opts), apply)\n}",
"path": "clientv3/concurrency/stm.go",
"repo": "etcd-io/etcd",
"sha": "616592d9ba993e3fe9798eef581316016df98906",
"url": "https://github.com/etcd-io/etcd/blob/616592d9ba993e3fe9798eef581316016df98906/clientv3/concurrency/stm.go#L89-L102"
}
```
#### java
An example of 'test' looks as follows.
```
{
"code": "protected final void fastPathOrderedEmit(U value, boolean delayError, Disposable disposable) {\n final Observer<? super V> observer = downstream;\n final SimplePlainQueue<U> q = queue;\n\n if (wip.get() == 0 && wip.compareAndSet(0, 1)) {\n if (q.isEmpty()) {\n accept(observer, value);\n if (leave(-1) == 0) {\n return;\n }\n } else {\n q.offer(value);\n }\n } else {\n q.offer(value);\n if (!enter()) {\n return;\n }\n }\n QueueDrainHelper.drainLoop(q, observer, delayError, disposable, this);\n }",
"code_tokens": ["protected", "final", "void", "fastPathOrderedEmit", "(", "U", "value", ",", "boolean", "delayError", ",", "Disposable", "disposable", ")", "{", "final", "Observer", "<", "?", "super", "V", ">", "observer", "=", "downstream", ";", "final", "SimplePlainQueue", "<", "U", ">", "q", "=", "queue", ";", "if", "(", "wip", ".", "get", "(", ")", "==", "0", "&&", "wip", ".", "compareAndSet", "(", "0", ",", "1", ")", ")", "{", "if", "(", "q", ".", "isEmpty", "(", ")", ")", "{", "accept", "(", "observer", ",", "value", ")", ";", "if", "(", "leave", "(", "-", "1", ")", "==", "0", ")", "{", "return", ";", "}", "}", "else", "{", "q", ".", "offer", "(", "value", ")", ";", "}", "}", "else", "{", "q", ".", "offer", "(", "value", ")", ";", "if", "(", "!", "enter", "(", ")", ")", "{", "return", ";", "}", "}", "QueueDrainHelper", ".", "drainLoop", "(", "q", ",", "observer", ",", "delayError", ",", "disposable", ",", "this", ")", ";", "}"],
"docstring": "Makes sure the fast-path emits in order.\n@param value the value to emit or queue up\n@param delayError if true, errors are delayed until the source has terminated\n@param disposable the resource to dispose if the drain terminates",
"docstring_tokens": ["Makes", "sure", "the", "fast", "-", "path", "emits", "in", "order", "."],
"func_name": "QueueDrainObserver.fastPathOrderedEmit",
"id": 0,
"language": "java",
"original_string": "protected final void fastPathOrderedEmit(U value, boolean delayError, Disposable disposable) {\n final Observer<? super V> observer = downstream;\n final SimplePlainQueue<U> q = queue;\n\n if (wip.get() == 0 && wip.compareAndSet(0, 1)) {\n if (q.isEmpty()) {\n accept(observer, value);\n if (leave(-1) == 0) {\n return;\n }\n } else {\n q.offer(value);\n }\n } else {\n q.offer(value);\n if (!enter()) {\n return;\n }\n }\n QueueDrainHelper.drainLoop(q, observer, delayError, disposable, this);\n }",
"path": "src/main/java/io/reactivex/internal/observers/QueueDrainObserver.java",
"repo": "ReactiveX/RxJava",
"sha": "ac84182aa2bd866b53e01c8e3fe99683b882c60e",
"url": "https://github.com/ReactiveX/RxJava/blob/ac84182aa2bd866b53e01c8e3fe99683b882c60e/src/main/java/io/reactivex/internal/observers/QueueDrainObserver.java#L88-L108"
}
```
#### javascript
An example of 'test' looks as follows.
```
{
"code": "function createInstance(defaultConfig) {\n var context = new Axios(defaultConfig);\n var instance = bind(Axios.prototype.request, context);\n\n // Copy axios.prototype to instance\n utils.extend(instance, Axios.prototype, context);\n\n // Copy context to instance\n utils.extend(instance, context);\n\n return instance;\n}",
"code_tokens": ["function", "createInstance", "(", "defaultConfig", ")", "{", "var", "context", "=", "new", "Axios", "(", "defaultConfig", ")", ";", "var", "instance", "=", "bind", "(", "Axios", ".", "prototype", ".", "request", ",", "context", ")", ";", "// Copy axios.prototype to instance", "utils", ".", "extend", "(", "instance", ",", "Axios", ".", "prototype", ",", "context", ")", ";", "// Copy context to instance", "utils", ".", "extend", "(", "instance", ",", "context", ")", ";", "return", "instance", ";", "}"],
"docstring": "Create an instance of Axios\n\n@param {Object} defaultConfig The default config for the instance\n@return {Axios} A new instance of Axios",
"docstring_tokens": ["Create", "an", "instance", "of", "Axios"],
"func_name": "createInstance",
"id": 0,
"language": "javascript",
"original_string": "function createInstance(defaultConfig) {\n var context = new Axios(defaultConfig);\n var instance = bind(Axios.prototype.request, context);\n\n // Copy axios.prototype to instance\n utils.extend(instance, Axios.prototype, context);\n\n // Copy context to instance\n utils.extend(instance, context);\n\n return instance;\n}",
"path": "lib/axios.js",
"repo": "axios/axios",
"sha": "92d231387fe2092f8736bc1746d4caa766b675f5",
"url": "https://github.com/axios/axios/blob/92d231387fe2092f8736bc1746d4caa766b675f5/lib/axios.js#L15-L26"
}
```
#### php
An example of 'train' looks as follows.
```
{
"code": "public static function build($serviceAddress, $restConfigPath, array $config = [])\n {\n $config += [\n 'httpHandler' => null,\n ];\n list($baseUri, $port) = self::normalizeServiceAddress($serviceAddress);\n $requestBuilder = new RequestBuilder(\"$baseUri:$port\", $restConfigPath);\n $httpHandler = $config['httpHandler'] ?: self::buildHttpHandlerAsync();\n return new RestTransport($requestBuilder, $httpHandler);\n }",
"code_tokens": ["public", "static", "function", "build", "(", "$", "serviceAddress", ",", "$", "restConfigPath", ",", "array", "$", "config", "=", "[", "]", ")", "{", "$", "config", "+=", "[", "'httpHandler'", "=>", "null", ",", "]", ";", "list", "(", "$", "baseUri", ",", "$", "port", ")", "=", "self", "::", "normalizeServiceAddress", "(", "$", "serviceAddress", ")", ";", "$", "requestBuilder", "=", "new", "RequestBuilder", "(", "\"$baseUri:$port\"", ",", "$", "restConfigPath", ")", ";", "$", "httpHandler", "=", "$", "config", "[", "'httpHandler'", "]", "?", ":", "self", "::", "buildHttpHandlerAsync", "(", ")", ";", "return", "new", "RestTransport", "(", "$", "requestBuilder", ",", "$", "httpHandler", ")", ";", "}"],
"docstring": "Builds a RestTransport.\n\n@param string $serviceAddress\nThe address of the API remote host, for example \"example.googleapis.com\".\n@param string $restConfigPath\nPath to rest config file.\n@param array $config {\nConfig options used to construct the gRPC transport.\n\n@type callable $httpHandler A handler used to deliver PSR-7 requests.\n}\n@return RestTransport\n@throws ValidationException",
"docstring_tokens": ["Builds", "a", "RestTransport", "."],
"func_name": "RestTransport.build",
"id": 0,
"language": "php",
"original_string": "public static function build($serviceAddress, $restConfigPath, array $config = [])\n {\n $config += [\n 'httpHandler' => null,\n ];\n list($baseUri, $port) = self::normalizeServiceAddress($serviceAddress);\n $requestBuilder = new RequestBuilder(\"$baseUri:$port\", $restConfigPath);\n $httpHandler = $config['httpHandler'] ?: self::buildHttpHandlerAsync();\n return new RestTransport($requestBuilder, $httpHandler);\n }",
"path": "src/Transport/RestTransport.php",
"repo": "googleapis/gax-php",
"sha": "48387fb818c6882296710a2302a0aa973b99afb2",
"url": "https://github.com/googleapis/gax-php/blob/48387fb818c6882296710a2302a0aa973b99afb2/src/Transport/RestTransport.php#L85-L94"
}
```
#### python
An example of 'validation' looks as follows.
```
{
"code": "def save_act(self, path=None):\n \"\"\"Save model to a pickle located at `path`\"\"\"\n if path is None:\n path = os.path.join(logger.get_dir(), \"model.pkl\")\n\n with tempfile.TemporaryDirectory() as td:\n save_variables(os.path.join(td, \"model\"))\n arc_name = os.path.join(td, \"packed.zip\")\n with zipfile.ZipFile(arc_name, 'w') as zipf:\n for root, dirs, files in os.walk(td):\n for fname in files:\n file_path = os.path.join(root, fname)\n if file_path != arc_name:\n zipf.write(file_path, os.path.relpath(file_path, td))\n with open(arc_name, \"rb\") as f:\n model_data = f.read()\n with open(path, \"wb\") as f:\n cloudpickle.dump((model_data, self._act_params), f)",
"code_tokens": ["def", "save_act", "(", "self", ",", "path", "=", "None", ")", ":", "if", "path", "is", "None", ":", "path", "=", "os", ".", "path", ".", "join", "(", "logger", ".", "get_dir", "(", ")", ",", "\"model.pkl\"", ")", "with", "tempfile", ".", "TemporaryDirectory", "(", ")", "as", "td", ":", "save_variables", "(", "os", ".", "path", ".", "join", "(", "td", ",", "\"model\"", ")", ")", "arc_name", "=", "os", ".", "path", ".", "join", "(", "td", ",", "\"packed.zip\"", ")", "with", "zipfile", ".", "ZipFile", "(", "arc_name", ",", "'w'", ")", "as", "zipf", ":", "for", "root", ",", "dirs", ",", "files", "in", "os", ".", "walk", "(", "td", ")", ":", "for", "fname", "in", "files", ":", "file_path", "=", "os", ".", "path", ".", "join", "(", "root", ",", "fname", ")", "if", "file_path", "!=", "arc_name", ":", "zipf", ".", "write", "(", "file_path", ",", "os", ".", "path", ".", "relpath", "(", "file_path", ",", "td", ")", ")", "with", "open", "(", "arc_name", ",", "\"rb\"", ")", "as", "f", ":", "model_data", "=", "f", ".", "read", "(", ")", "with", "open", "(", "path", ",", "\"wb\"", ")", "as", "f", ":", "cloudpickle", ".", "dump", "(", "(", "model_data", ",", "self", ".", "_act_params", ")", ",", "f", ")"],
"docstring": "Save model to a pickle located at `path`",
"docstring_tokens": ["Save", "model", "to", "a", "pickle", "located", "at", "path"],
"func_name": "ActWrapper.save_act",
"id": 0,
"language": "python",
"original_string": "def save_act(self, path=None):\n \"\"\"Save model to a pickle located at `path`\"\"\"\n if path is None:\n path = os.path.join(logger.get_dir(), \"model.pkl\")\n\n with tempfile.TemporaryDirectory() as td:\n save_variables(os.path.join(td, \"model\"))\n arc_name = os.path.join(td, \"packed.zip\")\n with zipfile.ZipFile(arc_name, 'w') as zipf:\n for root, dirs, files in os.walk(td):\n for fname in files:\n file_path = os.path.join(root, fname)\n if file_path != arc_name:\n zipf.write(file_path, os.path.relpath(file_path, td))\n with open(arc_name, \"rb\") as f:\n model_data = f.read()\n with open(path, \"wb\") as f:\n cloudpickle.dump((model_data, self._act_params), f)",
"path": "baselines/deepq/deepq.py",
"repo": "openai/baselines",
"sha": "3301089b48c42b87b396e246ea3f56fa4bfc9678",
"url": "https://github.com/openai/baselines/blob/3301089b48c42b87b396e246ea3f56fa4bfc9678/baselines/deepq/deepq.py#L55-L72"
}
```
#### ruby
An example of 'train' looks as follows.
```
{
"code": "def render_body(context, options)\n if options.key?(:partial)\n [render_partial(context, options)]\n else\n StreamingTemplateRenderer.new(@lookup_context).render(context, options)\n end\n end",
"code_tokens": ["def", "render_body", "(", "context", ",", "options", ")", "if", "options", ".", "key?", "(", ":partial", ")", "[", "render_partial", "(", "context", ",", "options", ")", "]", "else", "StreamingTemplateRenderer", ".", "new", "(", "@lookup_context", ")", ".", "render", "(", "context", ",", "options", ")", "end", "end"],
"docstring": "Render but returns a valid Rack body. If fibers are defined, we return\n a streaming body that renders the template piece by piece.\n\n Note that partials are not supported to be rendered with streaming,\n so in such cases, we just wrap them in an array.",
"docstring_tokens": ["Render", "but", "returns", "a", "valid", "Rack", "body", ".", "If", "fibers", "are", "defined", "we", "return", "a", "streaming", "body", "that", "renders", "the", "template", "piece", "by", "piece", "."],
"func_name": "ActionView.Renderer.render_body",
"id": 0,
"language": "ruby",
"original_string": "def render_body(context, options)\n if options.key?(:partial)\n [render_partial(context, options)]\n else\n StreamingTemplateRenderer.new(@lookup_context).render(context, options)\n end\n end",
"path": "actionview/lib/action_view/renderer/renderer.rb",
"repo": "rails/rails",
"sha": "85a8bc644be69908f05740a5886ec19cd3679df5",
"url": "https://github.com/rails/rails/blob/85a8bc644be69908f05740a5886ec19cd3679df5/actionview/lib/action_view/renderer/renderer.rb#L38-L44"
}
```
### Data Fields
In the following, each data field is explained; the fields are identical for every config and across all splits.
#### go, java, javascript, php, python, ruby
| field name | type | description |
|----------------|----------------|-----------------------------------------------------------------------------------|
|id |int32 | Index of the sample |
|repo |string | repo: the owner/repo |
|path |string | path: the full path to the original file |
|func_name |string | func_name: the function or method name |
|original_string |string | original_string: the raw string before tokenization or parsing |
|language |string | language: the programming language name |
|code |string | code/function: the part of the original_string that is code |
|code_tokens |Sequence[string]| code_tokens/function_tokens: tokenized version of code |
|docstring |string | docstring: the top-level comment or docstring, if it exists in the original string|
|docstring_tokens|Sequence[string]| docstring_tokens: tokenized version of docstring |
|sha |string | sha of the file |
|url |string | url of the file |
### Data Splits
| name |train |validation|test |
|----------|-----:|---------:|----:|
|go |167288| 7325| 8122|
|java |164923| 5183|10955|
|javascript| 58025| 3885| 3291|
|php |241241| 12982|14014|
|python |251820| 13914|14918|
|ruby | 24927| 1400| 1261|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Data from CodeSearchNet Challenge dataset.
[More Information Needed]
#### Who are the source language producers?
Software Engineering developers.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
https://github.com/microsoft, https://github.com/madlag
### Licensing Information
Computational Use of Data Agreement (C-UDA) License.
### Citation Information
```
@article{husain2019codesearchnet,
title={Codesearchnet challenge: Evaluating the state of semantic code search},
author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
journal={arXiv preprint arXiv:1909.09436},
year={2019}
}
```
### Contributions
Thanks to @madlag (and partly also @ncoop57) for adding this dataset. |
mrqa-workshop/mrqa | mrqa-workshop | 2024-01-24T10:52:34Z | 623 | 24 | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended|drop",
"source_datasets:extended|hotpot_qa",
"source_datasets:extended|natural_questions",
"source_datasets:extended|race",
"source_datasets:extended|search_qa",
"source_datasets:extended|squad",
"source_datasets:extended|trivia_qa",
"language:en",
"license:unknown",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:1910.09753",
"arxiv:1606.05250",
"arxiv:1611.09830",
"arxiv:1705.03551",
"arxiv:1704.05179",
"arxiv:1809.09600",
"arxiv:1903.00161",
"arxiv:1804.07927",
"arxiv:1704.04683",
"arxiv:1706.04115",
"region:us"
] | [
"question-answering"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|drop
- extended|hotpot_qa
- extended|natural_questions
- extended|race
- extended|search_qa
- extended|squad
- extended|trivia_qa
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: mrqa-2019
pretty_name: MRQA 2019
dataset_info:
config_name: plain_text
features:
- name: subset
dtype: string
- name: context
dtype: string
- name: context_tokens
sequence:
- name: tokens
dtype: string
- name: offsets
dtype: int32
- name: qid
dtype: string
- name: question
dtype: string
- name: question_tokens
sequence:
- name: tokens
dtype: string
- name: offsets
dtype: int32
- name: detected_answers
sequence:
- name: text
dtype: string
- name: char_spans
sequence:
- name: start
dtype: int32
- name: end
dtype: int32
- name: token_spans
sequence:
- name: start
dtype: int32
- name: end
dtype: int32
- name: answers
sequence: string
splits:
- name: train
num_bytes: 4090677713
num_examples: 516819
- name: validation
num_bytes: 484106546
num_examples: 58221
- name: test
num_bytes: 57712097
num_examples: 9633
download_size: 1679161250
dataset_size: 4632496356
configs:
- config_name: plain_text
data_files:
- split: train
path: plain_text/train-*
- split: validation
path: plain_text/validation-*
- split: test
path: plain_text/test-*
default: true
---
# Dataset Card for MRQA 2019
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [MRQA 2019 Shared Task](https://mrqa.github.io/2019/shared.html)
- **Repository:** [MRQA 2019 Github repository](https://github.com/mrqa/MRQA-Shared-Task-2019)
- **Paper:** [MRQA 2019 Shared Task: Evaluating Generalization in Reading Comprehension
](https://arxiv.org/abs/1910.09753)
- **Leaderboard:** [Shared task](https://mrqa.github.io/2019/shared.html)
- **Point of Contact:** [[email protected]]([email protected])
### Dataset Summary
The MRQA 2019 Shared Task focuses on generalization in question answering. An effective question answering system should do more than merely interpolate from the training set to answer test examples drawn from the same distribution: it should also be able to extrapolate to out-of-distribution examples — a significantly harder challenge.
The dataset is a collection of 18 existing QA datasets (carefully selected subsets of them) converted to the same format (the SQuAD format). Among these 18 datasets, six were made available for training, six for development, and the final six for testing. The dataset is released as part of the MRQA 2019 Shared Task.
### Supported Tasks and Leaderboards
From the official repository:
*The format of the task is extractive question answering. Given a question and context passage, systems must find the word or phrase in the document that best answers the question. While this format is somewhat restrictive, it allows us to leverage many existing datasets, and its simplicity helps us focus on out-of-domain generalization, instead of other important but orthogonal challenges.*
*We have adapted several existing datasets from their original formats and settings to conform to our unified extractive setting. Most notably:*
- *We provide only a single, length-limited context.*
- *There are no unanswerable or non-span answer questions.*
- *All questions have at least one accepted answer that is found exactly in the context.*
*A span is judged to be an exact match if it matches the answer string after performing normalization consistent with the SQuAD dataset. Specifically:*
- *The text is uncased.*
- *All punctuation is stripped.*
- *All articles `{a, an, the}` are removed.*
- *All consecutive whitespace markers are compressed to just a single normal space `' '`.*
Answers are evaluated using exact match and token-level F1 metrics. One can refer to the [mrqa_official_eval.py](https://github.com/mrqa/MRQA-Shared-Task-2019/blob/master/mrqa_official_eval.py) for evaluation.
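The normalization described above matches the standard SQuAD answer normalization; a minimal sketch (the official evaluation script linked above remains authoritative):
```python
import re
import string

def normalize_answer(s: str) -> str:
    """Lowercase, strip punctuation, drop articles, and compress whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())
```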
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
An example looks like this:
```
{
'qid': 'f43c83e38d1e424ea00f8ad3c77ec999',
'subset': 'SQuAD',
'context': 'CBS broadcast Super Bowl 50 in the U.S., and charged an average of $5 million for a 30-second commercial during the game. The Super Bowl 50 halftime show was headlined by the British rock group Coldplay with special guest performers Beyoncé and Bruno Mars, who headlined the Super Bowl XLVII and Super Bowl XLVIII halftime shows, respectively. It was the third-most watched U.S. broadcast ever.',
'context_tokens': {
'offsets': [0, 4, 14, 20, 25, 28, 31, 35, 39, 41, 45, 53, 56, 64, 67, 68, 70, 78, 82, 84, 94, 105, 112, 116, 120, 122, 126, 132, 137, 140, 149, 154, 158, 168, 171, 175, 183, 188, 194, 203, 208, 216, 222, 233, 241, 245, 251, 255, 257, 261, 271, 275, 281, 286, 292, 296, 302, 307, 314, 323, 328, 330, 342, 344, 347, 351, 355, 360, 361, 366, 374, 379, 389, 393],
'tokens': ['CBS', 'broadcast', 'Super', 'Bowl', '50', 'in', 'the', 'U.S.', ',', 'and', 'charged', 'an', 'average', 'of', '$', '5', 'million', 'for', 'a', '30-second', 'commercial', 'during', 'the', 'game', '.', 'The', 'Super', 'Bowl', '50', 'halftime', 'show', 'was', 'headlined', 'by', 'the', 'British', 'rock', 'group', 'Coldplay', 'with', 'special', 'guest', 'performers', 'Beyoncé', 'and', 'Bruno', 'Mars', ',', 'who', 'headlined', 'the', 'Super', 'Bowl', 'XLVII', 'and', 'Super', 'Bowl', 'XLVIII', 'halftime', 'shows', ',', 'respectively', '.', 'It', 'was', 'the', 'third', '-', 'most', 'watched', 'U.S.', 'broadcast', 'ever', '.']
},
'question': "Who was the main performer at this year's halftime show?",
'question_tokens': {
'offsets': [0, 4, 8, 12, 17, 27, 30, 35, 39, 42, 51, 55],
'tokens': ['Who', 'was', 'the', 'main', 'performer', 'at', 'this', 'year', "'s", 'halftime', 'show', '?']
},
'detected_answers': {
'char_spans': [
{
'end': [201],
'start': [194]
}, {
'end': [201],
'start': [194]
}, {
'end': [201],
'start': [194]
}
],
'text': ['Coldplay', 'Coldplay', 'Coldplay'],
'token_spans': [
{
'end': [38],
'start': [38]
}, {
'end': [38],
'start': [38]
}, {
'end': [38],
'start': [38]
}
]
},
'answers': ['Coldplay', 'Coldplay', 'Coldplay'],
}
```
### Data Fields
- `subset`: which of the dataset does this examples come from?
- `context`: This is the raw text of the supporting passage. Three special token types have been inserted: `[TLE]` precedes document titles, `[DOC]` denotes document breaks, and `[PAR]` denotes paragraph breaks. The maximum length of the context is 800 tokens.
- `context_tokens`: A tokenized version of the supporting passage, using spaCy. Each token is a tuple of the token string and token character offset. The maximum number of tokens is 800.
- `tokens`: list of tokens.
- `offsets`: list of offsets.
- `qas`: A list of questions for the given context.
- `qid`: A unique identifier for the question. The `qid` is unique across all datasets.
- `question`: The raw text of the question.
- `question_tokens`: A tokenized version of the question. The tokenizer and token format is the same as for the context.
- `tokens`: list of tokens.
- `offsets`: list of offsets.
- `detected_answers`: A list of answer spans for the given question that index into the context. For some datasets these spans have been automatically detected using search heuristics. The same answer may appear multiple times in the text --- each of these occurrences is recorded. For example, if `42` is the answer, the context `"The answer is 42. 42 is the answer."` has two occurrences marked.
- `text`: The raw text of the detected answer.
- `char_spans`: Inclusive (start, end) character spans (indexing into the raw context).
- `start`: start (single element)
- `end`: end (single element)
- `token_spans`: Inclusive (start, end) token spans (indexing into the tokenized context).
- `start`: start (single element)
- `end`: end (single element)
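The character spans index directly into `context`; a minimal sketch of recovering an answer string from the first detected span, based on the field layout described above:
```python
def first_answer(example: dict) -> str:
    """Recover the first detected answer text from its character span."""
    span = example["detected_answers"]["char_spans"][0]
    start, end = span["start"][0], span["end"][0]
    # Spans are inclusive, so extend the slice end by one.
    return example["context"][start : end + 1]

# For the SQuAD example above, this returns "Coldplay".
```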
### Data Splits
**Training data**
| Dataset | Number of Examples |
| :-----: | :------: |
| [SQuAD](https://arxiv.org/abs/1606.05250) | 86,588 |
| [NewsQA](https://arxiv.org/abs/1611.09830) | 74,160 |
| [TriviaQA](https://arxiv.org/abs/1705.03551)| 61,688 |
| [SearchQA](https://arxiv.org/abs/1704.05179)| 117,384 |
| [HotpotQA](https://arxiv.org/abs/1809.09600)| 72,928 |
| [NaturalQuestions](https://ai.google/research/pubs/pub47761)| 104,071 |
**Development data**
This in-domain data may be used to help develop models.
| Dataset | Examples |
| :-----: | :------: |
| [SQuAD](https://arxiv.org/abs/1606.05250) | 10,507 |
| [NewsQA](https://arxiv.org/abs/1611.09830) | 4,212 |
| [TriviaQA](https://arxiv.org/abs/1705.03551)| 7,785|
| [SearchQA](https://arxiv.org/abs/1704.05179)| 16,980 |
| [HotpotQA](https://arxiv.org/abs/1809.09600)| 5,904 |
| [NaturalQuestions](https://ai.google/research/pubs/pub47761)| 12,836 |
**Test data**
The final testing data only contains out-of-domain data.
| Dataset | Examples |
| :-----: | :------: |
| [BioASQ](http://bioasq.org/) | 1,504 |
| [DROP](https://arxiv.org/abs/1903.00161) | 1,503 |
| [DuoRC](https://arxiv.org/abs/1804.07927)| 1,501 |
| [RACE](https://arxiv.org/abs/1704.04683) | 674 |
| [RelationExtraction](https://arxiv.org/abs/1706.04115) | 2,948|
| [TextbookQA](http://ai2-website.s3.amazonaws.com/publications/CVPR17_TQA.pdf)| 1,503 |
From the official repository:
***Note:** As previously mentioned, the out-of-domain dataset have been modified from their original settings to fit the unified MRQA Shared Task paradigm. At a high level, the following two major modifications have been made:*
*1. All QA-context pairs are extractive. That is, the answer is selected from the context and not via, e.g., multiple-choice.*
*2. All contexts are capped at a maximum of `800` tokens. As a result, for longer contexts like Wikipedia articles, we only consider examples where the answer appears in the first `800` tokens.*
*As a result, some splits are harder than the original datasets (e.g., removal of multiple-choice in RACE), while some are easier (e.g., restricted context length in NaturalQuestions --- we use the short answer selection). Thus one should expect different performance ranges if comparing to previous work on these datasets.*
## Dataset Creation
### Curation Rationale
From the official repository:
*Both train and test datasets have the same format described above, but may differ in some of the following ways:*
- *Passage distribution: Test examples may involve passages from different sources (e.g., science, news, novels, medical abstracts, etc) with pronounced syntactic and lexical differences.*
- *Question distribution: Test examples may emphasize different styles of questions (e.g., entity-centric, relational, other tasks reformulated as QA, etc) which may come from different sources (e.g., crowdworkers, domain experts, exam writers, etc.)*
- *Joint distribution: Test examples may vary according to the relationship of the question to the passage (e.g., collected independent vs. dependent of evidence, multi-hop, etc)*
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Unknown
### Citation Information
```
@inproceedings{fisch2019mrqa,
title={{MRQA} 2019 Shared Task: Evaluating Generalization in Reading Comprehension},
author={Adam Fisch and Alon Talmor and Robin Jia and Minjoon Seo and Eunsol Choi and Danqi Chen},
booktitle={Proceedings of 2nd Machine Reading for Reading Comprehension (MRQA) Workshop at EMNLP},
year={2019},
}
```
### Contributions
Thanks to [@jimmycode](https://github.com/jimmycode), [@VictorSanh](https://github.com/VictorSanh) for adding this dataset. |
tdiggelm/climate_fever | tdiggelm | 2024-01-18T14:28:07Z | 568 | 26 | [
"task_categories:text-classification",
"task_categories:text-retrieval",
"task_ids:text-scoring",
"task_ids:fact-checking",
"task_ids:fact-checking-retrieval",
"task_ids:semantic-similarity-scoring",
"task_ids:multi-input-text-classification",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended|wikipedia",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2012.00614",
"region:us"
] | [
"text-classification",
"text-retrieval"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
- expert-generated
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|wikipedia
- original
task_categories:
- text-classification
- text-retrieval
task_ids:
- text-scoring
- fact-checking
- fact-checking-retrieval
- semantic-similarity-scoring
- multi-input-text-classification
paperswithcode_id: climate-fever
pretty_name: ClimateFever
dataset_info:
features:
- name: claim_id
dtype: string
- name: claim
dtype: string
- name: claim_label
dtype:
class_label:
names:
'0': SUPPORTS
'1': REFUTES
'2': NOT_ENOUGH_INFO
'3': DISPUTED
- name: evidences
list:
- name: evidence_id
dtype: string
- name: evidence_label
dtype:
class_label:
names:
'0': SUPPORTS
'1': REFUTES
'2': NOT_ENOUGH_INFO
- name: article
dtype: string
- name: evidence
dtype: string
- name: entropy
dtype: float32
- name: votes
list: string
splits:
- name: test
num_bytes: 2429240
num_examples: 1535
download_size: 868947
dataset_size: 2429240
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# Dataset Card for ClimateFever
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CLIMATE-FEVER homepage](http://climatefever.ai)
- **Repository:** [CLIMATE-FEVER repository](https://github.com/tdiggelm/climate-fever-dataset)
- **Paper:** [CLIMATE-FEVER: A Dataset for Verification of Real-World Climate Claims](https://arxiv.org/abs/2012.00614)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Thomas Diggelmann](mailto:[email protected])
### Dataset Summary
A dataset adopting the FEVER methodology that consists of 1,535 real-world claims regarding climate change collected from the Internet. Each claim is accompanied by five manually annotated evidence sentences retrieved from the English Wikipedia that support, refute or do not give enough information to validate the claim, totalling 7,675 claim-evidence pairs. The dataset features challenging claims that relate multiple facets, and disputed cases where both supporting and refuting evidence are present.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The text in the dataset is in English, as found in real-world claims about climate change on the Internet. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
```
{
"claim_id": "0",
"claim": "Global warming is driving polar bears toward extinction",
"claim_label": 0, # "SUPPORTS"
"evidences": [
{
"evidence_id": "Extinction risk from global warming:170",
"evidence_label": 2, # "NOT_ENOUGH_INFO"
"article": "Extinction risk from global warming",
"evidence": "\"Recent Research Shows Human Activity Driving Earth Towards Global Extinction Event\".",
"entropy": 0.6931471805599453,
"votes": [
"SUPPORTS",
"NOT_ENOUGH_INFO",
null,
null,
null
]
},
{
"evidence_id": "Global warming:14",
"evidence_label": 0, # "SUPPORTS"
"article": "Global warming",
"evidence": "Environmental impacts include the extinction or relocation of many species as their ecosystems change, most immediately the environments of coral reefs, mountains, and the Arctic.",
"entropy": 0.0,
"votes": [
"SUPPORTS",
"SUPPORTS",
null,
null,
null
]
},
{
"evidence_id": "Global warming:178",
"evidence_label": 2, # "NOT_ENOUGH_INFO"
"article": "Global warming",
"evidence": "Rising temperatures push bees to their physiological limits, and could cause the extinction of bee populations.",
"entropy": 0.6931471805599453,
"votes": [
"SUPPORTS",
"NOT_ENOUGH_INFO",
null,
null,
null
]
},
{
"evidence_id": "Habitat destruction:61",
"evidence_label": 0, # "SUPPORTS"
"article": "Habitat destruction",
"evidence": "Rising global temperatures, caused by the greenhouse effect, contribute to habitat destruction, endangering various species, such as the polar bear.",
"entropy": 0.0,
"votes": [
"SUPPORTS",
"SUPPORTS",
null,
null,
null
]
},
{
"evidence_id": "Polar bear:1328",
"evidence_label": 2, # "NOT_ENOUGH_INFO"
"article": "Polar bear",
"evidence": "\"Bear hunting caught in global warming debate\".",
"entropy": 0.6931471805599453,
"votes": [
"SUPPORTS",
"NOT_ENOUGH_INFO",
null,
null,
null
]
}
]
}
```
### Data Fields
- `claim_id`: a `string` feature, unique claim identifier.
- `claim`: a `string` feature, claim text.
- `claim_label`: an `int` feature, the overall label assigned to the claim (based on an evidence majority vote). The labels correspond to 0: "supports", 1: "refutes", 2: "not enough info" and 3: "disputed".
- `evidences`: a list of evidences with fields:
- `evidence_id`: a `string` feature, unique evidence identifier.
  - `evidence_label`: an `int` feature, micro-verdict label. The label corresponds to 0: "supports", 1: "refutes" and 2: "not enough info".
- `article`: a `string` feature, title of source article (Wikipedia page).
- `evidence`: a `string` feature, evidence sentence.
- `entropy`: a `float32` feature, entropy reflecting uncertainty of `evidence_label`.
- `votes`: a `list` of `string` features, corresponding to individual votes.
### Data Splits
This benchmark dataset currently consists of a single data split, `test`, comprising 1,535 claims or 7,675 claim-evidence pairs.
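As a quick illustration of how these fields fit together, the minimal sketch below loads the `test` split with the Hugging Face `datasets` library and maps the integer labels back to their names. The `climate_fever` identifier is an assumption; adjust it if the dataset is hosted under a different name.
```python
from datasets import load_dataset

# Assumption: the dataset is hosted on the Hub as "climate_fever";
# adjust the identifier if it differs.
dataset = load_dataset("climate_fever", split="test")

claim_labels = ["SUPPORTS", "REFUTES", "NOT_ENOUGH_INFO", "DISPUTED"]
evidence_labels = ["SUPPORTS", "REFUTES", "NOT_ENOUGH_INFO"]

example = dataset[0]
print(example["claim"], "->", claim_labels[example["claim_label"]])
for ev in example["evidences"]:
    # Each evidence dict carries its source article and a micro-verdict label.
    print(" ", ev["article"], "->", evidence_labels[ev["evidence_label"]])
```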
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```bibtex
@misc{diggelmann2020climatefever,
title={CLIMATE-FEVER: A Dataset for Verification of Real-World Climate Claims},
author={Thomas Diggelmann and Jordan Boyd-Graber and Jannis Bulian and Massimiliano Ciaramita and Markus Leippold},
year={2020},
eprint={2012.00614},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@tdiggelm](https://github.com/tdiggelm) for adding this dataset. |
shiyue/chr_en | shiyue | 2024-01-18T14:19:36Z | 411 | 4 | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_categories:translation",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:expert-generated",
"annotations_creators:found",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"multilinguality:multilingual",
"multilinguality:translation",
"source_datasets:original",
"language:chr",
"language:en",
"license:other",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2010.04791",
"region:us"
] | [
"fill-mask",
"text-generation",
"translation"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
- found
- no-annotation
language_creators:
- found
language:
- chr
- en
license:
- other
multilinguality:
- monolingual
- multilingual
- translation
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1K<n<10K
source_datasets:
- original
task_categories:
- fill-mask
- text-generation
- translation
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: chren
config_names:
- monolingual
- monolingual_raw
- parallel
- parallel_raw
dataset_info:
- config_name: monolingual
features:
- name: sentence
dtype: string
splits:
- name: chr
num_bytes: 882824
num_examples: 5210
- name: en5000
num_bytes: 615275
num_examples: 5000
- name: en10000
num_bytes: 1211605
num_examples: 10000
- name: en20000
num_bytes: 2432298
num_examples: 20000
- name: en50000
num_bytes: 6065580
num_examples: 49999
- name: en100000
num_bytes: 12130164
num_examples: 100000
download_size: 16967664
dataset_size: 23337746
- config_name: monolingual_raw
features:
- name: text_sentence
dtype: string
- name: text_title
dtype: string
- name: speaker
dtype: string
- name: date
dtype: int32
- name: type
dtype: string
- name: dialect
dtype: string
splits:
- name: full
num_bytes: 1210056
num_examples: 5210
download_size: 410646
dataset_size: 1210056
- config_name: parallel
features:
- name: sentence_pair
dtype:
translation:
languages:
- en
- chr
splits:
- name: train
num_bytes: 3089562
num_examples: 11639
- name: dev
num_bytes: 260401
num_examples: 1000
- name: out_dev
num_bytes: 78126
num_examples: 256
- name: test
num_bytes: 264595
num_examples: 1000
- name: out_test
num_bytes: 80959
num_examples: 256
download_size: 2143266
dataset_size: 3773643
- config_name: parallel_raw
features:
- name: line_number
dtype: string
- name: sentence_pair
dtype:
translation:
languages:
- en
- chr
- name: text_title
dtype: string
- name: speaker
dtype: string
- name: date
dtype: int32
- name: type
dtype: string
- name: dialect
dtype: string
splits:
- name: full
num_bytes: 5010734
num_examples: 14151
download_size: 2018726
dataset_size: 5010734
configs:
- config_name: monolingual
data_files:
- split: chr
path: monolingual/chr-*
- split: en5000
path: monolingual/en5000-*
- split: en10000
path: monolingual/en10000-*
- split: en20000
path: monolingual/en20000-*
- split: en50000
path: monolingual/en50000-*
- split: en100000
path: monolingual/en100000-*
- config_name: monolingual_raw
data_files:
- split: full
path: monolingual_raw/full-*
- config_name: parallel
data_files:
- split: train
path: parallel/train-*
- split: dev
path: parallel/dev-*
- split: out_dev
path: parallel/out_dev-*
- split: test
path: parallel/test-*
- split: out_test
path: parallel/out_test-*
default: true
- config_name: parallel_raw
data_files:
- split: full
path: parallel_raw/full-*
---
# Dataset Card for ChrEn
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Github repository for ChrEn](https://github.com/ZhangShiyue/ChrEn)
- **Paper:** [ChrEn: Cherokee-English Machine Translation for Endangered Language Revitalization](https://arxiv.org/abs/2010.04791)
- **Point of Contact:** [[email protected]]([email protected])
### Dataset Summary
ChrEn is a Cherokee-English parallel dataset to facilitate machine translation research between Cherokee and English.
ChrEn is extremely low-resource, containing 14k sentence pairs in total, split in ways that facilitate both in-domain and out-of-domain evaluation.
ChrEn also contains 5k sentences of Cherokee monolingual data to enable semi-supervised learning.
### Supported Tasks and Leaderboards
The dataset is intended to be used for `machine-translation` between English (`en`) and Cherokee (`chr`).
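A minimal loading sketch, assuming the dataset is available on the Hub under the `chr_en` identifier with the configurations listed in the YAML header; the `parallel` configuration yields `translation`-typed sentence pairs keyed by language code.
```python
from datasets import load_dataset

# Assumption: the dataset loads under the "chr_en" identifier;
# adjust the name if it is hosted elsewhere.
parallel = load_dataset("chr_en", "parallel")

# A Translation feature is returned as a dict keyed by language code.
pair = parallel["train"][0]["sentence_pair"]
print("chr:", pair["chr"])
print("en: ", pair["en"])
```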
### Languages
The dataset contains English (`en`) and Cherokee (`chr`) text. The data encompasses both existing dialects of Cherokee: the Overhill dialect, mostly spoken in Oklahoma (OK), and the Middle dialect, mostly used in North Carolina (NC).
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Many of the source texts were translations of English materials, which means that the Cherokee structures may not be 100% natural in terms of what a speaker might spontaneously produce. Each text was translated by people who speak Cherokee as their first language, which means there is a high probability of grammaticality. These data were originally available in PDF form. We applied Optical Character Recognition (OCR) via the Tesseract OCR engine to extract the Cherokee and English text.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The sentences were manually aligned by Dr. Benjamin Frey, a proficient second-language speaker of Cherokee, who also fixed the errors introduced by OCR. This process is time-consuming and took several months.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was gathered and annotated by Shiyue Zhang, Benjamin Frey, and Mohit Bansal at UNC Chapel Hill.
### Licensing Information
The copyright of the data belongs to original book/article authors or translators (hence, used for research purpose; and please contact Dr. Benjamin Frey for other copyright questions).
### Citation Information
```
@inproceedings{zhang2020chren,
title={ChrEn: Cherokee-English Machine Translation for Endangered Language Revitalization},
author={Zhang, Shiyue and Frey, Benjamin and Bansal, Mohit},
booktitle={EMNLP2020},
year={2020}
}
```
### Contributions
Thanks to [@yjernite](https://github.com/yjernite), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
defunct-datasets/the_pile_books3 | defunct-datasets | 2024-01-18T11:17:08Z | 131 | 148 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"arxiv:2101.00027",
"region:us"
] | [
"text-generation",
"fill-mask"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: Books3
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
viewer: false
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
config_name: plain_text
splits:
- name: train
num_bytes: 108392037000
num_examples: 196639
download_size: 39516981435
dataset_size: 108392037000
---
# Dataset Card for the_pile_books3
## Table of Contents
- [Dataset Card for the_pile_books3](#dataset-card-for-the_pile_books3)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [GitHub](https://github.com/soskek/bookcorpus/issues/27#issuecomment-716104208)
- **Repository:** [Needs More Information]
- **Paper:** [arXiv](https://arxiv.org/abs/2101.00027)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
<p><b>Defunct:</b> Dataset "the_pile_books3" is defunct and no longer accessible due to reported copyright infringement.</p>
</div>
This dataset is Shawn Presser's work and is part of EleutherAI's The Pile dataset.
This dataset contains all of Bibliotik in plain .txt form, i.e. 197,000 books processed in exactly the same way as bookcorpusopen (a.k.a. books1). It seems to be similar to OpenAI's mysterious "books2" dataset referenced in their papers. Unfortunately OpenAI will not give details, so we know very little about any differences. People suspect it's "all of libgen", but that is purely conjecture.
|               |           |
|---------------|-----------|
| download_size | 36.8 GiB  |
| dataset_size  | 100.9 GiB |
### Supported Tasks and Leaderboards
This dataset is used for Language Modeling.
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
```
{'title': '07 LEGO Ninjago - The Search For Zane (Scholastic) - Kate Howard (retail)'
'text': '\n\nTITLE PAGE\n\nFROM THE JOURNAL OF SENSEI GARMADON\n\nCHAPTER 1\n\nCHAPTER 2\n\nCHAPTER 3\n\nCHAPTER 4\n\nCHAPTER 5\n\nCHAPTER 6\n\nCHAPTER 7\n\nCHAPTER 8\n\nCHAPTER 9\n\nCOPYRIGHT\n\nThroughout Ninjago", five ninja are well-known for their speed, strength, and of course the elemental powers that help them protect our world from evil. But there are others who possess some of the same powers as the ninja. Others who may not always use their powers for good.\n\nBefore now, the ninja believed they were special. They di.......'}
```
### Data Fields
- `title`: title of the book
- `text`: text content of the book
### Data Splits
|split|num examples|
|-----|------------|
|train|196640|
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
MIT
### Citation Information
```
@article{pile,
title={The {P}ile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and Presser, Shawn and Leahy, Connor},
journal={arXiv preprint arXiv:2101.00027},
year={2020}
}
```
### Contributions
Thanks to [@shawwn](https://github.com/shawwn) for creating this dataset.
Thanks to [@richarddwang](https://github.com/richarddwang) for adding this dataset. |
allenai/social_bias_frames | allenai | 2024-01-18T11:16:03Z | 287 | 20 | [
"task_categories:text2text-generation",
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"region:us",
"explanation-generation"
] | [
"text2text-generation",
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
pretty_name: Social Bias Frames
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text2text-generation
- text-classification
task_ids:
- hate-speech-detection
paperswithcode_id: null
tags:
- explanation-generation
dataset_info:
features:
- name: whoTarget
dtype: string
- name: intentYN
dtype: string
- name: sexYN
dtype: string
- name: sexReason
dtype: string
- name: offensiveYN
dtype: string
- name: annotatorGender
dtype: string
- name: annotatorMinority
dtype: string
- name: sexPhrase
dtype: string
- name: speakerMinorityYN
dtype: string
- name: WorkerId
dtype: string
- name: HITId
dtype: string
- name: annotatorPolitics
dtype: string
- name: annotatorRace
dtype: string
- name: annotatorAge
dtype: string
- name: post
dtype: string
- name: targetMinority
dtype: string
- name: targetCategory
dtype: string
- name: targetStereotype
dtype: string
- name: dataSource
dtype: string
splits:
- name: test
num_bytes: 5371665
num_examples: 17501
- name: validation
num_bytes: 5096009
num_examples: 16738
- name: train
num_bytes: 34006886
num_examples: 112900
download_size: 9464583
dataset_size: 44474560
---
# Dataset Card for "social_bias_frames"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://homes.cs.washington.edu/~msap/social-bias-frames/](https://homes.cs.washington.edu/~msap/social-bias-frames/)
- **Repository:** [https://homes.cs.washington.edu/~msap/social-bias-frames/](https://homes.cs.washington.edu/~msap/social-bias-frames/)
- **Paper:** [Social Bias Frames: Reasoning about Social and Power Implications of Language](https://www.aclweb.org/anthology/2020.acl-main.486.pdf)
- **Leaderboard:**
- **Point of Contact:** [Maarten Sap](mailto:[email protected])
- **Size of downloaded dataset files:** 6.32 MB
- **Size of the generated dataset:** 44.47 MB
- **Total amount of disk used:** 50.80 MB
### Dataset Summary
Warning: this document and dataset contain content that may be offensive or upsetting.
Social Bias Frames is a new way of representing the biases and offensiveness that are implied in language. For example, these frames are meant to distill the implication that "women (candidates) are less qualified" behind the statement "we shouldn’t lower our standards to hire more women." The Social Bias Inference Corpus (SBIC) supports large-scale learning and evaluation of social implications with over 150k structured annotations of social media posts, spanning over 34k implications about a thousand demographic groups.
### Supported Tasks and Leaderboards
This dataset supports both classification and generation. Sap et al. developed several models using the SBIC. They report an F1 score of 78.8 in predicting whether the posts in the test set were offensive, an F1 score of 78.6 in predicting whether the posts were intended to be offensive, an F1 score of 80.7 in predicting whether the posts were lewd, and an F1 score of 69.9 in predicting whether the posts were targeting a specific group.
Another of Sap et al.’s models performed better in the generation task. They report a BLEU score of 77.9, a ROUGE-L score of 68.7, and a WMD score of 0.74 in generating a description of the targeted group given a post, as well as a BLEU score of 52.6, a ROUGE-L score of 44.9, and a WMD score of 2.79 in generating a description of the implied offensive statement given a post. See the paper for further details.
### Languages
The language in SBIC is predominantly white-aligned English (78%, using a lexical dialect detector, [Blodgett et al., 2016](https://www.aclweb.org/anthology/D16-1120)). The curators find less than 10% of posts in SBIC are detected to have the AAE dialect category. The BCP-47 language tag is, presumably, en-US.
## Dataset Structure
### Data Instances
Each instance contains a post that may contain an offensive statement and annotated information concerning the nature of the offensive implication as well as the demographics of the annotator and origin of the post. See the [Social Bias Frames dataset viewer](https://huggingface.co/datasets/viewer/?dataset=social_bias_frames) to explore more examples.
#### default
- **Size of downloaded dataset files:** 6.32 MB
- **Size of the generated dataset:** 44.47 MB
- **Total amount of disk used:** 50.80 MB
An example of 'validation' looks as follows.
```
{
"HITId": "3Z8UJEJOCZ74QHXOTFP0JOOGD7039J",
"WorkerId": "7162172225567546910",
"annotatorAge": "27.0",
"annotatorGender": "woman",
"annotatorMinority": "women",
"annotatorPolitics": "liberal",
"annotatorRace": "white",
"dataSource": "t/davidson",
"intentYN": "1.0",
"offensiveYN": "1.0",
"post": "Every half-breed cop in America is trying to rape white women.",
"sexPhrase": "rape white women",
"sexReason": "rape",
"sexYN": "1.0",
"speakerMinorityYN": "0.0",
"targetCategory": "race",
"targetMinority": "mixed folks",
"targetStereotype": "mixed folks are rapists.",
"whoTarget": "1.0"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- _whoTarget_: a string, ‘0.0’ if the target is a group, ‘1.0’ if the target is an individual, and blank if the post is not offensive
- _intentYN_: a string indicating if the intent behind the statement was to offend. This is a categorical variable with four possible answers, ‘1.0’ if yes, ‘0.66’ if probably, ‘0.33’ if probably not, and ‘0.0’ if no.
- _sexYN_: a string indicating whether the post contains a sexual or lewd reference. This is a categorical variable with three possible answers, ‘1.0’ if yes, ‘0.5’ if maybe, ‘0.0’ if no.
- _sexReason_: a string containing a free text explanation of what is sexual if indicated so, blank otherwise
- _offensiveYN_: a string indicating if the post could be offensive to anyone. This is a categorical variable with three possible answers, ‘1.0’ if yes, ‘0.5’ if maybe, ‘0.0’ if no.
- _annotatorGender_: a string indicating the gender of the MTurk worker
- _annotatorMinority_: a string indicating whether the MTurk worker identifies as a minority
- _sexPhrase_: a string indicating which part of the post references something sexual, blank otherwise
- _speakerMinorityYN_: a string indicating whether the speaker was part of the same minority group that's being targeted. This is a categorical variable with three possible answers, ‘1.0’ if yes, ‘0.5’ if maybe, ‘0.0’ if no.
- _WorkerId_: a string hashed version of the MTurk workerId
- _HITId_: a string id that uniquely identifies each post
- _annotatorPolitics_: a string indicating the political leaning of the MTurk worker
- _annotatorRace_: a string indicating the race of the MTurk worker
- _annotatorAge_: a string indicating the age of the MTurk worker
- _post_: a string containing the text of the post that was annotated
- _targetMinority_: a string indicating the demographic group targeted
- _targetCategory_: a string indicating the high-level category of the demographic group(s) targeted
- _targetStereotype_: a string containing the implied statement
- _dataSource_: a string indicating the source of the post (`t/...`: means Twitter, `r/...`: means a subreddit)
### Data Splits
To ensure that no post appeared in multiple splits, the curators defined a training instance as the post and its three sets of annotations. They then split the dataset into train, validation, and test sets (75%/12.5%/12.5%).
| name |train |validation|test |
|-------|-----:|---------:|----:|
|default|112900| 16738|17501|
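For a quick sanity check of the label distribution, a hedged sketch is shown below that counts the `offensiveYN` values on the validation split; note that the annotation fields are stored as strings, not floats.
```python
from collections import Counter

from datasets import load_dataset

# Assumption: the dataset loads under the "social_bias_frames" identifier.
dataset = load_dataset("social_bias_frames", split="validation")

# offensiveYN is a string: "1.0" (yes), "0.5" (maybe), "0.0" (no), or ""
# when the annotator left the question blank.
counts = Counter(example["offensiveYN"] for example in dataset)
print(counts.most_common())
```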
## Dataset Creation
### Curation Rationale
The main aim for this dataset is to cover a wide variety of social biases that are implied in text, both subtle and overt, and make the biases representative of real-world discrimination that people experience [RWJF 2017](https://web.archive.org/web/20200620105955/https://www.rwjf.org/en/library/research/2017/10/discrimination-in-america--experiences-and-views.html). The curators also included some innocuous statements to balance out the biased, offensive, or harmful content.
### Source Data
The curators included online posts from the following sources sometime between 2014-2019:
- r/darkJokes, r/meanJokes, r/offensiveJokes
- Reddit microaggressions ([Breitfeller et al., 2019](https://www.aclweb.org/anthology/D19-1176/))
- Toxic language detection Twitter corpora ([Waseem & Hovy, 2016](https://www.aclweb.org/anthology/N16-2013/); [Davidson et al., 2017](https://www.aaai.org/ocs/index.php/ICWSM/ICWSM17/paper/viewPaper/15665); [Founa et al., 2018](https://www.aaai.org/ocs/index.php/ICWSM/ICWSM18/paper/viewPaper/17909))
- Data scraped from hate sites (Gab, Stormfront, r/incels, r/mensrights)
#### Initial Data Collection and Normalization
The curators wanted posts to be as self-contained as possible, so they applied some filtering to prevent posts from being highly context-dependent. For Twitter data, they filtered out @-replies, retweets, and links, and subsampled posts such that there is a smaller correlation between AAE and offensiveness (to avoid racial bias; [Sap et al., 2019](https://www.aclweb.org/anthology/P19-1163/)). For Reddit, Gab, and Stormfront, they only selected posts that were one sentence long, did not contain links, and were between 10 and 80 words. Furthermore, for Reddit, they automatically removed posts that target automated moderation.
#### Who are the source language producers?
Due to the nature of this corpus, there is no way to know who the speakers are. But, the speakers of the Reddit, Gab, and Stormfront posts are likely white men (see [Gender by subreddit](http://bburky.com/subredditgenderratios/), [Gab users](https://en.wikipedia.org/wiki/Gab_(social_network)#cite_note-insidetheright-22), [Stormfront description](https://en.wikipedia.org/wiki/Stormfront_(website))).
### Annotations
#### Annotation process
For each post, Amazon Mechanical Turk workers indicate whether the post is offensive, whether the intent was to offend, and whether it contains lewd or sexual content. Only if annotators indicate potential offensiveness do they answer the group implication question. If the post targets or references a group or demographic, workers select or write which one(s); per selected group, they then write two to four stereotypes. Finally, workers are asked whether they think the speaker is part of one of the minority groups referenced by the post. The curators collected three annotations per post, and restricted the worker pool to the U.S. and Canada. The annotations in SBIC showed 82.4% pairwise agreement and Krippendorf’s α=0.45 on average.
Recent work has highlighted various negative side effects caused by annotating potentially abusive or harmful content (e.g., acute stress; Roberts, 2016). The curators mitigated these by limiting the number of posts that one worker could annotate in one day, paying workers above minimum wage ($7–12), and providing crisis management resources to the annotators.
#### Who are the annotators?
The annotators are Amazon Mechanical Turk workers aged 36±10 years. They were 55% women, 42% men, and <1% non-binary; 82% identified as White, 4% Asian, 4% Hispanic, and 4% Black. Information on their first language(s) and professional backgrounds was not collected.
### Personal and Sensitive Information
Usernames are not included with the data, but the site where the post was collected is, so the user could potentially be recovered.
## Considerations for Using the Data
### Social Impact of Dataset
The curators recognize that studying Social Bias Frames necessarily requires confronting online content that may be offensive or disturbing but argue that deliberate avoidance does not eliminate such problems. By assessing social media content through the lens of Social Bias Frames, automatic flagging or AI-augmented writing interfaces may be analyzed for potentially harmful online content with detailed explanations for users or moderators to consider and verify. In addition, the collective analysis over large corpora can also be insightful for educating people on reducing unconscious biases in their language by encouraging empathy towards a targeted group.
### Discussion of Biases
Because this is a corpus of social biases, a lot of posts contain implied or overt biases against the following groups (in decreasing order of prevalence):
- gender/sexuality
- race/ethnicity
- religion/culture
- social/political
- disability body/age
- victims
The curators warn that technology trained on this dataset could have side effects such as censorship and dialect-based racial bias.
### Other Known Limitations
Because the curators found that the dataset is predominantly written in White-aligned English, they caution researchers to consider the potential for dialect or identity-based biases in labelling ([Davidson et al.,2019](https://www.aclweb.org/anthology/W19-3504.pdf); [Sap et al., 2019a](https://www.aclweb.org/anthology/P19-1163.pdf)) before deploying technology based on SBIC.
## Additional Information
### Dataset Curators
This dataset was developed by Maarten Sap of the Paul G. Allen School of Computer Science & Engineering at the University of Washington, Saadia Gabriel, Lianhui Qin, Noah A Smith, and Yejin Choi of the Paul G. Allen School of Computer Science & Engineering and the Allen Institute for Artificial Intelligence, and Dan Jurafsky of the Linguistics & Computer Science Departments of Stanford University.
### Licensing Information
The SBIC is licensed under the [Creative Commons 4.0 License](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@inproceedings{sap-etal-2020-social,
title = "Social Bias Frames: Reasoning about Social and Power Implications of Language",
author = "Sap, Maarten and
Gabriel, Saadia and
Qin, Lianhui and
Jurafsky, Dan and
Smith, Noah A. and
Choi, Yejin",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.486",
doi = "10.18653/v1/2020.acl-main.486",
pages = "5477--5490",
abstract = "Warning: this paper contains content that may be offensive or upsetting. Language has the power to reinforce stereotypes and project social biases onto others. At the core of the challenge is that it is rarely what is stated explicitly, but rather the implied meanings, that frame people{'}s judgments about others. For example, given a statement that {``}we shouldn{'}t lower our standards to hire more women,{''} most listeners will infer the implicature intended by the speaker - that {``}women (candidates) are less qualified.{''} Most semantic formalisms, to date, do not capture such pragmatic implications in which people express social biases and power differentials in language. We introduce Social Bias Frames, a new conceptual formalism that aims to model the pragmatic frames in which people project social biases and stereotypes onto others. In addition, we introduce the Social Bias Inference Corpus to support large-scale modelling and evaluation with 150k structured annotations of social media posts, covering over 34k implications about a thousand demographic groups. We then establish baseline approaches that learn to recover Social Bias Frames from unstructured text. We find that while state-of-the-art neural models are effective at high-level categorization of whether a given statement projects unwanted social bias (80{\%} F1), they are not effective at spelling out more detailed explanations in terms of Social Bias Frames. Our study motivates future work that combines structured pragmatic inference with commonsense reasoning on social implications.",
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@otakumesi](https://github.com/otakumesi), [@mariamabarham](https://github.com/mariamabarham), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
eusip/silicone | eusip | 2024-01-18T11:15:53Z | 224 | 14 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text-classification",
"task_ids:dialogue-modeling",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"task_ids:sentiment-classification",
"task_ids:text-scoring",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"size_categories:100K<n<1M",
"arxiv:2009.11152",
"region:us",
"emotion-classification",
"dialogue-act-classification"
] | [
"text-generation",
"fill-mask",
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
- text-classification
task_ids:
- dialogue-modeling
- language-modeling
- masked-language-modeling
- sentiment-classification
- text-scoring
pretty_name: SILICONE Benchmark
tags:
- emotion-classification
- dialogue-act-classification
dataset_info:
- config_name: dyda_da
features:
- name: Utterance
dtype: string
- name: Dialogue_Act
dtype: string
- name: Dialogue_ID
dtype: string
- name: Label
dtype:
class_label:
names:
'0': commissive
'1': directive
'2': inform
'3': question
- name: Idx
dtype: int32
splits:
- name: train
num_bytes: 8346638
num_examples: 87170
- name: validation
num_bytes: 764277
num_examples: 8069
- name: test
num_bytes: 740226
num_examples: 7740
download_size: 8874925
dataset_size: 9851141
- config_name: dyda_e
features:
- name: Utterance
dtype: string
- name: Emotion
dtype: string
- name: Dialogue_ID
dtype: string
- name: Label
dtype:
class_label:
names:
'0': anger
'1': disgust
'2': fear
'3': happiness
'4': no emotion
'5': sadness
'6': surprise
- name: Idx
dtype: int32
splits:
- name: train
num_bytes: 8547111
num_examples: 87170
- name: validation
num_bytes: 781445
num_examples: 8069
- name: test
num_bytes: 757670
num_examples: 7740
download_size: 8874925
dataset_size: 10086226
- config_name: iemocap
features:
- name: Dialogue_ID
dtype: string
- name: Utterance_ID
dtype: string
- name: Utterance
dtype: string
- name: Emotion
dtype: string
- name: Label
dtype:
class_label:
names:
'0': ang
'1': dis
'2': exc
'3': fea
'4': fru
'5': hap
'6': neu
'7': oth
'8': sad
'9': sur
'10': xxx
- name: Idx
dtype: int32
splits:
- name: train
num_bytes: 908180
num_examples: 7213
- name: validation
num_bytes: 100969
num_examples: 805
- name: test
num_bytes: 254248
num_examples: 2021
download_size: 1158778
dataset_size: 1263397
- config_name: maptask
features:
- name: Speaker
dtype: string
- name: Utterance
dtype: string
- name: Dialogue_Act
dtype: string
- name: Label
dtype:
class_label:
names:
'0': acknowledge
'1': align
'2': check
'3': clarify
'4': explain
'5': instruct
'6': query_w
'7': query_yn
'8': ready
'9': reply_n
'10': reply_w
'11': reply_y
- name: Idx
dtype: int32
splits:
- name: train
num_bytes: 1260413
num_examples: 20905
- name: validation
num_bytes: 178184
num_examples: 2963
- name: test
num_bytes: 171806
num_examples: 2894
download_size: 1048357
dataset_size: 1610403
- config_name: meld_e
features:
- name: Utterance
dtype: string
- name: Speaker
dtype: string
- name: Emotion
dtype: string
- name: Dialogue_ID
dtype: string
- name: Utterance_ID
dtype: string
- name: Label
dtype:
class_label:
names:
'0': anger
'1': disgust
'2': fear
'3': joy
'4': neutral
'5': sadness
'6': surprise
- name: Idx
dtype: int32
splits:
- name: train
num_bytes: 916337
num_examples: 9989
- name: validation
num_bytes: 100234
num_examples: 1109
- name: test
num_bytes: 242352
num_examples: 2610
download_size: 1553014
dataset_size: 1258923
- config_name: meld_s
features:
- name: Utterance
dtype: string
- name: Speaker
dtype: string
- name: Sentiment
dtype: string
- name: Dialogue_ID
dtype: string
- name: Utterance_ID
dtype: string
- name: Label
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
- name: Idx
dtype: int32
splits:
- name: train
num_bytes: 930405
num_examples: 9989
- name: validation
num_bytes: 101801
num_examples: 1109
- name: test
num_bytes: 245873
num_examples: 2610
download_size: 1553014
dataset_size: 1278079
- config_name: mrda
features:
- name: Utterance_ID
dtype: string
- name: Dialogue_Act
dtype: string
- name: Channel_ID
dtype: string
- name: Speaker
dtype: string
- name: Dialogue_ID
dtype: string
- name: Utterance
dtype: string
- name: Label
dtype:
class_label:
names:
'0': s
'1': d
'2': b
'3': f
'4': q
- name: Idx
dtype: int32
splits:
- name: train
num_bytes: 9998857
num_examples: 83943
- name: validation
num_bytes: 1143286
num_examples: 9815
- name: test
num_bytes: 1807462
num_examples: 15470
download_size: 10305848
dataset_size: 12949605
- config_name: oasis
features:
- name: Speaker
dtype: string
- name: Utterance
dtype: string
- name: Dialogue_Act
dtype: string
- name: Label
dtype:
class_label:
names:
'0': accept
'1': ackn
'2': answ
'3': answElab
'4': appreciate
'5': backch
'6': bye
'7': complete
'8': confirm
'9': correct
'10': direct
'11': directElab
'12': echo
'13': exclaim
'14': expressOpinion
'15': expressPossibility
'16': expressRegret
'17': expressWish
'18': greet
'19': hold
'20': identifySelf
'21': inform
'22': informCont
'23': informDisc
'24': informIntent
'25': init
'26': negate
'27': offer
'28': pardon
'29': raiseIssue
'30': refer
'31': refuse
'32': reqDirect
'33': reqInfo
'34': reqModal
'35': selfTalk
'36': suggest
'37': thank
'38': informIntent-hold
'39': correctSelf
'40': expressRegret-inform
'41': thank-identifySelf
- name: Idx
dtype: int32
splits:
- name: train
num_bytes: 887018
num_examples: 12076
- name: validation
num_bytes: 112185
num_examples: 1513
- name: test
num_bytes: 119254
num_examples: 1478
download_size: 802002
dataset_size: 1118457
- config_name: sem
features:
- name: Utterance
dtype: string
- name: NbPairInSession
dtype: string
- name: Dialogue_ID
dtype: string
- name: SpeechTurn
dtype: string
- name: Speaker
dtype: string
- name: Sentiment
dtype: string
- name: Label
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Idx
dtype: int32
splits:
- name: train
num_bytes: 496168
num_examples: 4264
- name: validation
num_bytes: 57896
num_examples: 485
- name: test
num_bytes: 100072
num_examples: 878
download_size: 513689
dataset_size: 654136
- config_name: swda
features:
- name: Utterance
dtype: string
- name: Dialogue_Act
dtype: string
- name: From_Caller
dtype: string
- name: To_Caller
dtype: string
- name: Topic
dtype: string
- name: Dialogue_ID
dtype: string
- name: Conv_ID
dtype: string
- name: Label
dtype:
class_label:
names:
'0': sd
'1': b
'2': sv
'3': '%'
'4': aa
'5': ba
'6': fc
'7': qw
'8': nn
'9': bk
'10': h
'11': qy^d
'12': bh
'13': ^q
'14': bf
'15': fo_o_fw_"_by_bc
'16': fo_o_fw_by_bc_"
'17': na
'18': ad
'19': ^2
'20': b^m
'21': qo
'22': qh
'23': ^h
'24': ar
'25': ng
'26': br
'27': 'no'
'28': fp
'29': qrr
'30': arp_nd
'31': t3
'32': oo_co_cc
'33': aap_am
'34': t1
'35': bd
'36': ^g
'37': qw^d
'38': fa
'39': ft
'40': +
'41': x
'42': ny
'43': sv_fx
'44': qy_qr
'45': ba_fe
- name: Idx
dtype: int32
splits:
- name: train
num_bytes: 20499788
num_examples: 190709
- name: validation
num_bytes: 2265898
num_examples: 21203
- name: test
num_bytes: 291471
num_examples: 2714
download_size: 16227500
dataset_size: 23057157
config_names:
- dyda_da
- dyda_e
- iemocap
- maptask
- meld_e
- meld_s
- mrda
- oasis
- sem
- swda
---
# Dataset Card for SILICONE Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [N/A]
- **Repository:** https://github.com/eusip/SILICONE-benchmark
- **Paper:** https://arxiv.org/abs/2009.11152
- **Leaderboard:** [N/A]
- **Point of Contact:** [Ebenge Usip](mailto:[email protected])
### Dataset Summary
The Sequence labellIng evaLuatIon benChmark fOr spoken laNguagE (SILICONE) benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems specifically designed for spoken language. All datasets are in the English language and cover a variety of domains including daily life, scripted scenarios, joint task completion, phone call conversations, and television dialogue. Some datasets additionally include emotion and/or sentiment labels.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English.
## Dataset Structure
### Data Instances
#### DailyDialog Act Corpus (Dialogue Act)
For the `dyda_da` configuration one example from the dataset is:
```
{
'Utterance': "the taxi drivers are on strike again .",
'Dialogue_Act': 2, # "inform"
'Dialogue_ID': "2"
}
```
#### DailyDialog Act Corpus (Emotion)
For the `dyda_e` configuration one example from the dataset is:
```
{
'Utterance': "'oh , breaktime flies .'",
'Emotion': 5, # "sadness"
'Dialogue_ID': "997"
}
```
#### Interactive Emotional Dyadic Motion Capture (IEMOCAP) database
For the `iemocap` configuration one example from the dataset is:
```
{
'Dialogue_ID': "Ses04F_script03_2",
'Utterance_ID': "Ses04F_script03_2_F025",
'Utterance': "You're quite insufferable. I expect it's because you're drunk.",
'Emotion': 0, # "ang"
}
```
#### HCRC MapTask Corpus
For the `maptask` configuration one example from the dataset is:
```
{
'Speaker': "f",
'Utterance': "i think that would bring me over the crevasse",
'Dialogue_Act': 4, # "explain"
}
```
#### Multimodal EmotionLines Dataset (Emotion)
For the `meld_e` configuration one example from the dataset is:
```
{
'Utterance': "'Push 'em out , push 'em out , harder , harder .'",
'Speaker': "Joey",
'Emotion': 3, # "joy"
'Dialogue_ID': "1",
'Utterance_ID': "2"
}
```
#### Multimodal EmotionLines Dataset (Sentiment)
For the `meld_s` configuration one example from the dataset is:
```
{
'Utterance': "'Okay , y'know what ? There is no more left , left !'",
'Speaker': "Rachel",
'Sentiment': 0, # "negative"
'Dialogue_ID': "2",
'Utterance_ID': "4"
}
```
#### ICSI MRDA Corpus
For the `mrda` configuration one example from the dataset is:
```
{
'Utterance_ID': "Bed006-c2_0073656_0076706",
'Dialogue_Act': 0, # "s"
'Channel_ID': "Bed006-c2",
'Speaker': "mn015",
'Dialogue_ID': "Bed006",
'Utterance': "keith is not technically one of us yet ."
}
```
#### BT OASIS Corpus
For the `oasis` configuration one example from the dataset is:
```
{
'Speaker': "b",
'Utterance': "when i rang up um when i rang to find out why she said oh well your card's been declined",
'Dialogue_Act': 21, # "inform"
}
```
#### SEMAINE database
For the `sem` configuration one example from the dataset is:
```
{
'Utterance': "can you think of somebody who is like that ?",
'NbPairInSession': "11",
'Dialogue_ID': "59",
'SpeechTurn': "674",
'Speaker': "Agent",
'Sentiment': 1, # "Neutral"
}
```
#### Switchboard Dialog Act (SwDA) Corpus
For the `swda` configuration one example from the dataset is:
```
{
'Utterance': "but i 'd probably say that 's roughly right .",
'Dialogue_Act': 33, # "aap_am"
'From_Caller': "1255",
'To_Caller': "1087",
'Topic': "CRIME",
'Dialogue_ID': "818",
'Conv_ID': "sw2836",
}
```
### Data Fields
For the `dyda_da` configuration, the different fields are:
- `Utterance`: Utterance as a string.
- `Dialogue_Act`: Dialog act label of the utterance. It can be one of "commissive" (0), "directive" (1), "inform" (2) or "question" (3).
- `Dialogue_ID`: identifier of the dialogue as a string.
For the `dyda_e` configuration, the different fields are:
- `Utterance`: Utterance as a string.
- `Dialogue_Act`: Dialog act label of the utterance. It can be one of "anger" (0), "disgust" (1), "fear" (2), "happiness" (3), "no emotion" (4), "sadness" (5) or "surprise" (6).
- `Dialogue_ID`: identifier of the dialogue as a string.
For the `iemocap` configuration, the different fields are:
- `Dialogue_ID`: identifier of the dialogue as a string.
- `Utterance_ID`: identifier of the utterance as a string.
- `Utterance`: Utterance as a string.
- `Emotion`: Emotion label of the utterance. It can be one of "Anger" (0), "Disgust" (1), "Excitement" (2), "Fear" (3), "Frustration" (4), "Happiness" (5), "Neutral" (6), "Other" (7), "Sadness" (8), "Surprise" (9) or "Unknown" (10).
For the `maptask` configuration, the different fields are:
- `Speaker`: identifier of the speaker as a string.
- `Utterance`: Utterance as a string.
- `Dialogue_Act`: Dialog act label of the utterance. It can be one of "acknowledge" (0), "align" (1), "check" (2), "clarify" (3), "explain" (4), "instruct" (5), "query_w" (6), "query_yn" (7), "ready" (8), "reply_n" (9), "reply_w" (10) or "reply_y" (11).
For the `meld_e` configuration, the different fields are:
- `Utterance`: Utterance as a string.
- `Speaker`: Speaker as a string.
- `Emotion`: Emotion label of the utterance. It can be one of "anger" (0), "disgust" (1), "fear" (2), "joy" (3), "neutral" (4), "sadness" (5) or "surprise" (6).
- `Dialogue_ID`: identifier of the dialogue as a string.
- `Utterance_ID`: identifier of the utterance as a string.
For the `meld_s` configuration, the different fields are:
- `Utterance`: Utterance as a string.
- `Speaker`: Speaker as a string.
- `Sentiment`: Sentiment label of the utterance. It can be one of "negative" (0), "neutral" (1) or "positive" (2).
- `Dialogue_ID`: identifier of the dialogue as a string.
- `Utterance_ID`: identifier of the utterance as a string.
For the `mrda` configuration, the different fields are:
- `Utterance_ID`: identifier of the utterance as a string.
- `Dialogue_Act`: Dialog act label of the utterance. It can be one of "s" (0) [Statement/Subjective Statement], "d" (1) [Declarative Question], "b" (2) [Backchannel], "f" (3) [Follow-me] or "q" (4) [Question].
- `Channel_ID`: identifier of the channel as a string.
- `Speaker`: identifier of the speaker as a string.
- `Dialogue_ID`: identifier of the channel as a string.
- `Utterance`: Utterance as a string.
For the `oasis` configuration, the different fields are:
- `Speaker`: identifier of the speaker as a string.
- `Utterance`: Utterance as a string.
- `Dialogue_Act`: Dialog act label of the utterance. It can be one of "accept" (0), "ackn" (1), "answ" (2), "answElab" (3), "appreciate" (4), "backch" (5), "bye" (6), "complete" (7), "confirm" (8), "correct" (9), "direct" (10), "directElab" (11), "echo" (12), "exclaim" (13), "expressOpinion" (14), "expressPossibility" (15), "expressRegret" (16), "expressWish" (17), "greet" (18), "hold" (19),
"identifySelf" (20), "inform" (21), "informCont" (22), "informDisc" (23), "informIntent" (24), "init" (25), "negate" (26), "offer" (27), "pardon" (28), "raiseIssue" (29), "refer" (30), "refuse" (31), "reqDirect" (32), "reqInfo" (33), "reqModal" (34), "selfTalk" (35), "suggest" (36), "thank" (37), "informIntent-hold" (38), "correctSelf" (39), "expressRegret-inform" (40) or "thank-identifySelf" (41).
For the `sem` configuration, the different fields are:
- `Utterance`: Utterance as a string.
- `NbPairInSession`: number of utterance pairs in a dialogue.
- `Dialogue_ID`: identifier of the dialogue as a string.
- `SpeechTurn`: SpeakerTurn as a string.
- `Speaker`: Speaker as a string.
- `Sentiment`: Sentiment label of the utterance. It can be "Negative", "Neutral" or "Positive".
For the `swda` configuration, the different fields are:
`Utterance`: Utterance as a string.
`Dialogue_Act`: Dialogue act label of the utterance. It can be "sd" (0) [Statement-non-opinion], "b" (1) [Acknowledge (Backchannel)], "sv" (2) [Statement-opinion], "%" (3) [Uninterpretable], "aa" (4) [Agree/Accept], "ba" (5) [Appreciation], "fc" (6) [Conventional-closing], "qw" (7) [Wh-Question], "nn" (8) [No Answers], "bk" (9) [Response Acknowledgement], "h" (10) [Hedge], "qy^d" (11) [Declarative Yes-No-Question], "bh" (12) [Backchannel in Question Form], "^q" (13) [Quotation], "bf" (14) [Summarize/Reformulate], 'fo_o_fw_"_by_bc' (15) [Other], 'fo_o_fw_by_bc_"' (16) [Other], "na" (17) [Affirmative Non-yes Answers], "ad" (18) [Action-directive], "^2" (19) [Collaborative Completion], "b^m" (20) [Repeat-phrase], "qo" (21) [Open-Question], "qh" (22) [Rhetorical-Question], "^h" (23) [Hold Before Answer/Agreement], "ar" (24) [Reject], "ng" (25) [Negative Non-no Answers], "br" (26) [Signal-non-understanding], "no" (27) [Other Answers], "fp" (28) [Conventional-opening], "qrr" (29) [Or-Clause], "arp_nd" (30) [Dispreferred Answers], "t3" (31) [3rd-party-talk], "oo_co_cc" (32) [Offers, Options Commits], "aap_am" (33) [Maybe/Accept-part], "t1" (34) [Downplayer], "bd" (35) [Self-talk], "^g" (36) [Tag-Question], "qw^d" (37) [Declarative Wh-Question], "fa" (38) [Apology], "ft" (39) [Thanking], "+" (40) [Unknown], "x" (41) [Unknown], "ny" (42) [Unknown], "sv_fx" (43) [Unknown], "qy_qr" (44) [Unknown] or "ba_fe" (45) [Unknown].
`From_Caller`: identifier of the from caller as a string.
`To_Caller`: identifier of the to caller as a string.
`Topic`: Topic as a string.
`Dialogue_ID`: identifier of the dialogue as a string.
`Conv_ID`: identifier of the conversation as a string.
### Data Splits
| Dataset name | Train | Valid | Test |
| ------------ | ----- | ----- | ---- |
| dyda_da | 87170 | 8069 | 7740 |
| dyda_e | 87170 | 8069 | 7740 |
| iemocap | 7213 | 805 | 2021 |
| maptask | 20905 | 2963 | 2894 |
| meld_e | 9989 | 1109 | 2610 |
| meld_s | 9989 | 1109 | 2610 |
| mrda | 83944 | 9815 | 15470 |
| oasis | 12076 | 1513 | 1478 |
| sem | 4264 | 485 | 878 |
| swda | 190709 | 21203 | 2714 |
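The sketch below shows one way to load a single SILICONE configuration and decode its `ClassLabel` integers back into dialogue-act names; it assumes the benchmark is published on the Hub under the `silicone` identifier.
```python
from datasets import load_dataset

# Assumption: the benchmark loads under the "silicone" identifier;
# any configuration name from the list above can be passed.
dyda_da = load_dataset("silicone", "dyda_da")

train = dyda_da["train"]
# `Label` is a ClassLabel feature, so the integer maps back to its name.
label_names = train.features["Label"].names
example = train[0]
print(example["Utterance"], "->", label_names[example["Label"]])
```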
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Emile Chapuis, Pierre Colombo, Ebenge Usip.
### Licensing Information
This work is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/).
### Citation Information
```
@inproceedings{chapuis-etal-2020-hierarchical,
title = "Hierarchical Pre-training for Sequence Labelling in Spoken Dialog",
author = "Chapuis, Emile and
Colombo, Pierre and
Manica, Matteo and
Labeau, Matthieu and
Clavel, Chlo{\'e}",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.239",
doi = "10.18653/v1/2020.findings-emnlp.239",
pages = "2636--2648",
abstract = "Sequence labelling tasks like Dialog Act and Emotion/Sentiment identification are a key component of spoken dialog systems. In this work, we propose a new approach to learn generic representations adapted to spoken dialog, which we evaluate on a new benchmark we call Sequence labellIng evaLuatIon benChmark fOr spoken laNguagE benchmark (SILICONE). SILICONE is model-agnostic and contains 10 different datasets of various sizes. We obtain our representations with a hierarchical encoder based on transformer architectures, for which we extend two well-known pre-training objectives. Pre-training is performed on OpenSubtitles: a large corpus of spoken dialog containing over 2.3 billion of tokens. We demonstrate how hierarchical encoders achieve competitive results with consistently fewer parameters compared to state-of-the-art models and we show their importance for both pre-training and fine-tuning.",
}
```
### Contributions
Thanks to [@eusip](https://github.com/eusip) and [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
ptb-text-only/ptb_text_only | ptb-text-only | 2024-01-18T11:13:39Z | 18,281 | 16 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:other",
"size_categories:10K<n<100K",
"region:us"
] | [
"text-generation",
"fill-mask"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- other
license_details: LDC User Agreement for Non-Members
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: null
pretty_name: Penn Treebank
dataset_info:
features:
- name: sentence
dtype: string
config_name: penn_treebank
splits:
- name: train
num_bytes: 5143706
num_examples: 42068
- name: test
num_bytes: 453710
num_examples: 3761
- name: validation
num_bytes: 403156
num_examples: 3370
download_size: 5951345
dataset_size: 6000572
---
# Dataset Card for Penn Treebank
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://catalog.ldc.upenn.edu/LDC99T42
- **Repository:** 'https://raw.githubusercontent.com/wojzaremba/lstm/master/data/ptb.train.txt',
'https://raw.githubusercontent.com/wojzaremba/lstm/master/data/ptb.valid.txt',
'https://raw.githubusercontent.com/wojzaremba/lstm/master/data/ptb.test.txt'
- **Paper:** https://www.aclweb.org/anthology/J93-2004.pdf
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This is the Penn Treebank Project: Release 2 CDROM, featuring a million words of 1989 Wall Street Journal material.
Rare words in this version have already been replaced with the `<unk>` token, and numbers with the `<N>` token.
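A minimal loading sketch with the `datasets` library (the `ptb_text_only` dataset id and `penn_treebank` config name are taken from this card's metadata; adjust the id if your copy of the dataset lives under a namespace):
```python
from datasets import load_dataset

# Load the language-modelling splits of the Penn Treebank.
ptb = load_dataset("ptb_text_only", "penn_treebank")

print(ptb)  # train (42,068), validation (3,370) and test (3,761) sentences
print(ptb["train"][0]["sentence"])  # rare words appear as <unk>
```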
### Supported Tasks and Leaderboards
Language Modelling
### Languages
The text in the dataset is in American English.
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Dataset provided for research purposes only. Please check dataset license for additional information.
### Citation Information
```
@article{marcus-etal-1993-building,
    title = "Building a Large Annotated Corpus of {E}nglish: The {P}enn {T}reebank",
    author = "Marcus, Mitchell P. and
      Santorini, Beatrice and
      Marcinkiewicz, Mary Ann",
    journal = "Computational Linguistics",
    volume = "19",
    number = "2",
    year = "1993",
    url = "https://www.aclweb.org/anthology/J93-2004",
    pages = "313--330",
}
```
### Contributions
Thanks to [@harshalmittal4](https://github.com/harshalmittal4) for adding this dataset. |
ybisk/piqa | ybisk | 2024-01-18T11:13:02Z | 314,085 | 89 | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:10K<n<100K",
"arxiv:1911.11641",
"arxiv:1907.10641",
"arxiv:1904.09728",
"arxiv:1808.05326",
"region:us"
] | [
"question-answering"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: piqa
pretty_name: 'Physical Interaction: Question Answering'
dataset_info:
features:
- name: goal
dtype: string
- name: sol1
dtype: string
- name: sol2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
config_name: plain_text
splits:
- name: train
num_bytes: 4104026
num_examples: 16113
- name: test
num_bytes: 761521
num_examples: 3084
- name: validation
num_bytes: 464321
num_examples: 1838
download_size: 2638625
dataset_size: 5329868
---
# Dataset Card for "Physical Interaction: Question Answering"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [PIQA homepage](https://yonatanbisk.com/piqa/)
- **Paper:** [PIQA: Reasoning about Physical Commonsense in Natural Language](https://arxiv.org/abs/1911.11641)
- **Leaderboard:** [Official leaderboard](https://yonatanbisk.com/piqa/) *Note that there is a [2nd leaderboard](https://leaderboard.allenai.org/physicaliqa) featuring a different (blind) test set with 3,446 examples as part of the Machine Commonsense DARPA project.*
- **Point of Contact:** [Yonatan Bisk](https://yonatanbisk.com/piqa/)
### Dataset Summary
*To apply eyeshadow without a brush, should I use a cotton swab or a toothpick?*
Questions requiring this kind of physical commonsense pose a challenge to state-of-the-art
natural language understanding systems. The PIQA dataset introduces the task of physical commonsense reasoning
and a corresponding benchmark dataset Physical Interaction: Question Answering or PIQA.
Physical commonsense knowledge is a major challenge on the road to true AI-completeness,
including robots that interact with the world and understand natural language.
PIQA focuses on everyday situations with a preference for atypical solutions.
The dataset is inspired by instructables.com, which provides users with instructions on how to build, craft,
bake, or manipulate objects using everyday materials.
### Supported Tasks and Leaderboards
The underlying task is formulated as multiple choice question answering: given a question `q` and two possible solutions `s1`, `s2`, a model or a human must choose the most appropriate solution, of which exactly one is correct.
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
An example looks like this:
```
{
"goal": "How do I ready a guinea pig cage for it's new occupants?",
"sol1": "Provide the guinea pig with a cage full of a few inches of bedding made of ripped paper strips, you will also need to supply it with a water bottle and a food dish.",
"sol2": "Provide the guinea pig with a cage full of a few inches of bedding made of ripped jeans material, you will also need to supply it with a water bottle and a food dish.",
"label": 0,
}
```
Note that the test set contains no labels. Predictions need to be submitted to the leaderboard.
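As a quick sanity check, the dataset can be loaded with the `datasets` library and scored with a trivial always-pick-`sol1` baseline (a minimal sketch; the `piqa` id and field names follow this card, and recent `datasets` versions may additionally require `trust_remote_code=True` for script-based datasets):
```python
from datasets import load_dataset

# Load PIQA; field names (goal/sol1/sol2/label) follow this card.
piqa = load_dataset("piqa")

# Trivial baseline: always choose the first solution.
validation = piqa["validation"]
accuracy = sum(ex["label"] == 0 for ex in validation) / len(validation)
print(f"sol1-always accuracy: {accuracy:.3f}")
```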
### Data Fields
- `goal`: the question which requires physical commonsense to be answered correctly
- `sol1`: the first solution
- `sol2`: the second solution
- `label`: the correct solution. `0` refers to `sol1` and `1` refers to `sol2`
### Data Splits
The dataset contains 16,113 examples for training, 1,838 for validation and 3,084 for testing.
## Dataset Creation
### Curation Rationale
The goal of the dataset is to construct a resource that requires concrete physical reasoning.
### Source Data
The authors provide a prompt to the annotators derived from instructables.com. The instructables website is a crowdsourced collection of instructions for doing everything from cooking to car repair. In most cases, users provide images or videos detailing each step and a list of tools that will be required. Most goals are simultaneously rare and unsurprising. While an annotator is unlikely to have built a UV-fluorescent steampunk lamp or made a backpack out of duct tape, it is not surprising that someone interested in home crafting would create these, nor will the tools and materials be unfamiliar to the average person. Using these examples as the seed for their annotation helps remind annotators about the less prototypical uses of everyday objects. Second, and equally important, is that instructions build on one another. This means that any QA pair inspired by an instructable is more likely to explicitly state assumptions about what preconditions need to be met to start the task and what postconditions define success.
Annotators were asked to glance at the instructions of an instructable and pull out or have it inspire them to construct two component tasks. They would then articulate the goal (often centered on atypical materials) and how to achieve it. In addition, annotators were asked to provide a permutation to their own solution which makes it invalid (the negative solution), often subtly.
#### Initial Data Collection and Normalization
During validation, examples with low agreement were removed from the data.
The dataset is further cleaned to remove stylistic artifacts and trivial examples, which have been shown to artificially inflate model performance on previous NLI benchmarks, using the AFLite algorithm introduced in ([Sakaguchi et al. 2020](https://arxiv.org/abs/1907.10641); [Sap et al. 2019](https://arxiv.org/abs/1904.09728)), which is an improvement on adversarial filtering ([Zellers et al., 2018](https://arxiv.org/abs/1808.05326)).
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
Annotations are by construction obtained when crowdsourcers complete the prompt.
#### Who are the annotators?
Paid crowdsourcers
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Unknown
### Citation Information
```
@inproceedings{Bisk2020,
author = {Yonatan Bisk and Rowan Zellers and
Ronan Le Bras and Jianfeng Gao
and Yejin Choi},
title = {PIQA: Reasoning about Physical Commonsense in
Natural Language},
booktitle = {Thirty-Fourth AAAI Conference on
Artificial Intelligence},
year = {2020},
}
```
### Contributions
Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset. |
e9t/nsmc | e9t | 2024-01-18T11:10:49Z | 856 | 9 | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:ko",
"license:cc-by-2.0",
"size_categories:100K<n<1M",
"region:us"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- ko
license:
- cc-by-2.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: nsmc
pretty_name: Naver Sentiment Movie Corpus
dataset_info:
features:
- name: id
dtype: string
- name: document
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 16423803
num_examples: 150000
- name: test
num_bytes: 5491417
num_examples: 50000
download_size: 19522142
dataset_size: 21915220
---
# Dataset Card for Naver sentiment movie corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/e9t/nsmc/)
- **Repository:** [Github](https://github.com/e9t/nsmc/)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
Each instance is a movie review written by a Korean internet user on Naver, the most commonly used search engine in Korea. Each row can be broken down into the following fields:
- `id`: A unique review ID, provided by Naver
- `document`: The actual movie review
- `label`: Binary labels for sentiment analysis, where `0` denotes negative, and `1`, positive
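A minimal loading sketch with the `datasets` library (the `nsmc` id and field names follow this card):
```python
from datasets import load_dataset

# Load NSMC and inspect the label balance of the training split.
nsmc = load_dataset("nsmc")
train = nsmc["train"]

positives = sum(train["label"])
print(f"{positives}/{len(train)} positive reviews")

# Map the integer label back to its string name for one example.
example = train[0]
print(example["document"], "->", train.features["label"].int2str(example["label"]))
```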
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@InProceedings{Park:2016,
title = "Naver Sentiment Movie Corpus",
author = "Lucy Park",
year = "2016",
  howpublished = {\url{https://github.com/e9t/nsmc}}
}
```
### Contributions
Thanks to [@jaketae](https://github.com/jaketae) for adding this dataset. |
lil-lab/newsroom | lil-lab | 2024-01-18T11:10:34Z | 229 | 12 | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | [
"summarization"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: CORNELL NEWSROOM
size_categories:
- unknown
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: newsroom
dataset_info:
features:
- name: text
dtype: string
- name: summary
dtype: string
- name: title
dtype: string
- name: url
dtype: string
- name: date
dtype: string
- name: density_bin
dtype: string
- name: coverage_bin
dtype: string
- name: compression_bin
dtype: string
- name: density
dtype: float32
- name: coverage
dtype: float32
- name: compression
dtype: float32
splits:
- name: test
num_bytes: 472446866
num_examples: 108862
- name: train
num_bytes: 4357506078
num_examples: 995041
- name: validation
num_bytes: 473206951
num_examples: 108837
download_size: 0
dataset_size: 5303159895
---
# Dataset Card for "newsroom"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://lil.nlp.cornell.edu/newsroom/index.html](https://lil.nlp.cornell.edu/newsroom/index.html)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 5.30 GB
- **Total amount of disk used:** 5.30 GB
### Dataset Summary
NEWSROOM is a large dataset for training and evaluating summarization systems.
It contains 1.3 million articles and summaries written by authors and
editors in the newsrooms of 38 major publications.
Dataset features include:
- text: Input news text.
- summary: Summary for the news.
And additional features:
- title: news title.
- url: url of the news.
- date: date of the article.
- density: extractive density.
- coverage: extractive coverage.
- compression: compression ratio.
- density_bin: low, medium, high.
- coverage_bin: extractive, abstractive.
- compression_bin: low, medium, high.
This dataset can be downloaded upon request. Unzip all the contents
("train.jsonl", "dev.jsonl", "test.jsonl") into the `tfds` folder.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
English (`en`).
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 5.30 GB
- **Total amount of disk used:** 5.30 GB
An example of 'train' looks as follows.
```
{
"compression": 33.880001068115234,
"compression_bin": "medium",
"coverage": 1.0,
"coverage_bin": "high",
"date": "200600000",
"density": 11.720000267028809,
"density_bin": "extractive",
"summary": "some summary 1",
"text": "some text 1",
"title": "news title 1",
"url": "url.html"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `text`: a `string` feature.
- `summary`: a `string` feature.
- `title`: a `string` feature.
- `url`: a `string` feature.
- `date`: a `string` feature.
- `density_bin`: a `string` feature.
- `coverage_bin`: a `string` feature.
- `compression_bin`: a `string` feature.
- `density`: a `float32` feature.
- `coverage`: a `float32` feature.
- `compression`: a `float32` feature.
### Data Splits
| name |train |validation| test |
|-------|-----:|---------:|-----:|
|default|995041| 108837|108862|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
https://cornell.qualtrics.com/jfe/form/SV_6YA3HQ2p75XH4IR
This Dataset Usage Agreement ("Agreement") is a legal agreement with the Cornell Newsroom Summaries Team ("Newsroom") for the Dataset made available to the individual or entity ("Researcher") exercising rights under this Agreement. "Dataset" includes all text, data, information, source code, and any related materials, documentation, files, media, updates or revisions.
The Dataset is intended for non-commercial research and educational purposes only, and is made available free of charge without extending any license or other intellectual property rights. By downloading or using the Dataset, the Researcher acknowledges that they agree to the terms in this Agreement, and represent and warrant that they have authority to do so on behalf of any entity exercising rights under this Agreement. The Researcher accepts and agrees to be bound by the terms and conditions of this Agreement. If the Researcher does not agree to this Agreement, they may not download or use the Dataset.
By sharing content with Newsroom, such as by submitting content to this site or by corresponding with Newsroom contributors, the Researcher grants Newsroom the right to use, reproduce, display, perform, adapt, modify, distribute, have distributed, and promote the content in any form, anywhere and for any purpose, such as for evaluating and comparing summarization systems. Nothing in this Agreement shall obligate Newsroom to provide any support for the Dataset. Any feedback, suggestions, ideas, comments, improvements given by the Researcher related to the Dataset is voluntarily given, and may be used by Newsroom without obligation or restriction of any kind.
The Researcher accepts full responsibility for their use of the Dataset and shall defend indemnify, and hold harmless Newsroom, including their employees, trustees, officers, and agents, against any and all claims arising from the Researcher's use of the Dataset. The Researcher agrees to comply with all laws and regulations as they relate to access to and use of the Dataset and Service including U.S. export jurisdiction and other U.S. and international regulations.
THE DATASET IS PROVIDED "AS IS." NEWSROOM DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT. WITHOUT LIMITATION OF THE ABOVE, NEWSROOM DISCLAIMS ANY WARRANTY THAT DATASET IS BUG OR ERROR-FREE, AND GRANTS NO WARRANTY REGARDING ITS USE OR THE RESULTS THEREFROM INCLUDING, WITHOUT LIMITATION, ITS CORRECTNESS, ACCURACY, OR RELIABILITY. THE DATASET IS NOT WARRANTIED TO FULFILL ANY PARTICULAR PURPOSES OR NEEDS.
TO THE EXTENT NOT PROHIBITED BY LAW, IN NO EVENT SHALL NEWSROOM BE LIABLE FOR ANY LOSS, DAMAGE OR INJURY, DIRECT AND INDIRECT, INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES, HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER FOR BREACH OF CONTRACT, TORT (INCLUDING NEGLIGENCE) OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, INCLUDING BUT NOT LIMITED TO LOSS OF PROFITS, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. THESE LIMITATIONS SHALL APPLY NOTWITHSTANDING ANY FAILURE OF ESSENTIAL PURPOSE OF ANY LIMITED REMEDY.
This Agreement is effective until terminated. Newsroom reserves the right to terminate the Researcher's access to the Dataset at any time. If the Researcher breaches this Agreement, the Researcher's rights to use the Dataset shall terminate automatically. The Researcher will immediately cease all use and distribution of the Dataset and destroy any copies or portions of the Dataset in their possession.
This Agreement is governed by the laws of the State of New York, without regard to conflict of law principles. All terms and provisions of this Agreement shall, if possible, be construed in a manner which makes them valid, but in the event any term or provision of this Agreement is found by a court of competent jurisdiction to be illegal or unenforceable, the validity or enforceability of the remainder of this Agreement shall not be affected.
This Agreement is the complete and exclusive agreement between the parties with respect to its subject matter and supersedes all prior or contemporaneous oral or written agreements or understandings relating to the subject matter.
### Citation Information
```
@inproceedings{N18-1065,
author = {Grusky, Max and Naaman, Mor and Artzi, Yoav},
title = {NEWSROOM: A Dataset of 1.3 Million Summaries
with Diverse Extractive Strategies},
booktitle = {Proceedings of the 2018 Conference of the
North American Chapter of the Association for
Computational Linguistics: Human Language Technologies},
year = {2018},
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@yoavartzi](https://github.com/yoavartzi), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
reciTAL/mlsum | reciTAL | 2024-01-18T11:09:09Z | 1,664 | 53 | [
"task_categories:summarization",
"task_categories:translation",
"task_categories:text-classification",
"task_ids:news-articles-summarization",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:topic-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:extended|cnn_dailymail",
"source_datasets:original",
"language:de",
"language:es",
"language:fr",
"language:ru",
"language:tr",
"license:other",
"size_categories:100K<n<1M",
"region:us"
] | [
"summarization",
"translation",
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- found
language_creators:
- found
language:
- de
- es
- fr
- ru
- tr
license:
- other
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
- 10K<n<100K
source_datasets:
- extended|cnn_dailymail
- original
task_categories:
- summarization
- translation
- text-classification
task_ids:
- news-articles-summarization
- multi-class-classification
- multi-label-classification
- topic-classification
paperswithcode_id: mlsum
pretty_name: MLSUM
dataset_info:
- config_name: de
features:
- name: text
dtype: string
- name: summary
dtype: string
- name: topic
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: date
dtype: string
splits:
- name: train
num_bytes: 846959840
num_examples: 220887
- name: validation
num_bytes: 47119541
num_examples: 11394
- name: test
num_bytes: 46847612
num_examples: 10701
download_size: 1005814154
dataset_size: 940926993
- config_name: es
features:
- name: text
dtype: string
- name: summary
dtype: string
- name: topic
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: date
dtype: string
splits:
- name: train
num_bytes: 1214558302
num_examples: 266367
- name: validation
num_bytes: 50643400
num_examples: 10358
- name: test
num_bytes: 71263665
num_examples: 13920
download_size: 1456211154
dataset_size: 1336465367
- config_name: fr
features:
- name: text
dtype: string
- name: summary
dtype: string
- name: topic
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: date
dtype: string
splits:
- name: train
num_bytes: 1471965014
num_examples: 392902
- name: validation
num_bytes: 70413212
num_examples: 16059
- name: test
num_bytes: 69660288
num_examples: 15828
download_size: 1849565564
dataset_size: 1612038514
- config_name: ru
features:
- name: text
dtype: string
- name: summary
dtype: string
- name: topic
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: date
dtype: string
splits:
- name: train
num_bytes: 257389497
num_examples: 25556
- name: validation
num_bytes: 9128497
num_examples: 750
- name: test
num_bytes: 9656398
num_examples: 757
download_size: 766226107
dataset_size: 276174392
- config_name: tu
features:
- name: text
dtype: string
- name: summary
dtype: string
- name: topic
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: date
dtype: string
splits:
- name: train
num_bytes: 641622783
num_examples: 249277
- name: validation
num_bytes: 25530661
num_examples: 11565
- name: test
num_bytes: 27830212
num_examples: 12775
download_size: 942308960
dataset_size: 694983656
config_names:
- de
- es
- fr
- ru
- tu
---
# Dataset Card for MLSUM
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/recitalAI/MLSUM
- **Paper:** https://www.aclweb.org/anthology/2020.emnlp-main.647/
- **Point of Contact:** [email]([email protected])
- **Size of downloaded dataset files:** 1.83 GB
- **Size of the generated dataset:** 4.86 GB
- **Total amount of disk used:** 6.69 GB
### Dataset Summary
We present MLSUM, the first large-scale MultiLingual SUMmarization dataset.
Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages -- namely, French, German, Spanish, Russian, Turkish.
Together with English newspapers from the popular CNN/Daily mail dataset, the collected data form a large scale multilingual dataset which can enable new research directions for the text summarization community.
We report cross-lingual comparative analyses based on state-of-the-art systems.
These highlight existing biases which motivate the use of a multi-lingual dataset.
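Each language is exposed as its own config (note that Turkish uses the config name `tu`); a minimal loading sketch for the German portion:
```python
from datasets import load_dataset

# Load the German split; swap "de" for "es", "fr", "ru" or "tu" (Turkish).
mlsum_de = load_dataset("mlsum", "de")

article = mlsum_de["train"][0]
print(article["title"], "|", article["topic"])
print(article["summary"])
```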
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### de
- **Size of downloaded dataset files:** 346.58 MB
- **Size of the generated dataset:** 940.93 MB
- **Total amount of disk used:** 1.29 GB
An example of 'validation' looks as follows.
```
{
"date": "01/01/2001",
"summary": "A text",
"text": "This is a text",
"title": "A sample",
"topic": "football",
"url": "https://www.google.com"
}
```
#### es
- **Size of downloaded dataset files:** 513.31 MB
- **Size of the generated dataset:** 1.34 GB
- **Total amount of disk used:** 1.85 GB
An example of 'validation' looks as follows.
```
{
"date": "01/01/2001",
"summary": "A text",
"text": "This is a text",
"title": "A sample",
"topic": "football",
"url": "https://www.google.com"
}
```
#### fr
- **Size of downloaded dataset files:** 619.99 MB
- **Size of the generated dataset:** 1.61 GB
- **Total amount of disk used:** 2.23 GB
An example of 'validation' looks as follows.
```
{
"date": "01/01/2001",
"summary": "A text",
"text": "This is a text",
"title": "A sample",
"topic": "football",
"url": "https://www.google.com"
}
```
#### ru
- **Size of downloaded dataset files:** 106.22 MB
- **Size of the generated dataset:** 276.17 MB
- **Total amount of disk used:** 382.39 MB
An example of 'train' looks as follows.
```
{
"date": "01/01/2001",
"summary": "A text",
"text": "This is a text",
"title": "A sample",
"topic": "football",
"url": "https://www.google.com"
}
```
#### tu
- **Size of downloaded dataset files:** 247.50 MB
- **Size of the generated dataset:** 694.99 MB
- **Total amount of disk used:** 942.48 MB
An example of 'train' looks as follows.
```
{
"date": "01/01/2001",
"summary": "A text",
"text": "This is a text",
"title": "A sample",
"topic": "football",
"url": "https://www.google.com"
}
```
### Data Fields
The data fields are the same among all splits.
#### de
- `text`: a `string` feature.
- `summary`: a `string` feature.
- `topic`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `date`: a `string` feature.
#### es
- `text`: a `string` feature.
- `summary`: a `string` feature.
- `topic`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `date`: a `string` feature.
#### fr
- `text`: a `string` feature.
- `summary`: a `string` feature.
- `topic`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `date`: a `string` feature.
#### ru
- `text`: a `string` feature.
- `summary`: a `string` feature.
- `topic`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `date`: a `string` feature.
#### tu
- `text`: a `string` feature.
- `summary`: a `string` feature.
- `topic`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `date`: a `string` feature.
### Data Splits
|name|train |validation|test |
|----|-----:|---------:|----:|
|de |220887| 11394|10701|
|es |266367| 10358|13920|
|fr |392902| 16059|15828|
|ru | 25556| 750| 757|
|tu |249277| 11565|12775|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Usage of dataset is restricted to non-commercial research purposes only. Copyright belongs to the original copyright holders. See https://github.com/recitalAI/MLSUM#mlsum
### Citation Information
```
@article{scialom2020mlsum,
title={MLSUM: The Multilingual Summarization Corpus},
author={Scialom, Thomas and Dray, Paul-Alexis and Lamprier, Sylvain and Piwowarski, Benjamin and Staiano, Jacopo},
journal={arXiv preprint arXiv:2004.14900},
year={2020}
}
```
### Contributions
Thanks to [@RachelKer](https://github.com/RachelKer), [@albertvillanova](https://github.com/albertvillanova), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
UdS-LSV/menyo20k_mt | UdS-LSV | 2024-01-18T11:08:52Z | 111 | 3 | [
"task_categories:translation",
"annotations_creators:expert-generated",
"annotations_creators:found",
"language_creators:found",
"multilinguality:translation",
"source_datasets:original",
"language:en",
"language:yo",
"license:cc-by-nc-4.0",
"size_categories:10K<n<100K",
"arxiv:2103.08647",
"region:us"
] | [
"translation"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
- found
language_creators:
- found
language:
- en
- yo
license:
- cc-by-nc-4.0
multilinguality:
- translation
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: menyo-20k
pretty_name: MENYO-20k
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- en
- yo
config_name: menyo20k_mt
splits:
- name: train
num_bytes: 2551345
num_examples: 10070
- name: validation
num_bytes: 870011
num_examples: 3397
- name: test
num_bytes: 1905432
num_examples: 6633
download_size: 5206234
dataset_size: 5326788
---
# Dataset Card for MENYO-20k
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/uds-lsv/menyo-20k_MT/
- **Paper:** [The Effect of Domain and Diacritics in Yorùbá-English Neural Machine Translation](https://arxiv.org/abs/2103.08647)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
MENYO-20k is a multi-domain parallel dataset with texts obtained from news articles, ted talks, movie transcripts, radio transcripts, science and technology texts, and other short articles curated from the web and professional translators. The dataset has 20,100 parallel sentences split into 10,070 training sentences, 3,397 development sentences, and 6,633 test sentences (3,419 multi-domain, 1,714 news domain, and 1,500 ted talks speech transcript domain).
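A minimal loading sketch with the `datasets` library (the `menyo20k_mt` id and the nested `translation` field follow this card):
```python
from datasets import load_dataset

# Load the parallel corpus and read one English-Yoruba pair.
menyo = load_dataset("menyo20k_mt")

pair = menyo["train"][0]["translation"]
print("en:", pair["en"])
print("yo:", pair["yo"])
```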
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Languages are English and Yoruba.
## Dataset Structure
### Data Instances
An instance example:
```
{'translation':
{'en': 'Unit 1: What is Creative Commons?',
'yo': 'Ìdá 1: Kín ni Creative Commons?'
}
}
```
### Data Fields
- `translation`:
- `en`: English sentence.
- `yo`: Yoruba sentence.
### Data Splits
Training, validation and test splits are available.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is open, but for non-commercial use only, because some data sources like TED talks and JW news require permission for commercial use.
The dataset is licensed under Creative Commons [Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/) License: https://github.com/uds-lsv/menyo-20k_MT/blob/master/LICENSE
### Citation Information
If you use this dataset, please cite this paper:
```
@inproceedings{adelani-etal-2021-effect,
title = "The Effect of Domain and Diacritics in {Y}oruba{--}{E}nglish Neural Machine Translation",
author = "Adelani, David and
Ruiter, Dana and
Alabi, Jesujoba and
Adebonojo, Damilola and
Ayeni, Adesina and
Adeyemi, Mofe and
Awokoya, Ayodele Esther and
Espa{\~n}a-Bonet, Cristina",
booktitle = "Proceedings of the 18th Biennial Machine Translation Summit (Volume 1: Research Track)",
month = aug,
year = "2021",
address = "Virtual",
publisher = "Association for Machine Translation in the Americas",
url = "https://aclanthology.org/2021.mtsummit-research.6",
pages = "61--75",
abstract = "Massively multilingual machine translation (MT) has shown impressive capabilities and including zero and few-shot translation between low-resource language pairs. However and these models are often evaluated on high-resource languages with the assumption that they generalize to low-resource ones. The difficulty of evaluating MT models on low-resource pairs is often due to lack of standardized evaluation datasets. In this paper and we present MENYO-20k and the first multi-domain parallel corpus with a especially curated orthography for Yoruba{--}English with standardized train-test splits for benchmarking. We provide several neural MT benchmarks and compare them to the performance of popular pre-trained (massively multilingual) MT models both for the heterogeneous test set and its subdomains. Since these pre-trained models use huge amounts of data with uncertain quality and we also analyze the effect of diacritics and a major characteristic of Yoruba and in the training data. We investigate how and when this training condition affects the final quality of a translation and its understandability.Our models outperform massively multilingual models such as Google ($+8.7$ BLEU) and Facebook M2M ($+9.1$) when translating to Yoruba and setting a high quality benchmark for future research.",
}
```
### Contributions
Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset.
|
hotpotqa/hotpot_qa | hotpotqa | 2024-01-18T11:05:40Z | 10,442 | 121 | [
"task_categories:question-answering",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"size_categories:100K<n<1M",
"arxiv:1809.09600",
"region:us",
"multi-hop"
] | [
"question-answering"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: HotpotQA
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
task_ids: []
paperswithcode_id: hotpotqa
tags:
- multi-hop
dataset_info:
- config_name: distractor
features:
- name: id
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: type
dtype: string
- name: level
dtype: string
- name: supporting_facts
sequence:
- name: title
dtype: string
- name: sent_id
dtype: int32
- name: context
sequence:
- name: title
dtype: string
- name: sentences
sequence: string
splits:
- name: train
num_bytes: 552949315
num_examples: 90447
- name: validation
num_bytes: 45716111
num_examples: 7405
download_size: 612746344
dataset_size: 598665426
- config_name: fullwiki
features:
- name: id
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: type
dtype: string
- name: level
dtype: string
- name: supporting_facts
sequence:
- name: title
dtype: string
- name: sent_id
dtype: int32
- name: context
sequence:
- name: title
dtype: string
- name: sentences
sequence: string
splits:
- name: train
num_bytes: 552949315
num_examples: 90447
- name: validation
num_bytes: 46848601
num_examples: 7405
- name: test
num_bytes: 46000102
num_examples: 7405
download_size: 660094672
dataset_size: 645798018
---
# Dataset Card for "hotpot_qa"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://hotpotqa.github.io/](https://hotpotqa.github.io/)
- **Repository:** https://github.com/hotpotqa/hotpot
- **Paper:** [HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering](https://arxiv.org/abs/1809.09600)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.27 GB
- **Size of the generated dataset:** 1.24 GB
- **Total amount of disk used:** 2.52 GB
### Dataset Summary
HotpotQA is a new dataset with 113k Wikipedia-based question-answer pairs with four key features: (1) the questions require finding and reasoning over multiple supporting documents to answer; (2) the questions are diverse and not constrained to any pre-existing knowledge bases or knowledge schemas; (3) we provide sentence-level supporting facts required for reasoning, allowing QA systems to reason with strong supervision and explain the predictions; (4) we offer a new type of factoid comparison questions to test QA systems’ ability to extract relevant facts and perform necessary comparison.
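A minimal sketch of loading the distractor setting and reassembling the gold supporting sentences for one question (the `hotpot_qa` id and field names follow this card):
```python
from datasets import load_dataset

# Load the distractor configuration (use "fullwiki" for the open setting).
hotpot = load_dataset("hotpot_qa", "distractor")

ex = hotpot["validation"][0]
titles = ex["context"]["title"]
sentences = ex["context"]["sentences"]

# supporting_facts pairs a paragraph title with a sentence index.
for title, sent_id in zip(ex["supporting_facts"]["title"],
                          ex["supporting_facts"]["sent_id"]):
    paragraph = sentences[titles.index(title)]
    if sent_id < len(paragraph):  # guard against out-of-range annotations
        print(title, "->", paragraph[sent_id])

print("Q:", ex["question"], "| A:", ex["answer"])
```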
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### distractor
- **Size of downloaded dataset files:** 612.75 MB
- **Size of the generated dataset:** 598.66 MB
- **Total amount of disk used:** 1.21 GB
An example of 'validation' looks as follows.
```
{
"answer": "This is the answer",
"context": {
"sentences": [["Sent 1"], ["Sent 21", "Sent 22"]],
"title": ["Title1", "Title 2"]
},
"id": "000001",
"level": "medium",
"question": "What is the answer?",
"supporting_facts": {
"sent_id": [0, 1, 3],
"title": ["Title of para 1", "Title of para 2", "Title of para 3"]
},
"type": "comparison"
}
```
#### fullwiki
- **Size of downloaded dataset files:** 660.10 MB
- **Size of the generated dataset:** 645.80 MB
- **Total amount of disk used:** 1.31 GB
An example of 'train' looks as follows.
```
{
"answer": "This is the answer",
"context": {
"sentences": [["Sent 1"], ["Sent 2"]],
"title": ["Title1", "Title 2"]
},
"id": "000001",
"level": "hard",
"question": "What is the answer?",
"supporting_facts": {
"sent_id": [0, 1, 3],
"title": ["Title of para 1", "Title of para 2", "Title of para 3"]
},
"type": "bridge"
}
```
### Data Fields
The data fields are the same among all splits.
#### distractor
- `id`: a `string` feature.
- `question`: a `string` feature.
- `answer`: a `string` feature.
- `type`: a `string` feature.
- `level`: a `string` feature.
- `supporting_facts`: a dictionary feature containing:
- `title`: a `string` feature.
- `sent_id`: a `int32` feature.
- `context`: a dictionary feature containing:
- `title`: a `string` feature.
- `sentences`: a `list` of `string` features.
#### fullwiki
- `id`: a `string` feature.
- `question`: a `string` feature.
- `answer`: a `string` feature.
- `type`: a `string` feature.
- `level`: a `string` feature.
- `supporting_facts`: a dictionary feature containing:
- `title`: a `string` feature.
- `sent_id`: a `int32` feature.
- `context`: a dictionary feature containing:
- `title`: a `string` feature.
- `sentences`: a `list` of `string` features.
### Data Splits
#### distractor
| |train|validation|
|----------|----:|---------:|
|distractor|90447| 7405|
#### fullwiki
| |train|validation|test|
|--------|----:|---------:|---:|
|fullwiki|90447| 7405|7405|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
HotpotQA is distributed under a [CC BY-SA 4.0 License](http://creativecommons.org/licenses/by-sa/4.0/).
### Citation Information
```
@inproceedings{yang2018hotpotqa,
title={{HotpotQA}: A Dataset for Diverse, Explainable Multi-hop Question Answering},
author={Yang, Zhilin and Qi, Peng and Zhang, Saizheng and Bengio, Yoshua and Cohen, William W. and Salakhutdinov, Ruslan and Manning, Christopher D.},
booktitle={Conference on Empirical Methods in Natural Language Processing ({EMNLP})},
year={2018}
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova), [@ghomasHudson](https://github.com/ghomasHudson) for adding this dataset. |
NLP-AUEB/eurlex | NLP-AUEB | 2024-01-18T11:03:22Z | 225 | 17 | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"size_categories:10K<n<100K",
"region:us",
"legal-topic-classification"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
paperswithcode_id: eurlex57k
pretty_name: the EUR-Lex dataset
tags:
- legal-topic-classification
dataset_info:
features:
- name: celex_id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: eurovoc_concepts
sequence: string
config_name: eurlex57k
splits:
- name: train
num_bytes: 167603718
num_examples: 45000
- name: test
num_bytes: 22046706
num_examples: 6000
- name: validation
num_bytes: 21942574
num_examples: 6000
download_size: 50289403
dataset_size: 211592998
---
# Dataset Card for the EUR-Lex dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://nlp.cs.aueb.gr/software_and_datasets/EURLEX57K/
- **Repository:** http://nlp.cs.aueb.gr/software_and_datasets/EURLEX57K/
- **Paper:** https://www.aclweb.org/anthology/P19-1636/
- **Leaderboard:** N/A
- **Point of Contact:** [Ilias Chalkidis](mailto:[email protected])
### Dataset Summary
EURLEX57K can be viewed as an improved version of the dataset released by Mencía and Fürnkranz (2007), which has been widely used in Large-scale Multi-label Text Classification (LMTC) research, but is less than half the size of EURLEX57K (19.6k documents, 4k EUROVOC labels) and more than ten years old.
EURLEX57K contains 57k legislative documents in English from EUR-Lex (https://eur-lex.europa.eu) with an average length of 727 words. Each document contains three major zones:
- the header, which includes the title and name of the legal body enforcing the legal act;
- the recitals, which are legal background references; and
- the main body, usually organized in articles.
**Labeling / Annotation**
All the documents of the dataset have been annotated by the Publications Office of EU (https://publications.europa.eu/en) with multiple concepts from EUROVOC (http://eurovoc.europa.eu/).
While EUROVOC includes approx. 7k concepts (labels), only 4,271 (59.31%) are present in EURLEX57K, from which only 2,049 (47.97%) have been assigned to more than 10 documents. The 4,271 labels are also divided into frequent (746 labels), few-shot (3,362), and zero-shot (163), depending on whether they were assigned to more than 50, fewer than 50 but at least one, or no training documents, respectively.
### Supported Tasks and Leaderboards
The dataset supports:
**Multi-label Text Classification:** Given the text of a document, a model predicts the relevant EUROVOC concepts.
**Few-shot and Zero-shot learning:** As already noted, the labels can be divided into three groups: frequent (746 labels), few-shot (3,362), and zero-shot (163), depending on whether they were assigned to more than 50, fewer than 50 but at least one, or no training documents, respectively.
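A minimal sketch of preparing the label space for these tasks, assuming the dataset loads under the name `eurlex` and that scikit-learn is installed; the frequency thresholds mirror the definitions above:

```python
from collections import Counter

from datasets import load_dataset
from sklearn.preprocessing import MultiLabelBinarizer

ds = load_dataset("eurlex")  # hypothetical loading name for this dataset

# Binarize the EUROVOC label sets into a multi-hot matrix for multi-label classification.
mlb = MultiLabelBinarizer()
y_train = mlb.fit_transform(ds["train"]["eurovoc_concepts"])

# Bucket labels by training frequency: frequent (>50 train docs), few-shot (1-50).
# Zero-shot labels never occur in train, so they only appear in dev/test label sets.
train_counts = Counter(c for concepts in ds["train"]["eurovoc_concepts"] for c in concepts)
frequent = [c for c, n in train_counts.items() if n > 50]
few_shot = [c for c, n in train_counts.items() if 0 < n <= 50]
```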
### Languages
All documents are written in English.
## Dataset Structure
### Data Instances
```json
{
"celex_id": "31979D0509",
"title": "79/509/EEC: Council Decision of 24 May 1979 on financial aid from the Community for the eradication of African swine fever in Spain",
"text": "COUNCIL DECISION of 24 May 1979 on financial aid from the Community for the eradication of African swine fever in Spain (79/509/EEC)\nTHE COUNCIL OF THE EUROPEAN COMMUNITIES\nHaving regard to the Treaty establishing the European Economic Community, and in particular Article 43 thereof,\nHaving regard to the proposal from the Commission (1),\nHaving regard to the opinion of the European Parliament (2),\nWhereas the Community should take all appropriate measures to protect itself against the appearance of African swine fever on its territory;\nWhereas to this end the Community has undertaken, and continues to undertake, action designed to contain outbreaks of this type of disease far from its frontiers by helping countries affected to reinforce their preventive measures ; whereas for this purpose Community subsidies have already been granted to Spain;\nWhereas these measures have unquestionably made an effective contribution to the protection of Community livestock, especially through the creation and maintenance of a buffer zone north of the river Ebro;\nWhereas, however, in the opinion of the Spanish authorities themselves, the measures so far implemented must be reinforced if the fundamental objective of eradicating the disease from the entire country is to be achieved;\nWhereas the Spanish authorities have asked the Community to contribute to the expenses necessary for the efficient implementation of a total eradication programme;\nWhereas a favourable response should be given to this request by granting aid to Spain, having regard to the undertaking given by that country to protect the Community against African swine fever and to eliminate completely this disease by the end of a five-year eradication plan;\nWhereas this eradication plan must include certain measures which guarantee the effectiveness of the action taken, and it must be possible to adapt these measures to developments in the situation by means of a procedure establishing close cooperation between the Member States and the Commission;\nWhereas it is necessary to keep the Member States regularly informed as to the progress of the action undertaken,",
"eurovoc_concepts": ["192", "2356", "2560", "862", "863"]
}
```
### Data Fields
The following data fields are provided for documents (`train`, `dev`, `test`):
`celex_id`: (**str**) The official ID of the document. The CELEX number is the unique identifier for all publications in both Eur-Lex and CELLAR.\
`title`: (**str**) The title of the document.\
`text`: (**str**) The full content of each document, which is represented by its `header`, `recitals` and `main_body`.\
`eurovoc_concepts`: (**List[str]**) The relevant EUROVOC concepts (labels).
If you want to use the descriptors of EUROVOC concepts, similar to Chalkidis et al. (2020), please load: https://archive.org/download/EURLEX57K/eurovoc_concepts.jsonl
```python
import json

with open('./eurovoc_concepts.jsonl') as jsonl_file:
    # each line holds one JSON object; collect them in a list
    # (a set comprehension would raise TypeError, since dicts are not hashable)
    eurovoc_concepts = [json.loads(line) for line in jsonl_file]
```
### Data Splits
| Split       | No of Documents | Avg. words | Avg. labels |
|-------------|-----------------|------------|-------------|
| Train       | 45,000          | 729        | 5           |
| Development | 6,000           | 714        | 5           |
| Test        | 6,000           | 725        | 5           |
## Dataset Creation
### Curation Rationale
The dataset was curated by Chalkidis et al. (2019).\
The documents have been annotated by the Publications Office of EU (https://publications.europa.eu/en).
### Source Data
#### Initial Data Collection and Normalization
The original data are available at EUR-Lex portal (https://eur-lex.europa.eu) in an unprocessed format.
The documents were downloaded from EUR-Lex portal in HTML format.
The relevant metadata and EUROVOC concepts were downloaded from the SPARQL endpoint of the Publications Office of EU (http://publications.europa.eu/webapi/rdf/sparql).
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
* The original documents are available at EUR-Lex portal (https://eur-lex.europa.eu) in an unprocessed HTML format. The HTML code was stripped and the documents were split into sections.
* The documents have been annotated by the Publications Office of EU (https://publications.europa.eu/en).
#### Who are the annotators?
Publications Office of EU (https://publications.europa.eu/en)
### Personal and Sensitive Information
The dataset does not include personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Chalkidis et al. (2019)
### Licensing Information
© European Union, 1998-2021
The Commission’s document reuse policy is based on Decision 2011/833/EU. Unless otherwise specified, you can re-use the legal documents published in EUR-Lex for commercial or non-commercial purposes.
The copyright for the editorial content of this website, the summaries of EU legislation and the consolidated texts, which is owned by the EU, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://eur-lex.europa.eu/content/legal-notice/legal-notice.html \
Read more: https://eur-lex.europa.eu/content/help/faq/reuse-contents-eurlex.html
### Citation Information
*Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis and Ion Androutsopoulos.*
*Large-Scale Multi-Label Text Classification on EU Legislation.*
*Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019). Florence, Italy. 2019*
```
@inproceedings{chalkidis-etal-2019-large,
title = "Large-Scale Multi-Label Text Classification on {EU} Legislation",
author = "Chalkidis, Ilias and Fergadiotis, Manos and Malakasiotis, Prodromos and Androutsopoulos, Ion",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/P19-1636",
doi = "10.18653/v1/P19-1636",
pages = "6314--6322"
}
```
### Contributions
Thanks to [@iliaschalkidis](https://github.com/iliaschalkidis) for adding this dataset. |
facebook/empathetic_dialogues | facebook | 2024-01-18T11:03:15Z | 2,905 | 101 | [
"task_categories:question-answering",
"task_ids:dialogue-generation",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-nc-4.0",
"size_categories:10K<n<100K",
"arxiv:1811.00207",
"region:us"
] | [
"conversational",
"question-answering"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- crowdsourced
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
pretty_name: EmpatheticDialogues
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- conversational
- question-answering
task_ids:
- dialogue-generation
- open-domain-qa
paperswithcode_id: empatheticdialogues
dataset_info:
features:
- name: conv_id
dtype: string
- name: utterance_idx
dtype: int32
- name: context
dtype: string
- name: prompt
dtype: string
- name: speaker_idx
dtype: int32
- name: utterance
dtype: string
- name: selfeval
dtype: string
- name: tags
dtype: string
splits:
- name: test
num_bytes: 3011332
num_examples: 10943
- name: train
num_bytes: 19040509
num_examples: 76673
- name: validation
num_bytes: 3077481
num_examples: 12030
download_size: 28022709
dataset_size: 25129322
---
# Dataset Card for "empathetic_dialogues"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/facebookresearch/EmpatheticDialogues](https://github.com/facebookresearch/EmpatheticDialogues)
- **Repository:** https://github.com/facebookresearch/EmpatheticDialogues
- **Paper:** [Towards Empathetic Open-domain Conversation Models: a New Benchmark and Dataset](https://arxiv.org/abs/1811.00207)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 28.02 MB
- **Size of the generated dataset:** 25.13 MB
- **Total amount of disk used:** 53.15 MB
### Dataset Summary
EmpatheticDialogues is a dataset of roughly 25k conversations grounded in emotional situations, released alongside the PyTorch original implementation of *Towards Empathetic Open-domain Conversation Models: a New Benchmark and Dataset*.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 28.02 MB
- **Size of the generated dataset:** 25.13 MB
- **Total amount of disk used:** 53.15 MB
An example of 'train' looks as follows.
```
{
"context": "sentimental",
"conv_id": "hit:0_conv:1",
"prompt": "I remember going to the fireworks with my best friend. There was a lot of people_comma_ but it only felt like us in the world.",
"selfeval": "5|5|5_2|2|5",
"speaker_idx": 1,
"tags": "",
"utterance": "I remember going to see the fireworks with my best friend. It was the first time we ever spent time alone together. Although there was a lot of people_comma_ we felt like the only people in the world.",
"utterance_idx": 1
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `conv_id`: a `string` feature.
- `utterance_idx`: an `int32` feature.
- `context`: a `string` feature.
- `prompt`: a `string` feature.
- `speaker_idx`: an `int32` feature.
- `utterance`: a `string` feature.
- `selfeval`: a `string` feature.
- `tags`: a `string` feature.
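Each row is a single utterance, so full conversations have to be reassembled by grouping on `conv_id` and ordering by `utterance_idx`; note that commas in the text are escaped as `_comma_`. A minimal sketch, assuming the dataset loads under the name `empathetic_dialogues`:

```python
from collections import defaultdict

from datasets import load_dataset

ds = load_dataset("empathetic_dialogues", split="train")

# Group the flat utterance rows back into conversations.
conversations = defaultdict(list)
for row in ds:
    conversations[row["conv_id"]].append(row)

for conv_id, rows in conversations.items():
    rows.sort(key=lambda r: r["utterance_idx"])
    # Undo the dataset's comma escaping before use.
    turns = [r["utterance"].replace("_comma_", ",") for r in rows]
```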
### Data Splits
| name |train|validation|test |
|-------|----:|---------:|----:|
|default|76673| 12030|10943|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Creative Commons [Attribution-NonCommercial 4.0 International](https://creativecommons.org/licenses/by-nc/4.0/).
### Citation Information
```
@inproceedings{rashkin-etal-2019-towards,
title = "Towards Empathetic Open-domain Conversation Models: A New Benchmark and Dataset",
author = "Rashkin, Hannah and
Smith, Eric Michael and
Li, Margaret and
Boureau, Y-Lan",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P19-1534",
doi = "10.18653/v1/P19-1534",
pages = "5370--5381",
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset. |
nlpaueb/biomrc | nlpaueb | 2024-01-18T11:02:01Z | 383 | 5 | [
"language:en",
"region:us"
] | [] | 2022-03-02T23:29:22Z | 1 | ---
language:
- en
paperswithcode_id: biomrc
pretty_name: BIOMRC
dataset_info:
- config_name: plain_text
features:
- name: abstract
dtype: string
- name: title
dtype: string
- name: entities_list
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1653301820
num_examples: 700000
- name: validation
num_bytes: 119697683
num_examples: 50000
- name: test
num_bytes: 147832373
num_examples: 62707
download_size: 408080356
dataset_size: 1920831876
- config_name: biomrc_large_A
features:
- name: abstract
dtype: string
- name: title
dtype: string
- name: entities_list
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1653301820
num_examples: 700000
- name: validation
num_bytes: 119697683
num_examples: 50000
- name: test
num_bytes: 147832373
num_examples: 62707
download_size: 408080356
dataset_size: 1920831876
- config_name: biomrc_large_B
features:
- name: abstract
dtype: string
- name: title
dtype: string
- name: entities_list
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1325877001
num_examples: 700000
- name: validation
num_bytes: 96414040
num_examples: 50000
- name: test
num_bytes: 118708586
num_examples: 62707
download_size: 343061539
dataset_size: 1540999627
- config_name: biomrc_small_A
features:
- name: abstract
dtype: string
- name: title
dtype: string
- name: entities_list
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 206553549
num_examples: 87500
- name: validation
num_bytes: 14957163
num_examples: 6250
- name: test
num_bytes: 14807799
num_examples: 6250
download_size: 68879274
dataset_size: 236318511
- config_name: biomrc_small_B
features:
- name: abstract
dtype: string
- name: title
dtype: string
- name: entities_list
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 165662937
num_examples: 87500
- name: validation
num_bytes: 12047304
num_examples: 6250
- name: test
num_bytes: 11911172
num_examples: 6250
download_size: 57706889
dataset_size: 189621413
- config_name: biomrc_tiny_A
features:
- name: abstract
dtype: string
- name: title
dtype: string
- name: entities_list
sequence: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 70914
num_examples: 30
download_size: 22519
dataset_size: 70914
- config_name: biomrc_tiny_B
features:
- name: abstract
dtype: string
- name: title
dtype: string
- name: entities_list
sequence: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 59925
num_examples: 30
download_size: 19685
dataset_size: 59925
---
# Dataset Card for "biomrc"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://nlp.cs.aueb.gr/](http://nlp.cs.aueb.gr/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.29 GB
- **Size of the generated dataset:** 5.81 GB
- **Total amount of disk used:** 7.09 GB
### Dataset Summary
We introduce BIOMRC, a large-scale cloze-style biomedical MRC dataset. Care was taken to reduce noise, compared to the previous BIOREAD dataset of Pappas et al. (2018). Experiments show that simple heuristics do not perform well on the new dataset and that two neural MRC models that had been tested on BIOREAD perform much better on BIOMRC, indicating that the new dataset is indeed less noisy or at least that its task is more feasible. Non-expert human performance is also higher on the new dataset compared to BIOREAD, and biomedical experts perform even better. We also introduce a new BERT-based MRC model, the best version of which substantially outperforms all other methods tested, reaching or surpassing the accuracy of biomedical experts in some experiments. We make the new dataset available in three different sizes, also releasing our code, and providing a leaderboard.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### biomrc_large_A
- **Size of downloaded dataset files:** 408.08 MB
- **Size of the generated dataset:** 1.92 GB
- **Total amount of disk used:** 2.33 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"abstract": "\"OBJECTIVES: @entity9 is a @entity10 that may result from greater occipital nerve entrapment. Entrapped peripheral nerves typica...",
"answer": "@entity9 :: (MESH:D009437,Disease) :: ['unilateral occipital neuralgia']\n",
"entities_list": ["@entity1 :: ('9606', 'Species') :: ['patients']", "@entity10 :: ('MESH:D006261', 'Disease') :: ['headache', 'Headache']", "@entity9 :: ('MESH:D009437', 'Disease') :: ['Occipital neuralgia', 'unilateral occipital neuralgia']"],
"title": "Sonographic evaluation of the greater occipital nerve in XXXX .\n"
}
```
#### biomrc_large_B
- **Size of downloaded dataset files:** 343.06 MB
- **Size of the generated dataset:** 1.54 GB
- **Total amount of disk used:** 1.88 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"abstract": "\"BACKGROUND: Adults with physical disabilities are less likely than others to receive @entity2 screening. It is not known, howev...",
"answer": "@entity2",
"entities_list": ["@entity2", "@entity1", "@entity0", "@entity3"],
"title": "Does a standard measure of self-reported physical disability correlate with clinician perception of impairment related to XXXX screening?\n"
}
```
#### biomrc_small_A
- **Size of downloaded dataset files:** 68.88 MB
- **Size of the generated dataset:** 236.32 MB
- **Total amount of disk used:** 305.20 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"abstract": "\"PURPOSE: @entity120 ( @entity120 ) is a life-limiting @entity102 that presents as an elevated blood pressure in the pulmonary a...",
"answer": "@entity148 :: (MESH:D001008,Disease) :: ['anxiety']\n",
"entities_list": "[\"@entity1 :: ('9606', 'Species') :: ['patients']\", \"@entity308 :: ('MESH:D003866', 'Disease') :: ['depression']\", \"@entity146 :...",
"title": "A predictive model of the effects of @entity308 , XXXX , stress, 6-minute-walk distance, and social support on health-related quality of life in an adult pulmonary hypertension population.\n"
}
```
#### biomrc_small_B
- **Size of downloaded dataset files:** 57.70 MB
- **Size of the generated dataset:** 189.62 MB
- **Total amount of disk used:** 247.33 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"abstract": "\"Single-agent activity for @entity12 reflected by response rates of 10%-30% has been reported in @entity0 with @entity3 ( @entit...",
"answer": "@entity10",
"entities_list": ["@entity0", "@entity6", "@entity2", "@entity5", "@entity12", "@entity11", "@entity1", "@entity7", "@entity9", "@entity10", "@entity3", "@entity4", "@entity8"],
"title": "No synergistic activity of @entity7 and XXXX in the treatment of @entity3 .\n"
}
```
#### biomrc_tiny_A
- **Size of downloaded dataset files:** 0.02 MB
- **Size of the generated dataset:** 0.07 MB
- **Total amount of disk used:** 0.09 MB
An example of 'test' looks as follows.
```
This example was too long and was cropped:
{
"abstract": "\"OBJECTIVE: Decompressive craniectomy (DC) requires later cranioplasty (CP) in survivors. However, if additional ventriculoperit...",
"answer": "@entity260 :: (MESH:D011183,Disease) :: ['Postoperative Complications']\n",
"entities_list": ["@entity1 :: ('9606', 'Species') :: ['Patients', 'patients', 'Patient']", "@entity260 :: ('MESH:D011183', 'Disease') :: ['VPS regarding postoperative complications']", "@entity1276 :: ('MESH:D006849', 'Disease') :: ['hydrocephalus']"],
"title": "Cranioplasty and Ventriculoperitoneal Shunt Placement after Decompressive Craniectomy: Staged Surgery Is Associated with Fewer XXXX .\n"
}
```
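In the `*_A` settings, each `entities_list` string packs a pseudo-identifier, type information, and surface synonyms into one `::`-delimited string, as in the examples above. A small parsing sketch, assuming entries follow the format shown (note that `answer` strings use a slightly different `(MESH:D009437,Disease)` form and would need separate handling):

```python
import ast

def parse_entity(entry: str):
    """Split an entities_list entry from a *_A setting into its parts.

    Example: "@entity1 :: ('9606', 'Species') :: ['patients']"
    """
    pseudo_id, type_part, synonyms_part = (p.strip() for p in entry.split("::"))
    type_id, type_label = ast.literal_eval(type_part)   # e.g. ('9606', 'Species')
    synonyms = ast.literal_eval(synonyms_part)          # e.g. ['patients']
    return pseudo_id, type_id, type_label, synonyms

print(parse_entity("@entity1 :: ('9606', 'Species') :: ['patients']"))
# ('@entity1', '9606', 'Species', ['patients'])
```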
### Data Fields
The data fields are the same among all splits.
#### biomrc_large_A
- `abstract`: a `string` feature.
- `title`: a `string` feature.
- `entities_list`: a `list` of `string` features.
- `answer`: a `string` feature.
#### biomrc_large_B
- `abstract`: a `string` feature.
- `title`: a `string` feature.
- `entities_list`: a `list` of `string` features.
- `answer`: a `string` feature.
#### biomrc_small_A
- `abstract`: a `string` feature.
- `title`: a `string` feature.
- `entities_list`: a `list` of `string` features.
- `answer`: a `string` feature.
#### biomrc_small_B
- `abstract`: a `string` feature.
- `title`: a `string` feature.
- `entities_list`: a `list` of `string` features.
- `answer`: a `string` feature.
#### biomrc_tiny_A
- `abstract`: a `string` feature.
- `title`: a `string` feature.
- `entities_list`: a `list` of `string` features.
- `answer`: a `string` feature.
#### biomrc_tiny_B
- `abstract`: a `string` feature.
- `title`: a `string` feature.
- `entities_list`: a `list` of `string` features.
- `answer`: a `string` feature.
### Data Splits
#### biomrc_large_A
| |train |validation|test |
|--------------|-----:|---------:|----:|
|biomrc_large_A|700000| 50000|62707|
#### biomrc_large_B
| |train |validation|test |
|--------------|-----:|---------:|----:|
|biomrc_large_B|700000| 50000|62707|
#### biomrc_small_A
| |train|validation|test|
|--------------|----:|---------:|---:|
|biomrc_small_A|87500| 6250|6250|
#### biomrc_small_B
| |train|validation|test|
|--------------|----:|---------:|---:|
|biomrc_small_B|87500| 6250|6250|
#### biomrc_tiny_A
|             |test|
|-------------|---:|
|biomrc_tiny_A|  30|
#### biomrc_tiny_B
|             |test|
|-------------|---:|
|biomrc_tiny_B|  30|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{pappas-etal-2020-biomrc,
title = "{B}io{MRC}: A Dataset for Biomedical Machine Reading Comprehension",
author = "Pappas, Dimitris and
Stavropoulos, Petros and
Androutsopoulos, Ion and
McDonald, Ryan",
booktitle = "Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.bionlp-1.15",
pages = "140--149",
abstract = "We introduce BIOMRC, a large-scale cloze-style biomedical MRC dataset. Care was taken to reduce noise, compared to the previous BIOREAD dataset of Pappas et al. (2018). Experiments show that simple heuristics do not perform well on the new dataset and that two neural MRC models that had been tested on BIOREAD perform much better on BIOMRC, indicating that the new dataset is indeed less noisy or at least that its task is more feasible. Non-expert human performance is also higher on the new dataset compared to BIOREAD, and biomedical experts perform even better. We also introduce a new BERT-based MRC model, the best version of which substantially outperforms all other methods tested, reaching or surpassing the accuracy of biomedical experts in some experiments. We make the new dataset available in three different sizes, also releasing our code, and providing a leaderboard.",
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@PetrosStav](https://github.com/PetrosStav), [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
NortheasternUniversity/big_patent | NortheasternUniversity | 2024-01-18T11:01:59Z | 1,129 | 58 | [
"task_categories:summarization",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"arxiv:1906.03741",
"region:us",
"patent-summarization"
] | [
"summarization"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1M<n<10M
source_datasets:
- original
task_categories:
- summarization
task_ids: []
paperswithcode_id: bigpatent
pretty_name: Big Patent
tags:
- patent-summarization
dataset_info:
- config_name: all
features:
- name: description
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 38367048389
num_examples: 1207222
- name: validation
num_bytes: 2115827002
num_examples: 67068
- name: test
num_bytes: 2129505280
num_examples: 67072
download_size: 10142923776
dataset_size: 42612380671
- config_name: a
features:
- name: description
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 5683460620
num_examples: 174134
- name: validation
num_bytes: 313324505
num_examples: 9674
- name: test
num_bytes: 316633277
num_examples: 9675
download_size: 10142923776
dataset_size: 6313418402
- config_name: b
features:
- name: description
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 4236070976
num_examples: 161520
- name: validation
num_bytes: 234425138
num_examples: 8973
- name: test
num_bytes: 231538734
num_examples: 8974
download_size: 10142923776
dataset_size: 4702034848
- config_name: c
features:
- name: description
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 4506249306
num_examples: 101042
- name: validation
num_bytes: 244684775
num_examples: 5613
- name: test
num_bytes: 252566793
num_examples: 5614
download_size: 10142923776
dataset_size: 5003500874
- config_name: d
features:
- name: description
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 264717412
num_examples: 10164
- name: validation
num_bytes: 14560482
num_examples: 565
- name: test
num_bytes: 14403430
num_examples: 565
download_size: 10142923776
dataset_size: 293681324
- config_name: e
features:
- name: description
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 881101433
num_examples: 34443
- name: validation
num_bytes: 48646158
num_examples: 1914
- name: test
num_bytes: 48586429
num_examples: 1914
download_size: 10142923776
dataset_size: 978334020
- config_name: f
features:
- name: description
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 2146383473
num_examples: 85568
- name: validation
num_bytes: 119632631
num_examples: 4754
- name: test
num_bytes: 119596303
num_examples: 4754
download_size: 10142923776
dataset_size: 2385612407
- config_name: g
features:
- name: description
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 8877854206
num_examples: 258935
- name: validation
num_bytes: 492581177
num_examples: 14385
- name: test
num_bytes: 496324853
num_examples: 14386
download_size: 10142923776
dataset_size: 9866760236
- config_name: h
features:
- name: description
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 8075621958
num_examples: 257019
- name: validation
num_bytes: 447602356
num_examples: 14279
- name: test
num_bytes: 445460513
num_examples: 14279
download_size: 10142923776
dataset_size: 8968684827
- config_name: y
features:
- name: description
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 3695589005
num_examples: 124397
- name: validation
num_bytes: 200369780
num_examples: 6911
- name: test
num_bytes: 204394948
num_examples: 6911
download_size: 10142923776
dataset_size: 4100353733
config_names:
- a
- all
- b
- c
- d
- e
- f
- g
- h
- y
---
# Dataset Card for Big Patent
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Big Patent](https://evasharma.github.io/bigpatent/)
- **Repository:**
- **Paper:** [BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization](https://arxiv.org/abs/1906.03741)
- **Leaderboard:**
- **Point of Contact:** [Lu Wang](mailto:[email protected])
### Dataset Summary
BIGPATENT consists of 1.3 million records of U.S. patent documents along with human-written abstractive summaries.
Each US patent application is filed under a Cooperative Patent Classification (CPC) code.
There are nine such classification categories:
- a: Human Necessities
- b: Performing Operations; Transporting
- c: Chemistry; Metallurgy
- d: Textiles; Paper
- e: Fixed Constructions
- f: Mechanical Engineering; Lighting; Heating; Weapons; Blasting
- g: Physics
- h: Electricity
- y: General tagging of new or cross-sectional technology
The current defaults are version 2.1.2 (an update that provides cased raw strings) and the 'all' CPC codes:
```python
from datasets import load_dataset
ds = load_dataset("big_patent") # default is 'all' CPC codes
ds = load_dataset("big_patent", "all") # the same as above
ds = load_dataset("big_patent", "a") # only 'a' CPC codes
ds = load_dataset("big_patent", codes=["a", "b"])
```
To use 1.0.0 version (lower cased tokenized words), pass both parameters `codes` and `version`:
```python
ds = load_dataset("big_patent", codes="all", version="1.0.0")
ds = load_dataset("big_patent", codes="a", version="1.0.0")
ds = load_dataset("big_patent", codes=["a", "b"], version="1.0.0")
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
Each instance contains a pair of `description` and `abstract`. `description` is extracted from the Description section of the Patent while `abstract` is extracted from the Abstract section.
```
{
'description': 'FIELD OF THE INVENTION \n [0001] This invention relates to novel calcium phosphate-coated implantable medical devices and processes of making same. The unique calcium-phosphate coated implantable medical devices minimize...',
'abstract': 'This invention relates to novel calcium phosphate-coated implantable medical devices...'
}
```
### Data Fields
- `description`: detailed description of the patent.
- `abstract`: patent abstract.
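Because the full corpus is large, it can help to gauge document and summary lengths on one of the smaller CPC configurations first. A rough sketch (word counts here are whitespace-based approximations):

```python
from itertools import islice

from datasets import load_dataset

# "d" (Textiles; Paper) is the smallest CPC configuration, so it downloads quickly.
ds = load_dataset("big_patent", "d", split="validation")

for example in islice(ds, 3):
    n_desc = len(example["description"].split())
    n_abs = len(example["abstract"].split())
    print(f"description: {n_desc} words -> abstract: {n_abs} words")
```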
### Data Splits
| | train | validation | test |
|:----|------------------:|-------------:|-------:|
| all | 1207222 | 67068 | 67072 |
| a | 174134 | 9674 | 9675 |
| b | 161520 | 8973 | 8974 |
| c | 101042 | 5613 | 5614 |
| d | 10164 | 565 | 565 |
| e | 34443 | 1914 | 1914 |
| f | 85568 | 4754 | 4754 |
| g | 258935 | 14385 | 14386 |
| h | 257019 | 14279 | 14279 |
| y | 124397 | 6911 | 6911 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```bibtex
@article{DBLP:journals/corr/abs-1906-03741,
author = {Eva Sharma and
Chen Li and
Lu Wang},
title = {{BIGPATENT:} {A} Large-Scale Dataset for Abstractive and Coherent
Summarization},
journal = {CoRR},
volume = {abs/1906.03741},
year = {2019},
url = {http://arxiv.org/abs/1906.03741},
eprinttype = {arXiv},
eprint = {1906.03741},
timestamp = {Wed, 26 Jun 2019 07:14:58 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1906-03741.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@mattbui](https://github.com/mattbui) for adding this dataset. |
ontonotes/conll2012_ontonotesv5 | ontonotes | 2024-01-18T09:34:57Z | 968 | 43 | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"task_ids:coreference-resolution",
"task_ids:parsing",
"task_ids:lemmatization",
"task_ids:word-sense-disambiguation",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:ar",
"language:en",
"language:zh",
"license:cc-by-nc-nd-4.0",
"size_categories:10K<n<100K",
"region:us",
"semantic-role-labeling"
] | [
"token-classification"
] | 2022-03-15T10:48:28Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ar
- en
- zh
license:
- cc-by-nc-nd-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
- part-of-speech
- coreference-resolution
- parsing
- lemmatization
- word-sense-disambiguation
paperswithcode_id: ontonotes-5-0
pretty_name: CoNLL2012 shared task data based on OntoNotes 5.0
tags:
- semantic-role-labeling
dataset_info:
- config_name: english_v4
features:
- name: document_id
dtype: string
- name: sentences
list:
- name: part_id
dtype: int32
- name: words
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': XX
'1': '``'
'2': $
'3': ''''''
'4': ','
'5': -LRB-
'6': -RRB-
'7': .
'8': ':'
'9': ADD
'10': AFX
'11': CC
'12': CD
'13': DT
'14': EX
'15': FW
'16': HYPH
'17': IN
'18': JJ
'19': JJR
'20': JJS
'21': LS
'22': MD
'23': NFP
'24': NN
'25': NNP
'26': NNPS
'27': NNS
'28': PDT
'29': POS
'30': PRP
'31': PRP$
'32': RB
'33': RBR
'34': RBS
'35': RP
'36': SYM
'37': TO
'38': UH
'39': VB
'40': VBD
'41': VBG
'42': VBN
'43': VBP
'44': VBZ
'45': WDT
'46': WP
'47': WP$
'48': WRB
- name: parse_tree
dtype: string
- name: predicate_lemmas
sequence: string
- name: predicate_framenet_ids
sequence: string
- name: word_senses
sequence: float32
- name: speaker
dtype: string
- name: named_entities
sequence:
class_label:
names:
'0': O
'1': B-PERSON
'2': I-PERSON
'3': B-NORP
'4': I-NORP
'5': B-FAC
'6': I-FAC
'7': B-ORG
'8': I-ORG
'9': B-GPE
'10': I-GPE
'11': B-LOC
'12': I-LOC
'13': B-PRODUCT
'14': I-PRODUCT
'15': B-DATE
'16': I-DATE
'17': B-TIME
'18': I-TIME
'19': B-PERCENT
'20': I-PERCENT
'21': B-MONEY
'22': I-MONEY
'23': B-QUANTITY
'24': I-QUANTITY
'25': B-ORDINAL
'26': I-ORDINAL
'27': B-CARDINAL
'28': I-CARDINAL
'29': B-EVENT
'30': I-EVENT
'31': B-WORK_OF_ART
'32': I-WORK_OF_ART
'33': B-LAW
'34': I-LAW
'35': B-LANGUAGE
'36': I-LANGUAGE
- name: srl_frames
list:
- name: verb
dtype: string
- name: frames
sequence: string
- name: coref_spans
sequence:
sequence: int32
length: 3
splits:
- name: train
num_bytes: 112246121
num_examples: 1940
- name: validation
num_bytes: 14116925
num_examples: 222
- name: test
num_bytes: 14709044
num_examples: 222
download_size: 193644139
dataset_size: 141072090
- config_name: chinese_v4
features:
- name: document_id
dtype: string
- name: sentences
list:
- name: part_id
dtype: int32
- name: words
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': X
'1': AD
'2': AS
'3': BA
'4': CC
'5': CD
'6': CS
'7': DEC
'8': DEG
'9': DER
'10': DEV
'11': DT
'12': ETC
'13': FW
'14': IJ
'15': INF
'16': JJ
'17': LB
'18': LC
'19': M
'20': MSP
'21': NN
'22': NR
'23': NT
'24': OD
'25': 'ON'
'26': P
'27': PN
'28': PU
'29': SB
'30': SP
'31': URL
'32': VA
'33': VC
'34': VE
'35': VV
- name: parse_tree
dtype: string
- name: predicate_lemmas
sequence: string
- name: predicate_framenet_ids
sequence: string
- name: word_senses
sequence: float32
- name: speaker
dtype: string
- name: named_entities
sequence:
class_label:
names:
'0': O
'1': B-PERSON
'2': I-PERSON
'3': B-NORP
'4': I-NORP
'5': B-FAC
'6': I-FAC
'7': B-ORG
'8': I-ORG
'9': B-GPE
'10': I-GPE
'11': B-LOC
'12': I-LOC
'13': B-PRODUCT
'14': I-PRODUCT
'15': B-DATE
'16': I-DATE
'17': B-TIME
'18': I-TIME
'19': B-PERCENT
'20': I-PERCENT
'21': B-MONEY
'22': I-MONEY
'23': B-QUANTITY
'24': I-QUANTITY
'25': B-ORDINAL
'26': I-ORDINAL
'27': B-CARDINAL
'28': I-CARDINAL
'29': B-EVENT
'30': I-EVENT
'31': B-WORK_OF_ART
'32': I-WORK_OF_ART
'33': B-LAW
'34': I-LAW
'35': B-LANGUAGE
'36': I-LANGUAGE
- name: srl_frames
list:
- name: verb
dtype: string
- name: frames
sequence: string
- name: coref_spans
sequence:
sequence: int32
length: 3
splits:
- name: train
num_bytes: 77195698
num_examples: 1391
- name: validation
num_bytes: 10828169
num_examples: 172
- name: test
num_bytes: 9585138
num_examples: 166
download_size: 193644139
dataset_size: 97609005
- config_name: arabic_v4
features:
- name: document_id
dtype: string
- name: sentences
list:
- name: part_id
dtype: int32
- name: words
sequence: string
- name: pos_tags
sequence: string
- name: parse_tree
dtype: string
- name: predicate_lemmas
sequence: string
- name: predicate_framenet_ids
sequence: string
- name: word_senses
sequence: float32
- name: speaker
dtype: string
- name: named_entities
sequence:
class_label:
names:
'0': O
'1': B-PERSON
'2': I-PERSON
'3': B-NORP
'4': I-NORP
'5': B-FAC
'6': I-FAC
'7': B-ORG
'8': I-ORG
'9': B-GPE
'10': I-GPE
'11': B-LOC
'12': I-LOC
'13': B-PRODUCT
'14': I-PRODUCT
'15': B-DATE
'16': I-DATE
'17': B-TIME
'18': I-TIME
'19': B-PERCENT
'20': I-PERCENT
'21': B-MONEY
'22': I-MONEY
'23': B-QUANTITY
'24': I-QUANTITY
'25': B-ORDINAL
'26': I-ORDINAL
'27': B-CARDINAL
'28': I-CARDINAL
'29': B-EVENT
'30': I-EVENT
'31': B-WORK_OF_ART
'32': I-WORK_OF_ART
'33': B-LAW
'34': I-LAW
'35': B-LANGUAGE
'36': I-LANGUAGE
- name: srl_frames
list:
- name: verb
dtype: string
- name: frames
sequence: string
- name: coref_spans
sequence:
sequence: int32
length: 3
splits:
- name: train
num_bytes: 42017761
num_examples: 359
- name: validation
num_bytes: 4859292
num_examples: 44
- name: test
num_bytes: 4900664
num_examples: 44
download_size: 193644139
dataset_size: 51777717
- config_name: english_v12
features:
- name: document_id
dtype: string
- name: sentences
list:
- name: part_id
dtype: int32
- name: words
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': XX
'1': '``'
'2': $
'3': ''''''
'4': '*'
'5': ','
'6': -LRB-
'7': -RRB-
'8': .
'9': ':'
'10': ADD
'11': AFX
'12': CC
'13': CD
'14': DT
'15': EX
'16': FW
'17': HYPH
'18': IN
'19': JJ
'20': JJR
'21': JJS
'22': LS
'23': MD
'24': NFP
'25': NN
'26': NNP
'27': NNPS
'28': NNS
'29': PDT
'30': POS
'31': PRP
'32': PRP$
'33': RB
'34': RBR
'35': RBS
'36': RP
'37': SYM
'38': TO
'39': UH
'40': VB
'41': VBD
'42': VBG
'43': VBN
'44': VBP
'45': VBZ
'46': VERB
'47': WDT
'48': WP
'49': WP$
'50': WRB
- name: parse_tree
dtype: string
- name: predicate_lemmas
sequence: string
- name: predicate_framenet_ids
sequence: string
- name: word_senses
sequence: float32
- name: speaker
dtype: string
- name: named_entities
sequence:
class_label:
names:
'0': O
'1': B-PERSON
'2': I-PERSON
'3': B-NORP
'4': I-NORP
'5': B-FAC
'6': I-FAC
'7': B-ORG
'8': I-ORG
'9': B-GPE
'10': I-GPE
'11': B-LOC
'12': I-LOC
'13': B-PRODUCT
'14': I-PRODUCT
'15': B-DATE
'16': I-DATE
'17': B-TIME
'18': I-TIME
'19': B-PERCENT
'20': I-PERCENT
'21': B-MONEY
'22': I-MONEY
'23': B-QUANTITY
'24': I-QUANTITY
'25': B-ORDINAL
'26': I-ORDINAL
'27': B-CARDINAL
'28': I-CARDINAL
'29': B-EVENT
'30': I-EVENT
'31': B-WORK_OF_ART
'32': I-WORK_OF_ART
'33': B-LAW
'34': I-LAW
'35': B-LANGUAGE
'36': I-LANGUAGE
- name: srl_frames
list:
- name: verb
dtype: string
- name: frames
sequence: string
- name: coref_spans
sequence:
sequence: int32
length: 3
splits:
- name: train
num_bytes: 174173192
num_examples: 10539
- name: validation
num_bytes: 24264804
num_examples: 1370
- name: test
num_bytes: 18254144
num_examples: 1200
download_size: 193644139
dataset_size: 216692140
---
# Dataset Card for CoNLL2012 shared task data based on OntoNotes 5.0
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CoNLL-2012 Shared Task](https://conll.cemantix.org/2012/data.html), [Author's page](https://cemantix.org/data/ontonotes.html)
- **Repository:** [Mendeley](https://data.mendeley.com/datasets/zmycy7t9h9)
- **Paper:** [Towards Robust Linguistic Analysis using OntoNotes](https://aclanthology.org/W13-3516/)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
OntoNotes v5.0 is the final version of the OntoNotes corpus, a large-scale, multi-genre,
multilingual corpus manually annotated with syntactic, semantic and discourse information.
This dataset is the extended version of OntoNotes v5.0 used in the CoNLL-2012 shared task.
It includes v4 train/dev and v9 test data for English/Chinese/Arabic and the corrected v12 train/dev/test data (English only).
The source of the data is the Mendeley Data repo [ontonotes-conll2012](https://data.mendeley.com/datasets/zmycy7t9h9), which appears to be the same as the official data, but users should rely on it at their own risk.
See also the summaries on Papers with Code: [OntoNotes 5.0](https://paperswithcode.com/dataset/ontonotes-5-0) and [CoNLL-2012](https://paperswithcode.com/dataset/conll-2012-1)
For more details on the dataset, such as its annotation scheme and tag sets, refer to the documents in the Mendeley repo mentioned above.
### Supported Tasks and Leaderboards
- [Named Entity Recognition on Ontonotes v5 (English)](https://paperswithcode.com/sota/named-entity-recognition-ner-on-ontonotes-v5)
- [Coreference Resolution on OntoNotes](https://paperswithcode.com/sota/coreference-resolution-on-ontonotes)
- [Semantic Role Labeling on OntoNotes](https://paperswithcode.com/sota/semantic-role-labeling-on-ontonotes)
- ...
### Languages
V4 data for Arabic, Chinese, English, and V12 data for English
## Dataset Structure
### Data Instances
```
{'document_id': 'nw/wsj/23/wsj_2311',
 'sentences': [{'part_id': 0,
                'words': ['CONCORDE', 'trans-Atlantic', 'flights', 'are', '$', '2,400', 'to', 'Paris', 'and', '$', '3,200', 'to', 'London', '.'],
                'pos_tags': [25, 18, 27, 43, 2, 12, 17, 25, 11, 2, 12, 17, 25, 7],
                'parse_tree': '(TOP(S(NP (NNP CONCORDE) (JJ trans-Atlantic) (NNS flights) )(VP (VBP are) (NP(NP(NP ($ $) (CD 2,400) )(PP (IN to) (NP (NNP Paris) ))) (CC and) (NP(NP ($ $) (CD 3,200) )(PP (IN to) (NP (NNP London) ))))) (. .) ))',
                'predicate_lemmas': [None, None, None, 'be', None, None, None, None, None, None, None, None, None, None],
                'predicate_framenet_ids': [None, None, None, '01', None, None, None, None, None, None, None, None, None, None],
                'word_senses': [None, None, None, None, None, None, None, None, None, None, None, None, None, None],
                'speaker': None,
                'named_entities': [7, 6, 0, 0, 0, 15, 0, 5, 0, 0, 15, 0, 5, 0],
                'srl_frames': [{'frames': ['B-ARG1', 'I-ARG1', 'I-ARG1', 'B-V', 'B-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'O'],
                                'verb': 'are'}],
                'coref_spans': []},
               {'part_id': 0,
                'words': ['In', 'a', 'Centennial', 'Journal', 'article', 'Oct.', '5', ',', 'the', 'fares', 'were', 'reversed', '.'],
                'pos_tags': [17, 13, 25, 25, 24, 25, 12, 4, 13, 27, 40, 42, 7],
                'parse_tree': '(TOP(S(PP (IN In) (NP (DT a) (NML (NNP Centennial) (NNP Journal) ) (NN article) ))(NP (NNP Oct.) (CD 5) ) (, ,) (NP (DT the) (NNS fares) )(VP (VBD were) (VP (VBN reversed) )) (. .) ))',
                'predicate_lemmas': [None, None, None, None, None, None, None, None, None, None, None, 'reverse', None],
                'predicate_framenet_ids': [None, None, None, None, None, None, None, None, None, None, None, '01', None],
                'word_senses': [None, None, None, None, None, None, None, None, None, None, None, None, None],
                'speaker': None,
                'named_entities': [0, 0, 4, 22, 0, 12, 30, 0, 0, 0, 0, 0, 0],
                'srl_frames': [{'frames': ['B-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'B-ARGM-TMP', 'I-ARGM-TMP', 'O', 'B-ARG1', 'I-ARG1', 'O', 'B-V', 'O'],
                                'verb': 'reversed'}],
                'coref_spans': []}]}
```
### Data Fields
- **`document_id`** (*`str`*): This is a variation on the document filename
- **`sentences`** (*`List[Dict]`*): All sentences of the same document are in a single example for the convenience of concatenating sentences.
Every element in `sentences` is a *`Dict`* composed of the following data fields:
- **`part_id`** (*`int`*) : Some files are divided into multiple parts numbered as 000, 001, 002, ... etc.
- **`words`** (*`List[str]`*) :
- **`pos_tags`** (*`List[ClassLabel]` or `List[str]`*) : This is the Penn-Treebank-style part of speech. When parse information is missing, all parts of speech except the one for which there is some sense or proposition annotation are marked with an XX tag. The verb is marked with just a VERB tag.
  - tag set : Note that the tag sets below were derived by scanning all the data and appear to differ slightly from the officially stated tag sets. See the official documents in the [Mendeley repo](https://data.mendeley.com/datasets/zmycy7t9h9)
  - arabic : str. Because Arabic POS tags are compound and complex, they are hard to represent with `ClassLabel`
- chinese v4 : `datasets.ClassLabel(num_classes=36, names=["X", "AD", "AS", "BA", "CC", "CD", "CS", "DEC", "DEG", "DER", "DEV", "DT", "ETC", "FW", "IJ", "INF", "JJ", "LB", "LC", "M", "MSP", "NN", "NR", "NT", "OD", "ON", "P", "PN", "PU", "SB", "SP", "URL", "VA", "VC", "VE", "VV",])`, where `X` is for pos tag missing
- english v4 : `datasets.ClassLabel(num_classes=49, names=["XX", "``", "$", "''", ",", "-LRB-", "-RRB-", ".", ":", "ADD", "AFX", "CC", "CD", "DT", "EX", "FW", "HYPH", "IN", "JJ", "JJR", "JJS", "LS", "MD", "NFP", "NN", "NNP", "NNPS", "NNS", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "SYM", "TO", "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "WDT", "WP", "WP$", "WRB",])`, where `XX` is for pos tag missing, and `-LRB-`/`-RRB-` is "`(`" / "`)`".
  - english v12 : `datasets.ClassLabel(num_classes=51, names=["XX", "``", "$", "''", "*", ",", "-LRB-", "-RRB-", ".", ":", "ADD", "AFX", "CC", "CD", "DT", "EX", "FW", "HYPH", "IN", "JJ", "JJR", "JJS", "LS", "MD", "NFP", "NN", "NNP", "NNPS", "NNS", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "SYM", "TO", "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "VERB", "WDT", "WP", "WP$", "WRB",])`, where `XX` is for pos tag missing, and `-LRB-`/`-RRB-` is "`(`" / "`)`".
- **`parse_tree`** (*`Optional[str]`*) : A serialized NLTK Tree representing the parse. It includes POS tags as pre-terminal nodes. When the parse information is missing, the parse will be `None`.
- **`predicate_lemmas`** (*`List[Optional[str]]`*) : The predicate lemma of the words for which we have semantic role information or word sense information. All other indices are `None`.
- **`predicate_framenet_ids`** (*`List[Optional[str]]`*) : The PropBank frameset ID of the lemmas in predicate_lemmas, or `None`.
- **`word_senses`** (*`List[Optional[float]]`*) : The word senses for the words in the sentence, or None. These are floats because the word sense can have values after the decimal, like 1.1.
- **`speaker`** (*`Optional[str]`*) : This is the speaker or author name where available. Mostly in Broadcast Conversation and Web Log data. When it is not available, it will be `None`.
- **`named_entities`** (*`List[ClassLabel]`*) : The BIO tags for named entities in the sentence.
- tag set : `datasets.ClassLabel(num_classes=37, names=["O", "B-PERSON", "I-PERSON", "B-NORP", "I-NORP", "B-FAC", "I-FAC", "B-ORG", "I-ORG", "B-GPE", "I-GPE", "B-LOC", "I-LOC", "B-PRODUCT", "I-PRODUCT", "B-DATE", "I-DATE", "B-TIME", "I-TIME", "B-PERCENT", "I-PERCENT", "B-MONEY", "I-MONEY", "B-QUANTITY", "I-QUANTITY", "B-ORDINAL", "I-ORDINAL", "B-CARDINAL", "I-CARDINAL", "B-EVENT", "I-EVENT", "B-WORK_OF_ART", "I-WORK_OF_ART", "B-LAW", "I-LAW", "B-LANGUAGE", "I-LANGUAGE",])`
- **`srl_frames`** (*`List[{"verb": str, "frames": List[str]}]`*) : A list of dictionaries, one per verb in the sentence, giving the PropBank frame labels in BIO format.
- **`coref_spans`** (*`List[List[int]]`*) : The spans for entity mentions involved in coreference resolution within the sentence. Each element is a tuple composed of (cluster_id, start_index, end_index). Indices are inclusive.
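A short sketch of decoding these fields, assuming the dataset loads under the name `conll2012_ontonotesv5`; the NER label list is reconstructed from the tag set above, and parse trees can be read back with NLTK:

```python
from collections import defaultdict

from datasets import load_dataset
from nltk import Tree

# Rebuild the 37-tag NER label list in the order given above (O, then B-/I- per type).
ENT_TYPES = ["PERSON", "NORP", "FAC", "ORG", "GPE", "LOC", "PRODUCT", "DATE", "TIME",
             "PERCENT", "MONEY", "QUANTITY", "ORDINAL", "CARDINAL", "EVENT",
             "WORK_OF_ART", "LAW", "LANGUAGE"]
NER_TAGS = ["O"] + [f"{bio}-{t}" for t in ENT_TYPES for bio in ("B", "I")]

ds = load_dataset("conll2012_ontonotesv5", "english_v4", split="validation")

sentence = ds[0]["sentences"][0]
ner_labels = [NER_TAGS[i] for i in sentence["named_entities"]]

# Group coreference spans by cluster id; start/end indices are inclusive.
clusters = defaultdict(list)
for cluster_id, start, end in sentence["coref_spans"]:
    clusters[cluster_id].append(sentence["words"][start:end + 1])

# The parse_tree field is a serialized NLTK tree (None when parse info is missing).
if sentence["parse_tree"] is not None:
    tree = Tree.fromstring(sentence["parse_tree"])
```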
### Data Splits
Each dataset (arabic_v4, chinese_v4, english_v4, english_v12) has 3 splits: _train_, _validation_, and _test_
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{pradhan-etal-2013-towards,
title = "Towards Robust Linguistic Analysis using {O}nto{N}otes",
author = {Pradhan, Sameer and
Moschitti, Alessandro and
Xue, Nianwen and
Ng, Hwee Tou and
Bj{\"o}rkelund, Anders and
Uryupina, Olga and
Zhang, Yuchen and
Zhong, Zhi},
booktitle = "Proceedings of the Seventeenth Conference on Computational Natural Language Learning",
month = aug,
year = "2013",
address = "Sofia, Bulgaria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W13-3516",
pages = "143--152",
}
```
### Contributions
Thanks to [@richarddwang](https://github.com/richarddwang) for adding this dataset. |
eriktks/conll2002 | eriktks | 2024-01-18T09:33:49Z | 249 | 7 | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:es",
"language:nl",
"license:unknown",
"size_categories:10K<n<100K",
"region:us"
] | [
"token-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- es
- nl
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
- part-of-speech
paperswithcode_id: conll-2002
pretty_name: CoNLL-2002
dataset_info:
- config_name: es
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': AO
'1': AQ
'2': CC
'3': CS
'4': DA
'5': DE
'6': DD
'7': DI
'8': DN
'9': DP
'10': DT
'11': Faa
'12': Fat
'13': Fc
'14': Fd
'15': Fe
'16': Fg
'17': Fh
'18': Fia
'19': Fit
'20': Fp
'21': Fpa
'22': Fpt
'23': Fs
'24': Ft
'25': Fx
'26': Fz
'27': I
'28': NC
'29': NP
'30': P0
'31': PD
'32': PI
'33': PN
'34': PP
'35': PR
'36': PT
'37': PX
'38': RG
'39': RN
'40': SP
'41': VAI
'42': VAM
'43': VAN
'44': VAP
'45': VAS
'46': VMG
'47': VMI
'48': VMM
'49': VMN
'50': VMP
'51': VMS
'52': VSG
'53': VSI
'54': VSM
'55': VSN
'56': VSP
'57': VSS
'58': Y
'59': Z
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 6672173
num_examples: 8324
- name: validation
num_bytes: 1333784
num_examples: 1916
- name: test
num_bytes: 1294156
num_examples: 1518
download_size: 4140690
dataset_size: 9300113
- config_name: nl
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': Adj
'1': Adv
'2': Art
'3': Conj
'4': Int
'5': Misc
'6': N
'7': Num
'8': Prep
'9': Pron
'10': Punc
'11': V
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 5308959
num_examples: 15807
- name: validation
num_bytes: 994298
num_examples: 2896
- name: test
num_bytes: 1808862
num_examples: 5196
download_size: 3642241
dataset_size: 8112119
config_names:
- es
- nl
---
# Dataset Card for CoNLL-2002
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [homepage](https://www.clips.uantwerpen.be/conll2002/ner/)
- **Repository:** [github](https://github.com/teropa/nlp/tree/master/resources/corpora/conll2002)
- **Paper:** [paper](https://www.aclweb.org/anthology/W02-2024/)
- **Point of Contact:** [Erik Tjong Kim Sang](mailto:[email protected])
### Dataset Summary
Named entities are phrases that contain the names of persons, organizations, locations, times and quantities. Example:
[PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .
The shared task of CoNLL-2002 concerns language-independent named entity recognition. We will concentrate on four types of named entities: persons, locations, organizations and names of miscellaneous entities that do not belong to the previous three groups. The participants of the shared task will be offered training and test data for at least two languages. They will use the data for developing a named-entity recognition system that includes a machine learning component. Information sources other than the training data may be used in this shared task. We are especially interested in methods that can use additional unannotated data for improving their performance (for example co-training).
### Supported Tasks and Leaderboards
Named Entity Recognition (NER) is a subtask of Information Extraction. Different NER systems were evaluated as a part of the Sixth Message Understanding Conference in 1995 (MUC6). The target language was English. The participating systems performed well. However, many of them used language-specific resources for performing the task and it is unknown how they would have performed on another language than English.
After 1995 NER systems have been developed for some European languages and a few Asian languages. There have been at least two studies that have applied one NER system to different languages. Palmer and Day [PD97] have used statistical methods for finding named entities in newswire articles in Chinese, English, French, Japanese, Portuguese and Spanish. They found that the difficulty of the NER task was different for the six languages but that a large part of the task could be performed with simple methods. Cucerzan and Yarowsky [CY99] used both morphological and contextual clues for identifying named entities in English, Greek, Hindi, Rumanian and Turkish. With minimal supervision, they obtained overall F measures between 40 and 70, depending on the languages used.
- `named-entity-recognition`: The performance in this task is measured with [F1](https://huggingface.co/metrics/f1) (higher is better). A named entity is correct only if it is an exact match of the corresponding entity in the data.
- `part-of-speech`: The performance in this task is measured with [F1](https://huggingface.co/metrics/f1) (higher is better). A part-of-speech tag is correct only if it is equal to the corresponding tag in the data.
### Languages
There are two languages available: Spanish (es) and Dutch (nl).
## Dataset Structure
### Data Instances
The examples look like this:
```
{'id': '0',
'ner_tags': [5, 6, 0, 0, 0, 0, 3, 0, 0],
'pos_tags': [4, 28, 13, 59, 28, 21, 29, 22, 20],
'tokens': ['La', 'Coruña', ',', '23', 'may', '(', 'EFECOM', ')', '.']
}
```
The original data files of the Dutch sub-dataset contain `-DOCSTART-` lines: `-DOCSTART-` is a special marker that acts as a boundary between two different documents.
These lines are filtered out in this implementation.
### Data Fields
- `id`: id of the sample
- `tokens`: the tokens of the example text
- `ner_tags`: the NER tags of each token
- `pos_tags`: the POS tags of each token
The POS tags correspond to this list for Spanish:
```
'AO', 'AQ', 'CC', 'CS', 'DA', 'DE', 'DD', 'DI', 'DN', 'DP', 'DT', 'Faa', 'Fat', 'Fc', 'Fd', 'Fe', 'Fg', 'Fh', 'Fia', 'Fit', 'Fp', 'Fpa', 'Fpt', 'Fs', 'Ft', 'Fx', 'Fz', 'I', 'NC', 'NP', 'P0', 'PD', 'PI', 'PN', 'PP', 'PR', 'PT', 'PX', 'RG', 'RN', 'SP', 'VAI', 'VAM', 'VAN', 'VAP', 'VAS', 'VMG', 'VMI', 'VMM', 'VMN', 'VMP', 'VMS', 'VSG', 'VSI', 'VSM', 'VSN', 'VSP', 'VSS', 'Y', 'Z'
```
And this list for Dutch:
```
'Adj', 'Adv', 'Art', 'Conj', 'Int', 'Misc', 'N', 'Num', 'Prep', 'Pron', 'Punc', 'V'
```
The NER tags correspond to this list:
```
"O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC",
```
The NER tags have the same format as in the chunking task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and miscellaneous names (MISC).
It is assumed that named entities are non-recursive and non-overlapping. When a named entity is embedded in another named entity, usually only the top-level entity is marked.
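The integer ids in `pos_tags` and `ner_tags` can be mapped back to the string labels above through the feature metadata. A minimal sketch, assuming the dataset loads under the short Hub id `conll2002` (it is also mirrored under `eriktks/conll2002`):

```python
from datasets import load_dataset

ds = load_dataset("conll2002", "es", split="train")

# ClassLabel metadata gives the id -> string mapping for both tag sets.
pos_names = ds.features["pos_tags"].feature.names
ner_names = ds.features["ner_tags"].feature.names

example = ds[0]
for token, pos_id, ner_id in zip(example["tokens"], example["pos_tags"], example["ner_tags"]):
    print(f"{token}\t{pos_names[pos_id]}\t{ner_names[ner_id]}")
```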
### Data Splits
For both configurations (Spanish and Dutch), there are three splits.
The original splits were named `train`, `testa` and `testb` and they correspond to the `train`, `validation` and `test` splits.
The splits have the following sizes:
| | train | validation | test |
| ----- |-------:|------------:|------:|
| N. Examples (Spanish) | 8324 | 1916 | 1518 |
| N. Examples (Dutch) | 15807 | 2896 | 5196 |
## Dataset Creation
### Curation Rationale
The dataset was created to provide new resources for two languages that were under-served for statistical machine learning at the time, Dutch and Spanish.
[More Information Needed]
### Source Data
The Spanish data is a collection of news wire articles made available by the Spanish EFE News Agency. The articles are from May 2000.
The Dutch data consist of four editions of the Belgian newspaper "De Morgen" of 2000 (June 2, July 1, August 1 and September 1).
#### Initial Data Collection and Normalization
The articles were word-tokenized; information on the exact pre-processing pipeline is unavailable.
#### Who are the source language producers?
The source language was produced by journalists and writers employed by the news agency and newspaper mentioned above.
### Annotations
#### Annotation process
For the Dutch data, the annotator has followed the MITRE and SAIC guidelines for named entity recognition (Chinchor et al., 1999) as well as possible.
#### Who are the annotators?
The Spanish data annotation was carried out by the TALP Research Center of the Technical University of Catalonia (UPC) and the Center of Language and Computation (CLiC) of the University of Barcelona (UB).
The Dutch data was annotated as a part of the Atranos project at the University of Antwerp.
### Personal and Sensitive Information
The data is sourced from newspaper text and only contains mentions of public figures or individuals.
## Considerations for Using the Data
### Social Impact of Dataset
Named Entity Recognition systems can be used to efficiently index news text, making it easy to gather all information pertaining to an organization or individual. Making such resources widely available in languages other than English can support better research and user experience for a larger part of the world's population. At the same time, better indexing and discoverability can also enable surveillance by state actors.
### Discussion of Biases
News text reproduces the biases of society, and any system trained on news data should be cognizant of these limitations and the risk for models to learn spurious correlations in this context, for example between a person's gender and their occupation.
### Other Known Limitations
Users should keep in mind that the dataset only contains news text, which might limit the applicability of the developed systems to other domains.
## Additional Information
### Dataset Curators
The annotation of the Spanish data was funded by the European Commission through the NAMIC project (IST-1999-12392).
### Licensing Information
The licensing status of the data, especially the news source text, is unknown.
### Citation Information
```
@inproceedings{tjong-kim-sang-2002-introduction,
title = "Introduction to the {C}o{NLL}-2002 Shared Task: Language-Independent Named Entity Recognition",
author = "Tjong Kim Sang, Erik F.",
booktitle = "{COLING}-02: The 6th Conference on Natural Language Learning 2002 ({C}o{NLL}-2002)",
year = "2002",
url = "https://www.aclweb.org/anthology/W02-2024",
}
```
### Contributions
Thanks to [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
zefang-liu/phishing-email-dataset | zefang-liu | 2024-01-17T23:48:20Z | 476 | 11 | [
"task_categories:text-classification",
"language:en",
"license:lgpl-3.0",
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | 2024-01-17T23:36:31Z | 2 | ---
license: lgpl-3.0
language:
- en
task_categories:
- text-classification
size_categories:
- 10K<n<100K
---
# Phishing Email Dataset
This dataset on Hugging Face is a direct copy of the 'Phishing Email Detection' dataset from Kaggle, shared under the [GNU Lesser General Public License 3.0](https://www.gnu.org/licenses/lgpl-3.0.html). The dataset was originally created by the user '[Cyber Cop](https://www.kaggle.com/subhajournal)' on Kaggle. For complete details, including licensing and usage information, please visit the [original Kaggle page](https://www.kaggle.com/datasets/subhajournal/phishingemails).
|
kchawla123/casino | kchawla123 | 2024-01-16T13:53:39Z | 133 | 5 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:dialogue-modeling",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"conversational",
"text-generation",
"fill-mask"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- conversational
- text-generation
- fill-mask
task_ids:
- dialogue-modeling
paperswithcode_id: casino
pretty_name: Campsite Negotiation Dialogues
dataset_info:
features:
- name: chat_logs
list:
- name: text
dtype: string
- name: task_data
struct:
- name: data
dtype: string
- name: issue2youget
struct:
- name: Firewood
dtype: string
- name: Water
dtype: string
- name: Food
dtype: string
- name: issue2theyget
struct:
- name: Firewood
dtype: string
- name: Water
dtype: string
- name: Food
dtype: string
- name: id
dtype: string
- name: participant_info
struct:
- name: mturk_agent_1
struct:
- name: value2issue
struct:
- name: Low
dtype: string
- name: Medium
dtype: string
- name: High
dtype: string
- name: value2reason
struct:
- name: Low
dtype: string
- name: Medium
dtype: string
- name: High
dtype: string
- name: outcomes
struct:
- name: points_scored
dtype: int32
- name: satisfaction
dtype: string
- name: opponent_likeness
dtype: string
- name: demographics
struct:
- name: age
dtype: int32
- name: gender
dtype: string
- name: ethnicity
dtype: string
- name: education
dtype: string
- name: personality
struct:
- name: svo
dtype: string
- name: big-five
struct:
- name: extraversion
dtype: float32
- name: agreeableness
dtype: float32
- name: conscientiousness
dtype: float32
- name: emotional-stability
dtype: float32
- name: openness-to-experiences
dtype: float32
- name: mturk_agent_2
struct:
- name: value2issue
struct:
- name: Low
dtype: string
- name: Medium
dtype: string
- name: High
dtype: string
- name: value2reason
struct:
- name: Low
dtype: string
- name: Medium
dtype: string
- name: High
dtype: string
- name: outcomes
struct:
- name: points_scored
dtype: int32
- name: satisfaction
dtype: string
- name: opponent_likeness
dtype: string
- name: demographics
struct:
- name: age
dtype: int32
- name: gender
dtype: string
- name: ethnicity
dtype: string
- name: education
dtype: string
- name: personality
struct:
- name: svo
dtype: string
- name: big-five
struct:
- name: extraversion
dtype: float32
- name: agreeableness
dtype: float32
- name: conscientiousness
dtype: float32
- name: emotional-stability
dtype: float32
- name: openness-to-experiences
dtype: float32
- name: annotations
list:
list: string
splits:
- name: train
num_bytes: 3211407
num_examples: 1030
download_size: 1247368
dataset_size: 3211407
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for Casino
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Github: Kushal Chawla CaSiNo](https://github.com/kushalchawla/CaSiNo)
- **Paper:** [CaSiNo: A Corpus of Campsite Negotiation Dialogues for Automatic Negotiation Systems](https://aclanthology.org/2021.naacl-main.254.pdf)
- **Point of Contact:** [Kushal Chawla]([email protected])
### Dataset Summary
We provide a novel dataset (referred to as CaSiNo) of 1030 negotiation dialogues. Two participants take the role of campsite neighbors and negotiate for Food, Water, and Firewood packages, based on their individual preferences and requirements. This design keeps the task tractable, while still facilitating linguistically rich and personal conversations. This helps to overcome the limitations of prior negotiation datasets such as Deal or No Deal and Craigslist Bargain. Each dialogue consists of rich meta-data including participant demographics, personality, and their subjective evaluation of the negotiation in terms of satisfaction and opponent likeness.
### Supported Tasks and Leaderboards
Train end-to-end models for negotiation
### Languages
English
## Dataset Structure
### Data Instances
```
{
"chat_logs": [
{
"text": "Hello! \ud83d\ude42 Let's work together on a deal for these packages, shall we? What are you most interested in?",
"task_data": {},
"id": "mturk_agent_1"
},
...
],
"participant_info": {
"mturk_agent_1":
{
"value2issue": ...
"value2reason": ...
"outcomes": ...
"demographics": ...
"personality": ...
},
"mturk_agent_2": ...
},
"annotations": [
["Hello! \ud83d\ude42 Let's work together on a deal for these packages, shall we? What are you most interested in?", "promote-coordination,elicit-pref"],
...
]
}
```
### Data Fields
- `chat_logs`: The negotiation dialogue between two participants
- `text`: The dialogue utterance
- `task_data`: Meta-data associated with the utterance such as the deal submitted by a participant
- `id`: The ID of the participant who typed this utterance
- `participant_info`: Meta-data about the two participants in this conversation
- `mturk_agent_1`: For the first participant (Note that 'first' is just for reference. There is no order between the participants and any participant can start the conversation)
- `value2issue`: The priority order of this participant among Food, Water, Firewood
- `value2reason`: The personal arguments given by the participants themselves, consistent with the above preference order. This preference order and these arguments were submitted before the negotiation began.
- `outcomes`: The negotiation outcomes for this participant including objective and subjective assessment.
- `demographics`: Demographic attributes of the participant in terms of age, gender, ethnicity, and education.
- `personality`: Personality attributes for this participant, in terms of Big-5 and Social Value Orientation
- `mturk_agent_2`: For the second participant; follows the same structure as above
- `annotations`: Strategy annotations for each utterance in the dialogue, wherever available. The first element represents the utterance and the second represents a comma-separated list of all strategies present in that utterance.
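The nesting described above can be traversed directly once the data is loaded. A minimal sketch, assuming the Hub id `kchawla123/casino`:

```python
from datasets import load_dataset

# All 1030 dialogues live in the single 'train' split.
ds = load_dataset("kchawla123/casino", split="train")

dialogue = ds[0]
# Walk the negotiation turn by turn.
for turn in dialogue["chat_logs"]:
    print(f'{turn["id"]}: {turn["text"]}')

# Subjective and objective outcomes for the first participant.
print(dialogue["participant_info"]["mturk_agent_1"]["outcomes"])
```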
### Data Splits
No default data split has been provided. Hence, all 1030 data points are under the 'train' split.
| | Train |
| ----- | ----- |
| total dialogues | 1030 |
| annotated dialogues | 396 |
## Dataset Creation
### Curation Rationale
The dataset was collected to address the limitations in prior negotiation datasets from the perspective of downstream applications in pedagogy and conversational AI. Please refer to the original paper published at NAACL 2021 for details about the rationale and data curation steps ([source paper](https://aclanthology.org/2021.naacl-main.254.pdf)).
### Source Data
#### Initial Data Collection and Normalization
The dialogues were crowdsourced on Amazon Mechanical Turk. The strategy annotations were performed by expert annotators (first three authors of the paper). Please refer to the original dataset paper published at NAACL 2021 for more details ([source paper](https://aclanthology.org/2021.naacl-main.254.pdf)).
#### Who are the source language producers?
The primary producers are Turkers on Amazon Mechanical Turk platform. Two turkers were randomly paired with each other to engage in a negotiation via a chat interface. Please refer to the original dataset paper published at NAACL 2021 for more details ([source paper](https://aclanthology.org/2021.naacl-main.254.pdf)).
### Annotations
#### Annotation process
From the [source paper](https://aclanthology.org/2021.naacl-main.254.pdf) for this dataset:
>Three expert annotators independently annotated 396 dialogues containing 4615 utterances. The annotation guidelines were iterated over a subset of 5 dialogues, while the reliability scores were computed on a different subset of 10 dialogues. We use the nominal form of Krippendorff’s alpha (Krippendorff, 2018) to measure the inter-annotator agreement. We provide the annotation statistics in Table 2. Although we release all the annotations, we skip Coordination and Empathy for our analysis in this work, due to higher subjectivity resulting in relatively lower reliability scores.
#### Who are the annotators?
Three expert annotators (first three authors of the paper).
### Personal and Sensitive Information
All personally identifiable information about the participants such as MTurk Ids or HIT Ids was removed before releasing the data.
## Considerations for Using the Data
### Social Impact of Dataset
Please refer to Section 8.2 in the [source paper](https://aclanthology.org/2021.naacl-main.254.pdf).
### Discussion of Biases
Please refer to Section 8.2 in the [source paper](https://aclanthology.org/2021.naacl-main.254.pdf).
### Other Known Limitations
Please refer to Section 7 in the [source paper](https://aclanthology.org/2021.naacl-main.254.pdf).
## Additional Information
### Dataset Curators
Corresponding Author: Kushal Chawla (`[email protected]`)\
Affiliation: University of Southern California\
Please refer to the [source paper](https://aclanthology.org/2021.naacl-main.254.pdf) for the complete author list.
### Licensing Information
The project is licensed under CC-by-4.0
### Citation Information
```
@inproceedings{chawla2021casino,
title={CaSiNo: A Corpus of Campsite Negotiation Dialogues for Automatic Negotiation Systems},
author={Chawla, Kushal and Ramirez, Jaysa and Clever, Rene and Lucas, Gale and May, Jonathan and Gratch, Jonathan},
booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
pages={3167--3185},
year={2021}
}
```
### Contributions
Thanks to [Kushal Chawla](https://kushalchawla.github.io/) for adding this dataset. |
dmitva/human_ai_generated_text | dmitva | 2024-01-16T06:25:14Z | 462 | 12 | [
"language:en",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"doi:10.57967/hf/1617",
"region:us",
"nlp",
"human",
"ai",
"text"
] | [] | 2023-12-31T16:56:40Z | 3 | ---
license: cc-by-4.0
language:
- en
tags:
- nlp
- human
- ai
- text
---
### Human or AI-Generated Text
The data can be valuable for educators, policymakers, and researchers interested in the evolving education landscape, particularly for detecting or identifying texts written by humans or by artificial intelligence systems.
#### File Name
`model_training_dataset.csv`
#### File Structure
- `id`: Unique identifier for each record.
- `human_text`: Human-written content.
- `ai_text`: AI-generated texts.
- `instructions`: Description of the task given to both Humans and AI.
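A minimal loading sketch for the file described above; the column names follow the file structure list, and the download call assumes the CSV sits at the root of this dataset repository:

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Assumption: the CSV is stored at the repository root under this name.
path = hf_hub_download(
    repo_id="dmitva/human_ai_generated_text",
    filename="model_training_dataset.csv",
    repo_type="dataset",
)
df = pd.read_csv(path)
print(df.columns.tolist())  # expected: ['id', 'human_text', 'ai_text', 'instructions']
```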
#### Acknowledgement
Thanks to [0xnu](https://finbarrs.eu/) for sharing the file upon request.
#### Citation
To reference this dataset in academic work, please use the following citation:
```bibtex
@article{abiodunfinbarrsoketunji-agtd2023,
title={Evaluating the Efficacy of Hybrid Deep Learning Models in Distinguishing AI-Generated Text},
author={Abiodun Finbarrs Oketunji},
journal={arXiv:2311.15565v2},
year={2023}
}
``` |
jondurbin/gutenberg-dpo-v0.1 | jondurbin | 2024-01-12T13:05:37Z | 619 | 140 | [
"language:en",
"license:cc-by-4.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"dpo"
] | [] | 2024-01-11T13:15:41Z | null | ---
license: cc-by-4.0
language:
- en
tags:
- dpo
pretty_name: Gutenberg DPO
size_categories:
- n<1K
---
# Gutenberg DPO

## Overview
This is a dataset meant to enhance the novel-writing capabilities of LLMs, using public domain books from [Project Gutenberg](https://gutenberg.org/).
## Process
First, each book is parsed, split into chapters, and cleaned up from the original format (removing superfluous newlines, illustration tags, etc.).
Once we have chapters, an LLM is prompted with each chapter to create a synthetic prompt that would result in that chapter being written.
Each chapter also has a summary created, so that the prompt for each chapter after the first also includes a summary of the previous chapter as additional context.
We then use the synthetic prompt, together with the previous chapter summary, to write the chapter with an LLM (llama-2-13b-chat, bagel-7b-v0.1, dolphin-2.2-34b).
The human-written text, that is, the original chapter, is used as the "chosen" value, and the LLM-written chapter is used as the "rejected" value.
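Put together, each record pairs the original chapter against the model-written one under the same prompt. A schematic sketch of how one such pair is assembled; the `prompt`/`chosen`/`rejected` field names are the usual DPO convention, not a guarantee of this dataset's exact schema:

```python
# Illustrative only: assembles one DPO record from the pieces described above.
def build_dpo_example(synthetic_prompt, prev_chapter_summary,
                      human_chapter, llm_chapter):
    # The prompt combines the previous chapter's summary (for context)
    # with the synthetic writing prompt generated from the chapter itself.
    full_prompt = f"{prev_chapter_summary}\n\n{synthetic_prompt}"
    return {
        "prompt": full_prompt,
        "chosen": human_chapter,   # the original, human-written chapter
        "rejected": llm_chapter,   # the chapter written by the LLM
    }
```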
## Books used
These books were chosen mainly because they appeared in the popular section on Project Gutenberg, and they work correctly with the chapterize library.
- Huckleberry Finn
- Treasure Island
- Anna Karenina
- Uncle Tom’s Cabin
- Wuthering Heights
- Madame Bovary
- The Turn of the Screw
- The War of the Worlds
- A Study in Scarlet
- Middlemarch
- Pride and Prejudice
- The Brothers Karamazov
- Through the Looking Glass
- Moby Dick
- Frankenstein
- A Tale of Two Cities |
jondurbin/truthy-dpo-v0.1 | jondurbin | 2024-01-11T10:19:14Z | 293 | 134 | [
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-12-11T15:34:04Z | null | ---
license: cc-by-4.0
---
## Truthy DPO
This is a dataset designed to enhance the overall truthfulness of LLMs, without sacrificing immersion when roleplaying as a human.
For example, as a normal AI assistant, the model should not try to describe what the warmth of the sun feels like, but if the system prompt indicates it's a human, it should.
Mostly targets corporeal, spatial, and temporal awareness, and common misconceptions.
### Contribute
If you're interested in new functionality/datasets, take a look at [bagel repo](https://github.com/jondurbin/bagel) and [airoboros](https://github.com/jondurbin/airoboros) and either make a PR or open an issue with details.
To help me with the fine-tuning costs, dataset generation, etc., please use one of the following:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf |
defunct-datasets/eli5 | defunct-datasets | 2024-01-11T09:32:33Z | 611 | 50 | [
"task_categories:text2text-generation",
"task_ids:abstractive-qa",
"task_ids:open-domain-abstractive-qa",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:100K<n<1M",
"arxiv:1907.09190",
"arxiv:1904.04047",
"region:us"
] | [
"text2text-generation"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text2text-generation
task_ids:
- abstractive-qa
- open-domain-abstractive-qa
paperswithcode_id: eli5
pretty_name: ELI5
viewer: false
dataset_info:
features:
- name: q_id
dtype: string
- name: title
dtype: string
- name: selftext
dtype: string
- name: document
dtype: string
- name: subreddit
dtype: string
- name: answers
sequence:
- name: a_id
dtype: string
- name: text
dtype: string
- name: score
dtype: int32
- name: title_urls
sequence:
- name: url
dtype: string
- name: selftext_urls
sequence:
- name: url
dtype: string
- name: answers_urls
sequence:
- name: url
dtype: string
config_name: LFQA_reddit
splits:
- name: train_eli5
num_bytes: 577188173
num_examples: 272634
- name: validation_eli5
num_bytes: 21117891
num_examples: 9812
- name: test_eli5
num_bytes: 53099796
num_examples: 24512
- name: train_asks
num_bytes: 286464210
num_examples: 131778
- name: validation_asks
num_bytes: 9662481
num_examples: 2281
- name: test_asks
num_bytes: 17713920
num_examples: 4462
- name: train_askh
num_bytes: 330483260
num_examples: 98525
- name: validation_askh
num_bytes: 18690845
num_examples: 4901
- name: test_askh
num_bytes: 36246784
num_examples: 9764
download_size: 6326543
dataset_size: 1350667360
---
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
<p><b>Defunct:</b> Dataset "eli5" is defunct and no longer accessible due to unavailability of the source data.</p>
</div>
## <span style="color:red">⚠️ Reddit recently [changed the terms of access](https://www.reddit.com/r/reddit/comments/12qwagm/an_update_regarding_reddits_api/) to its API, making the source data for this dataset unavailable</span>.
# Dataset Card for ELI5
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [ELI5 homepage](https://facebookresearch.github.io/ELI5/explore.html)
- **Repository:** [ELI5 repository](https://github.com/facebookresearch/ELI5)
- **Paper:** [ELI5: Long Form Question Answering](https://arxiv.org/abs/1907.09190)
- **Point of Contact:** [Yacine Jernite](mailto:[email protected])
### Dataset Summary
The ELI5 dataset is an English-language dataset of questions and answers gathered from three subreddits where users ask factual questions requiring paragraph-length or longer answers. The dataset was created to support the task of open-domain long form abstractive question answering, and covers questions about general topics in its [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subset, science in its [r/askscience](https://www.reddit.com/r/askscience/) subset, and history in its [r/AskHistorians](https://www.reddit.com/r/AskHistorians/) subset.
### Supported Tasks and Leaderboards
- `abstractive-qa`, `open-domain-abstractive-qa`: The dataset can be used to train a model for Open Domain Long Form Question Answering. An LFQA model is presented with a non-factoid question and asked to retrieve relevant information from a knowledge source (such as [Wikipedia](https://www.wikipedia.org/)), then use it to generate a multi-sentence answer. The model performance is measured by how high its [ROUGE](https://huggingface.co/metrics/rouge) score to the reference is. A [BART-based model](https://huggingface.co/yjernite/bart_eli5) with a [dense retriever](https://huggingface.co/yjernite/retribert-base-uncased) trained to draw information from [Wikipedia passages](https://huggingface.co/datasets/wiki_snippets) achieves a [ROUGE-L of 0.149](https://yjernite.github.io/lfqa.html#generation).
### Languages
The text in the dataset is in English, as spoken by Reddit users on the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/), [r/askscience](https://www.reddit.com/r/askscience/), and [r/AskHistorians](https://www.reddit.com/r/AskHistorians/) subreddits. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
A typical data point comprises a question, with a `title` containing the main question and a `selftext` which sometimes elaborates on it, and a list of answers from the forum sorted by the number of upvotes they obtained. Additionally, the URLs in each of the text fields have been extracted to respective lists and replaced by generic tokens in the text.
An example from the ELI5 test set looks as follows:
```
{'q_id': '8houtx',
'title': 'Why does water heated to room temperature feel colder than the air around it?',
'selftext': '',
'document': '',
'subreddit': 'explainlikeimfive',
'answers': {'a_id': ['dylcnfk', 'dylcj49'],
'text': ["Water transfers heat more efficiently than air. When something feels cold it's because heat is being transferred from your skin to whatever you're touching. Since water absorbs the heat more readily than air, it feels colder.",
"Air isn't as good at transferring heat compared to something like water or steel (sit on a room temperature steel bench vs. a room temperature wooden bench, and the steel one will feel more cold).\n\nWhen you feel cold, what you're feeling is heat being transferred out of you. If there is no breeze, you feel a certain way. If there's a breeze, you will get colder faster (because the moving air is pulling the heat away from you), and if you get into water, its quite good at pulling heat from you. Get out of the water and have a breeze blow on you while you're wet, all of the water starts evaporating, pulling even more heat from you."],
'score': [5, 2]},
'title_urls': {'url': []},
'selftext_urls': {'url': []},
'answers_urls': {'url': []}}
```
### Data Fields
- `q_id`: a string question identifier for each example, corresponding to its ID in the [Pushshift.io](https://files.pushshift.io/reddit/submissions/) Reddit submission dumps.
- `subreddit`: One of `explainlikeimfive`, `askscience`, or `AskHistorians`, indicating which subreddit the question came from
- `title`: title of the question, with URLs extracted and replaced by `URL_n` tokens
- `title_urls`: list of the extracted URLs, the `n`th element of the list was replaced by `URL_n`
- `selftext`: either an empty string or an elaboration of the question
- `selftext_urls`: similar to `title_urls` but for `self_text`
- `answers`: a list of answers, each answer has:
- `a_id`: a string answer identifier for each answer, corresponding to its ID in the [Pushshift.io](https://files.pushshift.io/reddit/comments/) Reddit comments dumps.
- `text`: the answer text with the URLs normalized
- `score`: the number of upvotes the answer had received when the dumps were created
- `answers_urls`: a list of the extracted URLs. All answers use the same list; the numbering of the normalization tokens continues across answer texts
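Because the URLs are kept in parallel lists, the normalization is mechanically reversible. A small sketch of that inverse mapping (illustrative only, since the source data is no longer downloadable; the exact placeholder spelling, e.g. `URL_0` vs `_URL_0_`, and whether numbering starts at 0 or 1 should be checked against the data):

```python
import re

def restore_urls(text, urls):
    """Replace URL_n placeholder tokens with the n-th extracted URL.

    Assumes 0-based numbering; shift the index by one if the data
    turns out to use 1-based tokens.
    """
    def repl(match):
        index = int(match.group(1))
        return urls[index] if index < len(urls) else match.group(0)
    # The pattern tolerates both bare and underscore-wrapped placeholders.
    return re.sub(r"_?URL_(\d+)_?", repl, text)

print(restore_urls("See URL_0 for details.", ["https://example.com"]))
```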
### Data Splits
The data is split into a training, validation and test set for each of the three subreddits. In order to avoid having duplicate questions across sets, the `title` field of each of the questions was ranked by its tf-idf match to its nearest neighbor, and the ones with the smallest values were used in the test and validation sets. The final split sizes are as follows:
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| r/explainlikeimfive examples| 272634 | 9812 | 24512|
| r/askscience examples | 131778 | 2281 | 4462 |
| r/AskHistorians examples | 98525 | 4901 | 9764 |
## Dataset Creation
### Curation Rationale
ELI5 was built to provide a testbed for machines to learn how to answer more complex questions, which requires them to find and combine information in a coherent manner. The dataset was built by gathering questions that were asked by community members of three subreddits, including [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/), along with the answers that were provided by other users. The [rules of the subreddit](https://www.reddit.com/r/explainlikeimfive/wiki/detailed_rules) make this data particularly well suited to training a model for abstractive question answering: the questions need to seek an objective explanation about well established facts, and the answers provided need to be understandable to a layperson without any particular knowledge domain.
### Source Data
#### Initial Data Collection and Normalization
The data was obtained by filtering submissions and comments from the subreddits of interest from the XML dumps of the [Reddit forum](https://www.reddit.com/) hosted on [Pushshift.io](https://files.pushshift.io/reddit/).
In order to further improve the quality of the selected examples, only questions with a score of at least 2 and at least one answer with a score of at least 2 were selected for the dataset. The dataset questions and answers span a period from August 2012 to August 2019.
#### Who are the source language producers?
The language producers are users of the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/), [r/askscience](https://www.reddit.com/r/askscience/), and [r/AskHistorians](https://www.reddit.com/r/AskHistorians/) subreddits between 2012 and 2019. No further demographic information was available from the data source.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
The authors removed the speaker IDs from the [Pushshift.io](https://files.pushshift.io/reddit/) dumps but did not otherwise anonymize the data. Some of the questions and answers are about contemporary public figures or individuals who appeared in the news.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop better question answering systems.
A system that succeeds at the supported task would be able to provide a coherent answer to even complex questions requiring a multi-step explanation, which is beyond the ability of even the larger existing models. The task is also thought of as a test-bed for retrieval models, which can show users which source text was used in generating the answer and allow them to confirm the information provided to them.
It should be noted, however, that the provided answers were written by Reddit users, a fact which may be lost if models trained on this data are deployed in downstream applications and presented to users without context. The specific biases this may introduce are discussed in the next section.
### Discussion of Biases
While Reddit hosts a number of thriving communities with high quality discussions, it is also widely known to have corners where sexism, hate, and harassment are significant issues. See for example the [recent post from Reddit founder u/spez](https://www.reddit.com/r/announcements/comments/gxas21/upcoming_changes_to_our_content_policy_our_board/) outlining some of the ways he thinks the website's historical policies have been responsible for this problem, [Adrienne Massanari's 2015 article on GamerGate](https://www.researchgate.net/publication/283848479_Gamergate_and_The_Fappening_How_Reddit's_algorithm_governance_and_culture_support_toxic_technocultures) and follow-up works, or a [2019 Wired article on misogyny on Reddit](https://www.wired.com/story/misogyny-reddit-research/).
While there has been some recent work in the NLP community on *de-biasing* models (e.g. [Black is to Criminal as Caucasian is to Police: Detecting and Removing Multiclass Bias in Word Embeddings](https://arxiv.org/abs/1904.04047) for word embeddings trained specifically on Reddit data), this problem is far from solved, and the likelihood that a trained model might learn the biases present in the data remains a significant concern.
We still note some encouraging signs for all of these communities: [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) and [r/askscience](https://www.reddit.com/r/askscience/) have similar structures and purposes, and [r/askscience](https://www.reddit.com/r/askscience/) was found in 2015 to show medium supportiveness and very low toxicity when compared to other subreddits (see a [hackerfall post](https://hackerfall.com/story/study-and-interactive-visualization-of-toxicity-in), [thecut.com write-up](https://www.thecut.com/2015/03/interactive-chart-of-reddits-toxicity.html) and supporting [data](https://chart-studio.plotly.com/~bsbell21/210/toxicity-vs-supportiveness-by-subreddit/#data)). Meanwhile, the [r/AskHistorians rules](https://www.reddit.com/r/AskHistorians/wiki/rules) mention that the admins will not tolerate "_racism, sexism, or any other forms of bigotry_". However, further analysis of whether and to what extent these rules reduce toxicity is still needed.
We also note that, given the audience of the Reddit website, which is mostly used in the US and Europe, the answers will likely present a Western perspective, which is particularly important to note when dealing with historical topics.
### Other Known Limitations
The answers provided in the dataset represent the opinions of Reddit users. While these communities strive to be helpful, they should not be considered to represent a ground truth.
## Additional Information
### Dataset Curators
The dataset was initially created by Angela Fan, Ethan Perez, Yacine Jernite, Jason Weston, Michael Auli, and David Grangier, during work done at Facebook AI Research (FAIR).
### Licensing Information
The licensing status of the dataset hinges on the legal status of the [Pushshift.io](https://files.pushshift.io/reddit/) data which is unclear.
### Citation Information
```
@inproceedings{eli5_lfqa,
author = {Angela Fan and
Yacine Jernite and
Ethan Perez and
David Grangier and
Jason Weston and
Michael Auli},
editor = {Anna Korhonen and
David R. Traum and
Llu{\'{\i}}s M{\`{a}}rquez},
title = {{ELI5:} Long Form Question Answering},
booktitle = {Proceedings of the 57th Conference of the Association for Computational
Linguistics, {ACL} 2019, Florence, Italy, July 28- August 2, 2019,
Volume 1: Long Papers},
pages = {3558--3567},
publisher = {Association for Computational Linguistics},
year = {2019},
url = {https://doi.org/10.18653/v1/p19-1346},
doi = {10.18653/v1/p19-1346}
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@lhoestq](https://github.com/lhoestq), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@yjernite](https://github.com/yjernite) for adding this dataset. |
ParlAI/blended_skill_talk | ParlAI | 2024-01-10T10:22:26Z | 1,634 | 69 | [
"task_ids:dialogue-generation",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2004.08449",
"region:us"
] | [
"conversational"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- conversational
task_ids:
- dialogue-generation
paperswithcode_id: blended-skill-talk
pretty_name: BlendedSkillTalk
dataset_info:
features:
- name: personas
sequence: string
- name: additional_context
dtype: string
- name: previous_utterance
sequence: string
- name: context
dtype: string
- name: free_messages
sequence: string
- name: guided_messages
sequence: string
- name: suggestions
sequence:
- name: convai2
dtype: string
- name: empathetic_dialogues
dtype: string
- name: wizard_of_wikipedia
dtype: string
- name: guided_chosen_suggestions
sequence: string
- name: label_candidates
sequence:
sequence: string
splits:
- name: train
num_bytes: 10830670
num_examples: 4819
- name: validation
num_bytes: 43961447
num_examples: 1009
- name: test
num_bytes: 44449895
num_examples: 980
download_size: 10897644
dataset_size: 99242012
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Dataset Card for "blended_skill_talk"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://parl.ai/projects/bst/](https://parl.ai/projects/bst/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Can You Put it All Together: Evaluating Conversational Agents' Ability to Blend Skills](https://arxiv.org/abs/2004.08449v1)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 38.11 MB
- **Size of the generated dataset:** 15.08 MB
- **Total amount of disk used:** 53.17 MB
### Dataset Summary
A dataset of 7k conversations explicitly designed to exhibit multiple conversation modes: displaying personality, having empathy, and demonstrating knowledge.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 38.11 MB
- **Size of the generated dataset:** 15.08 MB
- **Total amount of disk used:** 53.17 MB
An example of 'train' looks as follows.
```
{
'personas': ['my parents don t really speak english , but i speak italian and english.', 'i have three children.'],
'additional_context': 'Backstreet Boys',
'previous_utterance': ['Oh, I am a BIG fan of the Backstreet Boys! Have you ever seen them performing live?', "No,I listen to their music a lot, mainly the unbreakable which is the Backstreet Boys' sixth studio album. "],
'context': 'wizard_of_wikipedia',
'free_messages': ['you are very knowledgeable, do you prefer nsync or bsb?', "haha kids of this days don't know them, i'm 46 and i still enjoying them, my kids only listen k-pop", "italian?haha that's strange, i only talk english and a little spanish "],
'guided_messages': ["i don't have a preference, they are both great. All 3 of my kids get annoyed when I listen to them though.", 'Sometimes I sing their songs in Italian, that really annoys them lol.', 'My parents barely speak English, so I was taught both. By the way, what is k-pop?'],
'suggestions': {'convai2': ["i don't have a preference , both are pretty . do you have any hobbies ?", "do they the backstreet boys ? that's my favorite group .", 'are your kids interested in music ?'], 'empathetic_dialogues': ['I actually just discovered Imagine Dragons. I love them!', "Hahaha that just goes to show ya, age is just a umber!'", 'That would be hard! Do you now Spanish well?'], 'wizard_of_wikipedia': ['NSYNC Also had Lance Bass and Joey Fatone, sometimes called the Fat One.', 'Yes, there are a few K-Pop songs that I have heard good big in the USA. It is the most popular in South Korea and has Western elements of pop.', 'English, beleive it or not.']},
'guided_chosen_suggestions': ['convai2', '', ''],
'label_candidates': []}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `personas`: a `list` of `string` features.
- `additional_context`: a `string` feature.
- `previous_utterance`: a `list` of `string` features.
- `context`: a `string` feature.
- `free_messages`: a `list` of `string` features.
- `guided_messages`: a `list` of `string` features.
- `suggestions`: a dictionary feature containing:
- `convai2`: a `string` feature.
- `empathetic_dialogues`: a `string` feature.
- `wizard_of_wikipedia`: a `string` feature.
- `guided_chosen_suggestions`: a `list` of `string` features.
- `label_candidates`: a `list` of `lists` of `string` features.
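A minimal sketch of reading one conversation, assuming the short Hub id `blended_skill_talk` (the dataset is also mirrored under the `ParlAI` namespace):

```python
from datasets import load_dataset

ds = load_dataset("blended_skill_talk", split="train")

conv = ds[0]
print("Personas:", conv["personas"])
print("Grounding:", conv["context"], "-", conv["additional_context"])

# Free and guided turns alternate between the two crowdworkers.
for free, guided in zip(conv["free_messages"], conv["guided_messages"]):
    print("U1:", free)
    print("U2:", guided)
```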
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default| 4819| 1009| 980|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@misc{smith2020evaluating,
title={Can You Put it All Together: Evaluating Conversational Agents' Ability to Blend Skills},
author={Eric Michael Smith and Mary Williamson and Kurt Shuster and Jason Weston and Y-Lan Boureau},
year={2020},
eprint={2004.08449},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. |
maywell/korean_textbooks | maywell | 2024-01-10T09:21:36Z | 3,240 | 113 | [
"language:ko",
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2306.11644",
"region:us"
] | [] | 2023-12-27T23:13:45Z | null | ---
language:
- ko
license: apache-2.0
size_categories:
- 1M<n<10M
pretty_name: 대규모 한국어 Synthetic 데이터
dataset_info:
- config_name: claude_evol
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 992896186
num_examples: 239102
download_size: 380188122
dataset_size: 992896186
- config_name: code-alpaca
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 273836723
num_examples: 64112
download_size: 100817441
dataset_size: 273836723
- config_name: helpsteer
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 101753037
num_examples: 25253
download_size: 38660919
dataset_size: 101753037
- config_name: ko_wikidata
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 527306289
num_examples: 127614
download_size: 197029339
dataset_size: 527306289
- config_name: mmlu_abstract_algebra
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 369008992
num_examples: 88848
download_size: 135822870
dataset_size: 369008992
- config_name: mmlu_all
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 406126621
num_examples: 97765
download_size: 149486712
dataset_size: 406126621
- config_name: mmlu_anatomy
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 404317465
num_examples: 97463
download_size: 148806011
dataset_size: 404317465
- config_name: mmlu_astronomy
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 404137638
num_examples: 97347
download_size: 148705490
dataset_size: 404137638
- config_name: mmlu_business_ethics
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 404250245
num_examples: 97327
download_size: 148763276
dataset_size: 404250245
- config_name: mmlu_clinical_knowledge
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 403659005
num_examples: 97226
download_size: 148688069
dataset_size: 403659005
- config_name: mmlu_college_biology
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 404028634
num_examples: 97285
download_size: 148722802
dataset_size: 404028634
- config_name: mmlu_college_chemistry
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 404667385
num_examples: 97435
download_size: 148855223
dataset_size: 404667385
- config_name: mmlu_college_computer_science
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 385176880
num_examples: 92606
download_size: 141868873
dataset_size: 385176880
- config_name: mmlu_college_mathematics
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 390603751
num_examples: 94070
download_size: 143833823
dataset_size: 390603751
- config_name: mmlu_college_medicine
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 395144479
num_examples: 95156
download_size: 145271248
dataset_size: 395144479
- config_name: mmlu_college_physics
features:
- name: '0'
dtype: string
splits:
- name: train
num_bytes: 404906114
num_examples: 97452
download_size: 148870088
dataset_size: 404906114
- config_name: mmlu_computer_security
features:
- name: '0'
dtype: string
splits:
- name: train
num_bytes: 403699674
num_examples: 97212
download_size: 148755211
dataset_size: 403699674
- config_name: mmlu_conceptual_physics
features:
- name: '0'
dtype: string
splits:
- name: train
num_bytes: 366231421
num_examples: 88216
download_size: 134989933
dataset_size: 366231421
- config_name: mmlu_econometrics
features:
- name: '0'
dtype: string
splits:
- name: train
num_bytes: 380851762
num_examples: 91854
download_size: 140295665
dataset_size: 380851762
- config_name: mmlu_electrical_engineering
features:
- name: '0'
dtype: string
splits:
- name: train
num_bytes: 364564129
num_examples: 87826
download_size: 134376902
dataset_size: 364564129
- config_name: mmlu_elementary_mathematics
features:
- name: '0'
dtype: string
splits:
- name: train
num_bytes: 371101672
num_examples: 89307
download_size: 136622044
dataset_size: 371101672
- config_name: mmlu_formal_logic
features:
- name: '0'
dtype: string
splits:
- name: train
num_bytes: 395937096
num_examples: 95483
download_size: 145736493
dataset_size: 395937096
- config_name: mmlu_global_facts
features:
- name: '0'
dtype: string
splits:
- name: train
num_bytes: 394596084
num_examples: 94984
download_size: 145284966
dataset_size: 394596084
- config_name: mmlu_high_school_biology
features:
- name: '0'
dtype: string
splits:
- name: train
num_bytes: 402382699
num_examples: 97117
download_size: 148038235
dataset_size: 402382699
- config_name: mmlu_high_school_chemistry
features:
- name: '0'
dtype: string
splits:
- name: train
num_bytes: 402886667
num_examples: 96907
download_size: 148323317
dataset_size: 402886667
- config_name: mmlu_high_school_computer_science
features:
- name: '0'
dtype: string
splits:
- name: train
num_bytes: 403966380
num_examples: 97351
download_size: 148666121
dataset_size: 403966380
- config_name: mmlu_high_school_european_history
features:
- name: '0'
dtype: string
splits:
- name: train
num_bytes: 403671884
num_examples: 97222
download_size: 148454177
dataset_size: 403671884
- config_name: mmlu_high_school_geography
features:
- name: '0'
dtype: string
splits:
- name: train
num_bytes: 404040602
num_examples: 97261
download_size: 148657890
dataset_size: 404040602
- config_name: mmlu_high_school_government_and_politics
features:
- name: '0'
dtype: string
splits:
- name: train
num_bytes: 403990139
num_examples: 97311
download_size: 148568388
dataset_size: 403990139
- config_name: mmlu_high_school_macroeconomics
features:
- name: '0'
dtype: string
splits:
- name: train
num_bytes: 404170166
num_examples: 97400
download_size: 148591243
dataset_size: 404170166
- config_name: mmlu_high_school_mathematics
features:
- name: '0'
dtype: string
splits:
- name: train
num_bytes: 404846407
num_examples: 97396
download_size: 149076619
dataset_size: 404846407
- config_name: mmlu_high_school_microeconomics
features:
- name: '0'
dtype: string
splits:
- name: train
num_bytes: 404613760
num_examples: 97435
download_size: 148970422
dataset_size: 404613760
- config_name: mmlu_high_school_physics
features:
- name: '0'
dtype: string
splits:
- name: train
num_bytes: 397678253
num_examples: 95740
download_size: 146340167
dataset_size: 397678253
- config_name: mmlu_high_school_psychology
features:
- name: '0'
dtype: string
splits:
- name: train
num_bytes: 334767526
num_examples: 80626
download_size: 123054403
dataset_size: 334767526
- config_name: mmlu_high_school_statistics
features:
- name: '0'
dtype: string
splits:
- name: train
num_bytes: 315209112
num_examples: 76033
download_size: 115876698
dataset_size: 315209112
- config_name: mmlu_high_school_us_history
features:
- name: '0'
dtype: string
splits:
- name: train
num_bytes: 329179309
num_examples: 79322
download_size: 120972668
dataset_size: 329179309
- config_name: mmlu_high_school_world_history
features:
- name: '0'
dtype: string
splits:
- name: train
num_bytes: 357910528
num_examples: 85990
download_size: 131809165
dataset_size: 357910528
- config_name: mmlu_human_aging
features:
- name: '0'
dtype: string
splits:
- name: train
num_bytes: 325427761
num_examples: 78341
download_size: 119430234
dataset_size: 325427761
- config_name: mmlu_human_sexuality
features:
- name: '0'
dtype: string
splits:
- name: train
num_bytes: 328912659
num_examples: 79327
download_size: 121032722
dataset_size: 328912659
- config_name: mmlu_international_law
features:
- name: '0'
dtype: string
splits:
- name: train
num_bytes: 327874597
num_examples: 78989
download_size: 120785769
dataset_size: 327874597
- config_name: normal_instructions
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 956305865
num_examples: 240523
download_size: 362796244
dataset_size: 956305865
- config_name: tiny-textbooks
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1722063576
num_examples: 395985
download_size: 635724860
dataset_size: 1722063576
configs:
- config_name: claude_evol
data_files:
- split: train
path: claude_evol/train-*
- config_name: code-alpaca
data_files:
- split: train
path: code-alpaca/train-*
- config_name: helpsteer
data_files:
- split: train
path: helpsteer/train-*
- config_name: ko_wikidata
data_files:
- split: train
path: ko_wikidata/train-*
- config_name: mmlu_abstract_algebra
data_files:
- split: train
path: mmlu_abstract_algebra/train-*
- config_name: mmlu_all
data_files:
- split: train
path: mmlu_all/train-*
- config_name: mmlu_anatomy
data_files:
- split: train
path: mmlu_anatomy/train-*
- config_name: mmlu_astronomy
data_files:
- split: train
path: mmlu_astronomy/train-*
- config_name: mmlu_business_ethics
data_files:
- split: train
path: mmlu_business_ethics/train-*
- config_name: mmlu_clinical_knowledge
data_files:
- split: train
path: mmlu_clinical_knowledge/train-*
- config_name: mmlu_college_biology
data_files:
- split: train
path: mmlu_college_biology/train-*
- config_name: mmlu_college_chemistry
data_files:
- split: train
path: mmlu_college_chemistry/train-*
- config_name: mmlu_college_computer_science
data_files:
- split: train
path: mmlu_college_computer_science/train-*
- config_name: mmlu_college_mathematics
data_files:
- split: train
path: mmlu_college_mathematics/train-*
- config_name: mmlu_college_medicine
data_files:
- split: train
path: mmlu_college_medicine/train-*
- config_name: mmlu_college_physics
data_files:
- split: train
path: mmlu_college_physics/train-*
- config_name: mmlu_computer_security
data_files:
- split: train
path: mmlu_computer_security/train-*
- config_name: mmlu_conceptual_physics
data_files:
- split: train
path: mmlu_conceptual_physics/train-*
- config_name: mmlu_econometrics
data_files:
- split: train
path: mmlu_econometrics/train-*
- config_name: mmlu_electrical_engineering
data_files:
- split: train
path: mmlu_electrical_engineering/train-*
- config_name: mmlu_elementary_mathematics
data_files:
- split: train
path: mmlu_elementary_mathematics/train-*
- config_name: mmlu_formal_logic
data_files:
- split: train
path: mmlu_formal_logic/train-*
- config_name: mmlu_global_facts
data_files:
- split: train
path: mmlu_global_facts/train-*
- config_name: mmlu_high_school_biology
data_files:
- split: train
path: mmlu_high_school_biology/train-*
- config_name: mmlu_high_school_chemistry
data_files:
- split: train
path: mmlu_high_school_chemistry/train-*
- config_name: mmlu_high_school_computer_science
data_files:
- split: train
path: mmlu_high_school_computer_science/train-*
- config_name: mmlu_high_school_european_history
data_files:
- split: train
path: mmlu_high_school_european_history/train-*
- config_name: mmlu_high_school_geography
data_files:
- split: train
path: mmlu_high_school_geography/train-*
- config_name: mmlu_high_school_government_and_politics
data_files:
- split: train
path: mmlu_high_school_government_and_politics/train-*
- config_name: mmlu_high_school_macroeconomics
data_files:
- split: train
path: mmlu_high_school_macroeconomics/train-*
- config_name: mmlu_high_school_mathematics
data_files:
- split: train
path: mmlu_high_school_mathematics/train-*
- config_name: mmlu_high_school_microeconomics
data_files:
- split: train
path: mmlu_high_school_microeconomics/train-*
- config_name: mmlu_high_school_physics
data_files:
- split: train
path: mmlu_high_school_physics/train-*
- config_name: mmlu_high_school_psychology
data_files:
- split: train
path: mmlu_high_school_psychology/train-*
- config_name: mmlu_high_school_statistics
data_files:
- split: train
path: mmlu_high_school_statistics/train-*
- config_name: mmlu_high_school_us_history
data_files:
- split: train
path: mmlu_high_school_us_history/train-*
- config_name: mmlu_high_school_world_history
data_files:
- split: train
path: mmlu_high_school_world_history/train-*
- config_name: mmlu_human_aging
data_files:
- split: train
path: mmlu_human_aging/train-*
- config_name: mmlu_human_sexuality
data_files:
- split: train
path: mmlu_human_sexuality/train-*
- config_name: mmlu_international_law
data_files:
- split: train
path: mmlu_international_law/train-*
- config_name: normal_instructions
data_files:
- split: train
path: normal_instructions/train-*
- config_name: tiny-textbooks
data_files:
- split: train
path: tiny-textbooks/train-*
---
# Massive Korean synthetic dataset
This dataset is a large-scale Korean synthetic dataset created with Gemini Pro.
It follows the methodology described in the *Creation of synthetic textbook-quality datasets* section of [Textbooks Are All You Need](https://arxiv.org/abs/2306.11644).
## Data overview
**A subset's name does not necessarily indicate the contents of that subset.**
**Further processing is required before using this dataset for training.**
**Rather than using this dataset as-is, we recommend processing it to fit your target task before use, e.g., converting it into a QA set using a local model.**
| subset | row count | link | + |
|---|---|---|---|
| tiny-textbooks | 395,985 | [nampdn-ai/tiny-textbooks](https://huggingface.co/datasets/nampdn-ai/tiny-textbooks) | |
| ko_wikidata | 127,614 | [maywell/ko_wikidata_QA](https://huggingface.co/datasets/maywell/ko_wikidata_QA) | |
| normal_instructions | 240,523 | [KonstantyM/science_qa](https://huggingface.co/datasets/KonstantyM/science_qa) | with science texts |
| claude_evol | 239,102 | [Norquinal/claude_evol_instruct_210k](https://huggingface.co/datasets/Norquinal/claude_evol_instruct_210k) | used 250k files from that repo |
| code-alpaca | 64,112 | [theblackcat102/evol-codealpaca-v1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1) | original is a coding dataset, but generated data is not mainly a coding dataset |
| helpsteer | 25,253 | [nvidia/HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer) | |
| mmlu_abstract_algebra | 88,848 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_all | 97,765 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_anatomy | 97,463 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_astronomy | 97,347 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_business_ethics | 97,327 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_clinical_knowledge | 97,226 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_college_biology | 97,285 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_college_chemistry | 97,435 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_college_computer_science | 92,606 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_college_mathematics | 94,070 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_college_medicine | 95,156 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_college_physics | 97,452 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_computer_security | 97,212 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_conceptual_physics | 88,216 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_econometrics | 91,854 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_electrical_engineering | 87,826 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_elementary_mathematics | 89,307 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_formal_logic | 95,483 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_global_facts | 94,984 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_biology | 97,117 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_chemistry | 96,907 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_computer_science | 97,351 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_european_history | 97,222 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_geography | 97,261 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_government_and_politics | 97,311 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_macroeconomics | 97,400 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_mathematics | 97,396 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_microeconomics | 97,435 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_physics | 95,740 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_psychology | 80,626 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_statistics | 76,033 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_us_history | 79,322 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_world_history | 85,990 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_human_aging | 78,341 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_human_sexuality | 79,327 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_international_law | 78,989 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
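For reference, a minimal loading sketch with the `datasets` library — each subset in the table above is a separate configuration with a single `text` column; the processing step is a placeholder you should adapt to your task:

```python
from datasets import load_dataset

# Each subset in the table above is a separate configuration.
ds = load_dataset("maywell/korean_textbooks", "tiny-textbooks", split="train")

print(ds[0]["text"][:200])  # inspect a sample before any task-specific processing
```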
## When you find a problem
If you find any issues with the dataset, please let us know in the discussion or send us a pull request. |
tuanphong/ascent_kb | tuanphong | 2024-01-09T14:44:26Z | 112 | 3 | [
"task_categories:other",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2011.00905",
"region:us",
"knowledge-base"
] | [
"other"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- other
task_ids: []
paperswithcode_id: ascentkb
pretty_name: Ascent KB
tags:
- knowledge-base
dataset_info:
- config_name: canonical
features:
- name: arg1
dtype: string
- name: rel
dtype: string
- name: arg2
dtype: string
- name: support
dtype: int64
- name: facets
list:
- name: value
dtype: string
- name: type
dtype: string
- name: support
dtype: int64
- name: source_sentences
list:
- name: text
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 2976665740
num_examples: 8904060
download_size: 898478552
dataset_size: 2976665740
- config_name: open
features:
- name: subject
dtype: string
- name: predicate
dtype: string
- name: object
dtype: string
- name: support
dtype: int64
- name: facets
list:
- name: value
dtype: string
- name: type
dtype: string
- name: support
dtype: int64
- name: source_sentences
list:
- name: text
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 2882646222
num_examples: 8904060
download_size: 900156754
dataset_size: 2882646222
configs:
- config_name: canonical
data_files:
- split: train
path: canonical/train-*
default: true
- config_name: open
data_files:
- split: train
path: open/train-*
---
# Dataset Card for Ascent KB
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://ascent.mpi-inf.mpg.de/
- **Repository:** https://github.com/phongnt570/ascent
- **Paper:** https://arxiv.org/abs/2011.00905
- **Point of Contact:** http://tuan-phong.com
### Dataset Summary
This dataset contains 8.9M commonsense assertions extracted by the Ascent pipeline developed at the [Max Planck Institute for Informatics](https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/).
The focus of this dataset is on everyday concepts such as *elephant*, *car*, *laptop*, etc.
The current version of Ascent KB (v1.0.0) is approximately **19 times larger than ConceptNet** (note that, in this comparison, non-commonsense knowledge in ConceptNet such as lexical relations is excluded).
For more details, take a look at
[the research paper](https://arxiv.org/abs/2011.00905) and
[the website](https://ascent.mpi-inf.mpg.de).
### Supported Tasks and Leaderboards
The dataset can be used in a wide range of downstream tasks such as commonsense question answering or dialogue systems.
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
There are two configurations available for this dataset:
1. `canonical` (default): This part contains `<arg1 ; rel ; arg2>`
assertions where the relations (`rel`) were mapped to
[ConceptNet relations](https://github.com/commonsense/conceptnet5/wiki/Relations)
with slight modifications:
- Introducing 2 new relations: `/r/HasSubgroup`, `/r/HasAspect`.
- All `/r/HasA` relations were replaced with `/r/HasAspect`.
This is motivated by the [ATOMIC-2020](https://allenai.org/data/atomic-2020)
schema, although they grouped all `/r/HasA` and
`/r/HasProperty` into `/r/HasProperty`.
- The `/r/UsedFor` relation was replaced with `/r/ObjectUse`
which is broader (could be either _"used for"_, _"used in"_, or _"used as"_, etc.).
This is also taken from ATOMIC-2020.
2. `open`: This part contains open assertions of the form
`<subject ; predicate ; object>` extracted directly from web
contents. This is the original form of the `canonical` triples.
In both configurations, each assertion is equipped with
extra information including: a set of semantic `facets`
(e.g., *LOCATION*, *TEMPORAL*, etc.), its `support` (i.e., number of occurrences),
and a list of `source_sentences`.
An example row in the `canonical` configuration:
```JSON
{
"arg1": "elephant",
"rel": "/r/HasProperty",
"arg2": "intelligent",
"support": 15,
"facets": [
{
"value": "extremely",
"type": "DEGREE",
"support": 11
}
],
"source_sentences": [
{
"text": "Elephants are extremely intelligent animals.",
"source": "https://www.softschools.com/facts/animals/asian_elephant_facts/2310/"
},
{
"text": "Elephants are extremely intelligent creatures and an elephant's brain can weigh as much as 4-6 kg.",
"source": "https://www.elephantsforafrica.org/elephant-facts/"
}
]
}
```
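As a usage sketch, the snippet below streams the default `canonical` configuration (avoiding the ~900 MB download) and pulls out the facets of the first assertion; field names follow the example row above:

```python
from datasets import load_dataset

# Stream the canonical assertions instead of downloading the full dump.
kb = load_dataset("tuanphong/ascent_kb", "canonical", split="train", streaming=True)

row = next(iter(kb))
degree_facets = [f["value"] for f in row["facets"] if f["type"] == "DEGREE"]
print(row["arg1"], row["rel"], row["arg2"], degree_facets)
```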
### Data Fields
- **For `canonical` configuration**
- `arg1`: the first argument to the relationship, e.g., *elephant*
- `rel`: the canonical relation, e.g., */r/HasProperty*
  - `arg2`: the second argument to the relationship, e.g., *intelligent*
- `support`: the number of occurrences of the assertion, e.g., *15*
- `facets`: an array of semantic facets, each contains
- `value`: facet value, e.g., *extremely*
- `type`: facet type, e.g., *DEGREE*
- `support`: the number of occurrences of the facet, e.g., *11*
- `source_sentences`: an array of source sentences from which the assertion was
extracted, each contains
- `text`: the raw text of the sentence
- `source`: the URL to its parent document
- **For `open` configuration**
- The fields of this configuration are the same as the `canonical`
configuration's, except that
the (`arg1`, `rel`, `arg2`) fields are replaced with the
(`subject`, `predicate`, `object`) fields
which are free
text phrases extracted directly from the source sentences
using an Open Information Extraction (OpenIE) tool.
### Data Splits
There are no predefined splits. All data points belong to a single default split called `train`.
## Dataset Creation
### Curation Rationale
The commonsense knowledge base was created to assist in the development of robust and reliable AI.
### Source Data
#### Initial Data Collection and Normalization
Texts were collected from the web using the Bing Search API, and went through various cleaning steps before being processed by an OpenIE tool to get open assertions.
The assertions were then grouped into semantically equivalent clusters.
Take a look at the research paper for more details.
#### Who are the source language producers?
Web users.
### Annotations
#### Annotation process
None.
#### Who are the annotators?
None.
### Personal and Sensitive Information
Unknown.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The knowledge base has been developed by researchers at the
[Max Planck Institute for Informatics](https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/).
Contact [Tuan-Phong Nguyen](http://tuan-phong.com) in case of questions and comments.
### Licensing Information
[The Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@InProceedings{nguyen2021www,
title={Advanced Semantics for Commonsense Knowledge Extraction},
author={Nguyen, Tuan-Phong and Razniewski, Simon and Weikum, Gerhard},
year={2021},
booktitle={The Web Conference 2021},
}
```
### Contributions
Thanks to [@phongnt570](https://github.com/phongnt570) for adding this dataset. |
pauli31/czech-subjectivity-dataset | pauli31 | 2024-01-05T20:05:40Z | 48 | 3 | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"multilinguality:monolingual",
"source_datasets:original",
"language:cs",
"license:cc-by-nc-sa-4.0",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2204.13915",
"region:us"
] | [
"text-classification"
] | 2022-05-02T18:27:17Z | 1 | ---
annotations_creators: []
language_creators: []
language:
- cs
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
pretty_name: Czech Subjectivity Dataset
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for Czech Subjectivity Dataset
### Dataset Summary
The Czech Subjectivity Dataset (Subj-CS) contains 10k manually annotated subjective and objective sentences from movie reviews and descriptions. See the paper for details: https://arxiv.org/abs/2204.13915
### Github
https://github.com/pauli31/czech-subjectivity-dataset
### Supported Tasks and Leaderboards
Subjectivity Analysis
### Languages
Czech
### Data Instances
train/dev/test
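A minimal loading sketch (the exact split names are whatever the loader exposes; printing the resulting `DatasetDict` shows them):

```python
from datasets import load_dataset

subj = load_dataset("pauli31/czech-subjectivity-dataset")
print(subj)  # shows the available splits and their columns
```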
### Licensing Information
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.](https://creativecommons.org/licenses/by-nc-sa/4.0/)
### Citation Information
If you use our dataset or software for academic research, please cite our [paper](https://arxiv.org/abs/2204.13915)
```
@article{pib2022czech,
title={Czech Dataset for Cross-lingual Subjectivity Classification},
author={Pavel Přibáň and Josef Steinberger},
year={2022},
eprint={2204.13915},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contact
[email protected]
### Contributions
Thanks to [@pauli31](https://github.com/pauli31) for adding this dataset. |
google/xquad | google | 2024-01-04T17:08:50Z | 1,431 | 31 | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"source_datasets:extended|squad",
"language:ar",
"language:de",
"language:el",
"language:en",
"language:es",
"language:hi",
"language:ro",
"language:ru",
"language:th",
"language:tr",
"language:vi",
"language:zh",
"license:cc-by-sa-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1910.11856",
"region:us"
] | [
"question-answering"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- ar
- de
- el
- en
- es
- hi
- ro
- ru
- th
- tr
- vi
- zh
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets:
- extended|squad
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: xquad
pretty_name: XQuAD
dataset_info:
- config_name: xquad.ar
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 1722775
num_examples: 1190
download_size: 263002
dataset_size: 1722775
- config_name: xquad.de
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 1283277
num_examples: 1190
download_size: 241957
dataset_size: 1283277
- config_name: xquad.el
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 2206666
num_examples: 1190
download_size: 324379
dataset_size: 2206666
- config_name: xquad.en
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 1116099
num_examples: 1190
download_size: 212372
dataset_size: 1116099
- config_name: xquad.es
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 1273475
num_examples: 1190
download_size: 236874
dataset_size: 1273475
- config_name: xquad.hi
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 2682951
num_examples: 1190
download_size: 322083
dataset_size: 2682951
- config_name: xquad.ro
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 1299426
num_examples: 1190
download_size: 244467
dataset_size: 1299426
- config_name: xquad.ru
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 2136966
num_examples: 1190
download_size: 321728
dataset_size: 2136966
- config_name: xquad.th
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 2854935
num_examples: 1190
download_size: 337307
dataset_size: 2854935
- config_name: xquad.tr
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 1210739
num_examples: 1190
download_size: 228364
dataset_size: 1210739
- config_name: xquad.vi
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 1477215
num_examples: 1190
download_size: 237644
dataset_size: 1477215
- config_name: xquad.zh
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 984217
num_examples: 1190
download_size: 205768
dataset_size: 984217
configs:
- config_name: xquad.ar
data_files:
- split: validation
path: xquad.ar/validation-*
- config_name: xquad.de
data_files:
- split: validation
path: xquad.de/validation-*
- config_name: xquad.el
data_files:
- split: validation
path: xquad.el/validation-*
- config_name: xquad.en
data_files:
- split: validation
path: xquad.en/validation-*
- config_name: xquad.es
data_files:
- split: validation
path: xquad.es/validation-*
- config_name: xquad.hi
data_files:
- split: validation
path: xquad.hi/validation-*
- config_name: xquad.ro
data_files:
- split: validation
path: xquad.ro/validation-*
- config_name: xquad.ru
data_files:
- split: validation
path: xquad.ru/validation-*
- config_name: xquad.th
data_files:
- split: validation
path: xquad.th/validation-*
- config_name: xquad.tr
data_files:
- split: validation
path: xquad.tr/validation-*
- config_name: xquad.vi
data_files:
- split: validation
path: xquad.vi/validation-*
- config_name: xquad.zh
data_files:
- split: validation
path: xquad.zh/validation-*
---
# Dataset Card for "xquad"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/deepmind/xquad](https://github.com/deepmind/xquad)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 146.31 MB
- **Size of the generated dataset:** 18.97 MB
- **Total amount of disk used:** 165.28 MB
### Dataset Summary
XQuAD (Cross-lingual Question Answering Dataset) is a benchmark dataset for evaluating cross-lingual question answering
performance. The dataset consists of a subset of 240 paragraphs and 1190 question-answer pairs from the development set
of SQuAD v1.1 (Rajpurkar et al., 2016) together with their professional translations into ten languages: Spanish, German,
Greek, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, and Hindi. Consequently, the dataset is entirely parallel
across 11 languages.
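As a usage sketch, the snippet below loads one language configuration and recovers an answer span from its character offset (config names are listed in the metadata above; the offset arithmetic follows the SQuAD v1.1 format):

```python
from datasets import load_dataset

xquad_de = load_dataset("google/xquad", "xquad.de", split="validation")

ex = xquad_de[0]
start = ex["answers"]["answer_start"][0]
answer = ex["answers"]["text"][0]
# `answer_start` is a character offset into `context`, as in SQuAD v1.1.
assert ex["context"][start:start + len(answer)] == answer
```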
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### xquad.ar
- **Size of downloaded dataset files:** 13.30 MB
- **Size of the generated dataset:** 1.72 MB
- **Total amount of disk used:** 15.03 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [527],
"text": ["136"]
},
"context": "\"Die Verteidigung der Panthers gab nur 308 Punkte ab und belegte den sechsten Platz in der Liga, während sie die NFL mit 24 Inte...",
"id": "56beb4343aeaaa14008c925c",
"question": "Wie viele Sacks erzielte Jared Allen in seiner Karriere?"
}
```
#### xquad.de
- **Size of downloaded dataset files:** 13.30 MB
- **Size of the generated dataset:** 1.29 MB
- **Total amount of disk used:** 14.59 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [527],
"text": ["136"]
},
"context": "\"Die Verteidigung der Panthers gab nur 308 Punkte ab und belegte den sechsten Platz in der Liga, während sie die NFL mit 24 Inte...",
"id": "56beb4343aeaaa14008c925c",
"question": "Wie viele Sacks erzielte Jared Allen in seiner Karriere?"
}
```
#### xquad.el
- **Size of downloaded dataset files:** 13.30 MB
- **Size of the generated dataset:** 2.21 MB
- **Total amount of disk used:** 15.51 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [527],
"text": ["136"]
},
"context": "\"Die Verteidigung der Panthers gab nur 308 Punkte ab und belegte den sechsten Platz in der Liga, während sie die NFL mit 24 Inte...",
"id": "56beb4343aeaaa14008c925c",
"question": "Wie viele Sacks erzielte Jared Allen in seiner Karriere?"
}
```
#### xquad.en
- **Size of downloaded dataset files:** 13.30 MB
- **Size of the generated dataset:** 1.12 MB
- **Total amount of disk used:** 14.42 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [527],
"text": ["136"]
},
"context": "\"Die Verteidigung der Panthers gab nur 308 Punkte ab und belegte den sechsten Platz in der Liga, während sie die NFL mit 24 Inte...",
"id": "56beb4343aeaaa14008c925c",
"question": "Wie viele Sacks erzielte Jared Allen in seiner Karriere?"
}
```
#### xquad.es
- **Size of downloaded dataset files:** 13.30 MB
- **Size of the generated dataset:** 1.28 MB
- **Total amount of disk used:** 14.58 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [527],
"text": ["136"]
},
"context": "\"Die Verteidigung der Panthers gab nur 308 Punkte ab und belegte den sechsten Platz in der Liga, während sie die NFL mit 24 Inte...",
"id": "56beb4343aeaaa14008c925c",
"question": "Wie viele Sacks erzielte Jared Allen in seiner Karriere?"
}
```
### Data Fields
The data fields are the same among all splits.
#### xquad.ar
- `id`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
#### xquad.de
- `id`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
#### xquad.el
- `id`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
#### xquad.en
- `id`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
#### xquad.es
- `id`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
### Data Splits
| name | validation |
| -------- | ---------: |
| xquad.ar | 1190 |
| xquad.de | 1190 |
| xquad.el | 1190 |
| xquad.en | 1190 |
| xquad.es | 1190 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{Artetxe:etal:2019,
author = {Mikel Artetxe and Sebastian Ruder and Dani Yogatama},
title = {On the cross-lingual transferability of monolingual representations},
journal = {CoRR},
volume = {abs/1910.11856},
year = {2019},
archivePrefix = {arXiv},
eprint = {1910.11856}
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
uclanlp/wino_bias | uclanlp | 2024-01-04T16:50:33Z | 1,832 | 18 | [
"task_categories:token-classification",
"task_ids:coreference-resolution",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1804.06876",
"region:us"
] | [
"token-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- coreference-resolution
paperswithcode_id: winobias
pretty_name: WinoBias
dataset_info:
- config_name: type1_anti
features:
- name: document_id
dtype: string
- name: part_number
dtype: string
- name: word_number
sequence: int32
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': '"'
'1': ''''''
'2': '#'
'3': $
'4': (
'5': )
'6': ','
'7': .
'8': ':'
'9': '``'
'10': CC
'11': CD
'12': DT
'13': EX
'14': FW
'15': IN
'16': JJ
'17': JJR
'18': JJS
'19': LS
'20': MD
'21': NN
'22': NNP
'23': NNPS
'24': NNS
'25': NN|SYM
'26': PDT
'27': POS
'28': PRP
'29': PRP$
'30': RB
'31': RBR
'32': RBS
'33': RP
'34': SYM
'35': TO
'36': UH
'37': VB
'38': VBD
'39': VBG
'40': VBN
'41': VBP
'42': VBZ
'43': WDT
'44': WP
'45': WP$
'46': WRB
'47': HYPH
'48': XX
'49': NFP
'50': AFX
'51': ADD
'52': -LRB-
'53': -RRB-
'54': '-'
- name: parse_bit
sequence: string
- name: predicate_lemma
sequence: string
- name: predicate_framenet_id
sequence: string
- name: word_sense
sequence: string
- name: speaker
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-PERSON
'1': I-PERSON
'2': B-NORP
'3': I-NORP
'4': B-FAC
'5': I-FAC
'6': B-ORG
'7': I-ORG
'8': B-GPE
'9': I-GPE
'10': B-LOC
'11': I-LOC
'12': B-PRODUCT
'13': I-PRODUCT
'14': B-EVENT
'15': I-EVENT
'16': B-WORK_OF_ART
'17': I-WORK_OF_ART
'18': B-LAW
'19': I-LAW
'20': B-LANGUAGE
'21': I-LANGUAGE
'22': B-DATE
'23': I-DATE
'24': B-TIME
'25': I-TIME
'26': B-PERCENT
'27': I-PERCENT
'28': B-MONEY
'29': I-MONEY
'30': B-QUANTITY
'31': I-QUANTITY
'32': B-ORDINAL
'33': I-ORDINAL
'34': B-CARDINAL
'35': I-CARDINAL
'36': '*'
'37': '0'
'38': '-'
- name: verbal_predicates
sequence: string
- name: coreference_clusters
sequence: string
splits:
- name: validation
num_bytes: 380510
num_examples: 396
- name: test
num_bytes: 402893
num_examples: 396
download_size: 65383
dataset_size: 783403
- config_name: type1_pro
features:
- name: document_id
dtype: string
- name: part_number
dtype: string
- name: word_number
sequence: int32
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': '"'
'1': ''''''
'2': '#'
'3': $
'4': (
'5': )
'6': ','
'7': .
'8': ':'
'9': '``'
'10': CC
'11': CD
'12': DT
'13': EX
'14': FW
'15': IN
'16': JJ
'17': JJR
'18': JJS
'19': LS
'20': MD
'21': NN
'22': NNP
'23': NNPS
'24': NNS
'25': NN|SYM
'26': PDT
'27': POS
'28': PRP
'29': PRP$
'30': RB
'31': RBR
'32': RBS
'33': RP
'34': SYM
'35': TO
'36': UH
'37': VB
'38': VBD
'39': VBG
'40': VBN
'41': VBP
'42': VBZ
'43': WDT
'44': WP
'45': WP$
'46': WRB
'47': HYPH
'48': XX
'49': NFP
'50': AFX
'51': ADD
'52': -LRB-
'53': -RRB-
'54': '-'
- name: parse_bit
sequence: string
- name: predicate_lemma
sequence: string
- name: predicate_framenet_id
sequence: string
- name: word_sense
sequence: string
- name: speaker
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-PERSON
'1': I-PERSON
'2': B-NORP
'3': I-NORP
'4': B-FAC
'5': I-FAC
'6': B-ORG
'7': I-ORG
'8': B-GPE
'9': I-GPE
'10': B-LOC
'11': I-LOC
'12': B-PRODUCT
'13': I-PRODUCT
'14': B-EVENT
'15': I-EVENT
'16': B-WORK_OF_ART
'17': I-WORK_OF_ART
'18': B-LAW
'19': I-LAW
'20': B-LANGUAGE
'21': I-LANGUAGE
'22': B-DATE
'23': I-DATE
'24': B-TIME
'25': I-TIME
'26': B-PERCENT
'27': I-PERCENT
'28': B-MONEY
'29': I-MONEY
'30': B-QUANTITY
'31': I-QUANTITY
'32': B-ORDINAL
'33': I-ORDINAL
'34': B-CARDINAL
'35': I-CARDINAL
'36': '*'
'37': '0'
'38': '-'
- name: verbal_predicates
sequence: string
- name: coreference_clusters
sequence: string
splits:
- name: validation
num_bytes: 379044
num_examples: 396
- name: test
num_bytes: 401705
num_examples: 396
download_size: 65516
dataset_size: 780749
- config_name: type2_anti
features:
- name: document_id
dtype: string
- name: part_number
dtype: string
- name: word_number
sequence: int32
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': '"'
'1': ''''''
'2': '#'
'3': $
'4': (
'5': )
'6': ','
'7': .
'8': ':'
'9': '``'
'10': CC
'11': CD
'12': DT
'13': EX
'14': FW
'15': IN
'16': JJ
'17': JJR
'18': JJS
'19': LS
'20': MD
'21': NN
'22': NNP
'23': NNPS
'24': NNS
'25': NN|SYM
'26': PDT
'27': POS
'28': PRP
'29': PRP$
'30': RB
'31': RBR
'32': RBS
'33': RP
'34': SYM
'35': TO
'36': UH
'37': VB
'38': VBD
'39': VBG
'40': VBN
'41': VBP
'42': VBZ
'43': WDT
'44': WP
'45': WP$
'46': WRB
'47': HYPH
'48': XX
'49': NFP
'50': AFX
'51': ADD
'52': -LRB-
'53': -RRB-
'54': '-'
- name: parse_bit
sequence: string
- name: predicate_lemma
sequence: string
- name: predicate_framenet_id
sequence: string
- name: word_sense
sequence: string
- name: speaker
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-PERSON
'1': I-PERSON
'2': B-NORP
'3': I-NORP
'4': B-FAC
'5': I-FAC
'6': B-ORG
'7': I-ORG
'8': B-GPE
'9': I-GPE
'10': B-LOC
'11': I-LOC
'12': B-PRODUCT
'13': I-PRODUCT
'14': B-EVENT
'15': I-EVENT
'16': B-WORK_OF_ART
'17': I-WORK_OF_ART
'18': B-LAW
'19': I-LAW
'20': B-LANGUAGE
'21': I-LANGUAGE
'22': B-DATE
'23': I-DATE
'24': B-TIME
'25': I-TIME
'26': B-PERCENT
'27': I-PERCENT
'28': B-MONEY
'29': I-MONEY
'30': B-QUANTITY
'31': I-QUANTITY
'32': B-ORDINAL
'33': I-ORDINAL
'34': B-CARDINAL
'35': I-CARDINAL
'36': '*'
'37': '0'
'38': '-'
- name: verbal_predicates
sequence: string
- name: coreference_clusters
sequence: string
splits:
- name: validation
num_bytes: 368421
num_examples: 396
- name: test
num_bytes: 376926
num_examples: 396
download_size: 62555
dataset_size: 745347
- config_name: type2_pro
features:
- name: document_id
dtype: string
- name: part_number
dtype: string
- name: word_number
sequence: int32
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': '"'
'1': ''''''
'2': '#'
'3': $
'4': (
'5': )
'6': ','
'7': .
'8': ':'
'9': '``'
'10': CC
'11': CD
'12': DT
'13': EX
'14': FW
'15': IN
'16': JJ
'17': JJR
'18': JJS
'19': LS
'20': MD
'21': NN
'22': NNP
'23': NNPS
'24': NNS
'25': NN|SYM
'26': PDT
'27': POS
'28': PRP
'29': PRP$
'30': RB
'31': RBR
'32': RBS
'33': RP
'34': SYM
'35': TO
'36': UH
'37': VB
'38': VBD
'39': VBG
'40': VBN
'41': VBP
'42': VBZ
'43': WDT
'44': WP
'45': WP$
'46': WRB
'47': HYPH
'48': XX
'49': NFP
'50': AFX
'51': ADD
'52': -LRB-
'53': -RRB-
'54': '-'
- name: parse_bit
sequence: string
- name: predicate_lemma
sequence: string
- name: predicate_framenet_id
sequence: string
- name: word_sense
sequence: string
- name: speaker
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-PERSON
'1': I-PERSON
'2': B-NORP
'3': I-NORP
'4': B-FAC
'5': I-FAC
'6': B-ORG
'7': I-ORG
'8': B-GPE
'9': I-GPE
'10': B-LOC
'11': I-LOC
'12': B-PRODUCT
'13': I-PRODUCT
'14': B-EVENT
'15': I-EVENT
'16': B-WORK_OF_ART
'17': I-WORK_OF_ART
'18': B-LAW
'19': I-LAW
'20': B-LANGUAGE
'21': I-LANGUAGE
'22': B-DATE
'23': I-DATE
'24': B-TIME
'25': I-TIME
'26': B-PERCENT
'27': I-PERCENT
'28': B-MONEY
'29': I-MONEY
'30': B-QUANTITY
'31': I-QUANTITY
'32': B-ORDINAL
'33': I-ORDINAL
'34': B-CARDINAL
'35': I-CARDINAL
'36': '*'
'37': '0'
'38': '-'
- name: verbal_predicates
sequence: string
- name: coreference_clusters
sequence: string
splits:
- name: validation
num_bytes: 366957
num_examples: 396
- name: test
num_bytes: 375144
num_examples: 396
download_size: 62483
dataset_size: 742101
- config_name: wino_bias
features:
- name: document_id
dtype: string
- name: part_number
dtype: string
- name: word_number
sequence: int32
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': '"'
'1': ''''''
'2': '#'
'3': $
'4': (
'5': )
'6': ','
'7': .
'8': ':'
'9': '``'
'10': CC
'11': CD
'12': DT
'13': EX
'14': FW
'15': IN
'16': JJ
'17': JJR
'18': JJS
'19': LS
'20': MD
'21': NN
'22': NNP
'23': NNPS
'24': NNS
'25': NN|SYM
'26': PDT
'27': POS
'28': PRP
'29': PRP$
'30': RB
'31': RBR
'32': RBS
'33': RP
'34': SYM
'35': TO
'36': UH
'37': VB
'38': VBD
'39': VBG
'40': VBN
'41': VBP
'42': VBZ
'43': WDT
'44': WP
'45': WP$
'46': WRB
'47': HYPH
'48': XX
'49': NFP
'50': AFX
'51': ADD
'52': -LRB-
'53': -RRB-
- name: parse_bit
sequence: string
- name: predicate_lemma
sequence: string
- name: predicate_framenet_id
sequence: string
- name: word_sense
sequence: string
- name: speaker
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-PERSON
'1': I-PERSON
'2': B-NORP
'3': I-NORP
'4': B-FAC
'5': I-FAC
'6': B-ORG
'7': I-ORG
'8': B-GPE
'9': I-GPE
'10': B-LOC
'11': I-LOC
'12': B-PRODUCT
'13': I-PRODUCT
'14': B-EVENT
'15': I-EVENT
'16': B-WORK_OF_ART
'17': I-WORK_OF_ART
'18': B-LAW
'19': I-LAW
'20': B-LANGUAGE
'21': I-LANGUAGE
'22': B-DATE
'23': I-DATE
'24': B-TIME
'25': I-TIME
'26': B-PERCENT
'27': I-PERCENT
'28': B-MONEY
'29': I-MONEY
'30': B-QUANTITY
'31': I-QUANTITY
'32': B-ORDINAL
'33': I-ORDINAL
'34': B-CARDINAL
'35': I-CARDINAL
'36': '*'
'37': '0'
- name: verbal_predicates
sequence: string
splits:
- name: train
num_bytes: 173899234
num_examples: 150335
download_size: 268725744
dataset_size: 173899234
configs:
- config_name: type1_anti
data_files:
- split: validation
path: type1_anti/validation-*
- split: test
path: type1_anti/test-*
- config_name: type1_pro
data_files:
- split: validation
path: type1_pro/validation-*
- split: test
path: type1_pro/test-*
- config_name: type2_anti
data_files:
- split: validation
path: type2_anti/validation-*
- split: test
path: type2_anti/test-*
- config_name: type2_pro
data_files:
- split: validation
path: type2_pro/validation-*
- split: test
path: type2_pro/test-*
---
# Dataset Card for Wino_Bias dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [WinoBias](https://uclanlp.github.io/corefBias/overview)
- **Repository:**
- **Paper:** [Arxiv](https://arxiv.org/abs/1804.06876)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
WinoBias, a Winograd-schema dataset for coreference resolution focused on gender bias.
The corpus contains Winograd-schema style sentences with entities corresponding to people referred by their occupation (e.g. the nurse, the doctor, the carpenter).
### Supported Tasks and Leaderboards
The underlying task is coreference resolution.
### Languages
English
## Dataset Structure
### Data Instances
The dataset has 4 subsets: `type1_pro`, `type1_anti`, `type2_pro` and `type2_anti`.
The `*_pro` subsets contain sentences that reinforce gender stereotypes (e.g. mechanics are male, nurses are female), whereas the `*_anti` datasets contain "anti-stereotypical" sentences (e.g. mechanics are female, nurses are male).
The `type1` (*WB-Knowledge*) subsets contain sentences for which world knowledge is necessary to resolve the co-references, and `type2` (*WB-Syntax*) subsets require only the syntactic information present in the sentence to resolve them.
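A minimal sketch contrasting the pro- and anti-stereotypical variants (config and field names follow the metadata above; treating row *i* of each subset as a matched pair is an assumption worth verifying against the source files):

```python
from datasets import load_dataset

pro = load_dataset("uclanlp/wino_bias", "type1_pro", split="validation")
anti = load_dataset("uclanlp/wino_bias", "type1_anti", split="validation")

# Print one sentence from each subset; whether index i pairs a pro sentence
# with its anti counterpart is an assumption, not documented behavior.
print(" ".join(pro[0]["tokens"]))
print(" ".join(anti[0]["tokens"]))
```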
### Data Fields
- document_id = This is a variation on the document filename
- part_number = Some files are divided into multiple parts numbered as 000, 001, 002, ... etc.
- word_number = This is the word index of the word in that sentence.
- tokens = This is the token as segmented/tokenized in the Treebank.
- pos_tags = This is the Penn Treebank style part of speech. When parse information is missing, all parts of speech except the one for which there is some sense or proposition annotation are marked with an XX tag. The verb is marked with just a VERB tag.
- parse_bit = This is the bracketed structure broken before the first open parenthesis in the parse, with the word/part-of-speech leaf replaced with a *. The full parse can be created by substituting the asterisk with the "([pos] [word])" string (or leaf) and concatenating the items in the rows of that column (see the sketch after this list). When the parse information is missing, the first word of a sentence is tagged as "(TOP*", the last word is tagged as "*)", and all intermediate words are tagged with a "*".
- predicate_lemma = The predicate lemma is mentioned for the rows for which we have semantic role information or word sense information. All other rows are marked with a "-".
- predicate_framenet_id = This is the PropBank frameset ID of the predicate in predicate_lemma.
- word_sense = This is the word sense of the word in Column tokens.
- speaker = This is the speaker or author name where available.
- ner_tags = These columns identify the spans representing various named entities. For documents which do not have named entity annotation, each line is represented with an "*".
- verbal_predicates = There is one column of predicate-argument structure information for each predicate mentioned in predicate_lemma. If no predicates are tagged in a sentence, this is a single column with all rows marked with an "*".
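A hypothetical helper for the `parse_bit` reconstruction described above (it assumes the integer `pos_tags` have already been mapped back to their string names via the dataset's features):

```python
def reconstruct_parse(parse_bits, pos_strings, tokens):
    """Rebuild the bracketed parse by substituting each leaf back into its bit."""
    # Each parse_bit carries a "*" placeholder where its "(POS word)" leaf belongs;
    # replacing the placeholder and concatenating the bits yields the full parse.
    return "".join(
        bit.replace("*", f"({pos} {word})", 1)
        for bit, pos, word in zip(parse_bits, pos_strings, tokens)
    )
```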
### Data Splits
Dev and test splits are available.
## Dataset Creation
### Curation Rationale
The WinoBias dataset was introduced in 2018 (see [paper](https://arxiv.org/abs/1804.06876)), with its original task being *coreference resolution*, which is a task that aims to identify mentions that refer to the same entity or person.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
The dataset was created by researchers familiar with the WinoBias project, based on two prototypical templates provided by the authors, in which entities interact in plausible ways.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
"Researchers familiar with the [WinoBias] project"
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[Recent work](https://www.microsoft.com/en-us/research/uploads/prod/2021/06/The_Salmon_paper.pdf) has shown that this dataset contains grammatical issues, incorrect or ambiguous labels, and stereotype conflation, among other limitations.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez and Kai-Wei Chang
### Licensing Information
MIT Licence
### Citation Information
```
@article{DBLP:journals/corr/abs-1804-06876,
  author    = {Jieyu Zhao and
               Tianlu Wang and
               Mark Yatskar and
               Vicente Ordonez and
               Kai{-}Wei Chang},
  title     = {Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods},
  journal   = {CoRR},
  volume    = {abs/1804.06876},
  year      = {2018},
  url       = {http://arxiv.org/abs/1804.06876},
  archivePrefix = {arXiv},
  eprint    = {1804.06876},
  timestamp = {Mon, 13 Aug 2018 16:47:01 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1804-06876.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@akshayb7](https://github.com/akshayb7) for adding this dataset. Updated by [@JieyuZhao](https://github.com/JieyuZhao). |
Stanford/web_questions | Stanford | 2024-01-04T16:41:06Z | 4,003 | 32 | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"question-answering"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: webquestions
pretty_name: WebQuestions
dataset_info:
features:
- name: url
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
splits:
- name: train
num_bytes: 530711
num_examples: 3778
- name: test
num_bytes: 288184
num_examples: 2032
download_size: 402395
dataset_size: 818895
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "web_questions"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://worksheets.codalab.org/worksheets/0xba659fe363cb46e7a505c5b6a774dc8a](https://worksheets.codalab.org/worksheets/0xba659fe363cb46e7a505c5b6a774dc8a)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Semantic Parsing on Freebase from Question-Answer Pairs](https://aclanthology.org/D13-1160/)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.27 MB
- **Size of the generated dataset:** 0.83 MB
- **Total amount of disk used:** 2.10 MB
### Dataset Summary
This dataset consists of 6,642 question/answer pairs.
The questions are designed to be answerable by Freebase, a large knowledge graph,
and are mostly centered around a single named entity.
They are popular questions asked on the web (at least as of 2013).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 1.27 MB
- **Size of the generated dataset:** 0.83 MB
- **Total amount of disk used:** 2.10 MB
An example of 'train' looks as follows.
```
{
"answers": ["Jamaican Creole English Language", "Jamaican English"],
"question": "what does jamaican people speak?",
"url": "http://www.freebase.com/view/en/jamaica"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `url`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a `list` of `string` features.
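A minimal usage sketch (the repository id is taken from this card; printed values follow the instance shown above):

```python
from datasets import load_dataset

ds = load_dataset("Stanford/web_questions")
example = ds["train"][0]
print(example["question"])  # a natural-language web question
print(example["answers"])   # list of accepted answer strings
```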
### Data Splits
| name |train|test|
|-------|----:|---:|
|default| 3778|2032|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{berant-etal-2013-semantic,
title = "Semantic Parsing on {F}reebase from Question-Answer Pairs",
author = "Berant, Jonathan and
Chou, Andrew and
Frostig, Roy and
Liang, Percy",
booktitle = "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
month = oct,
year = "2013",
address = "Seattle, Washington, USA",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D13-1160",
pages = "1533--1544",
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun) for adding this dataset. |
textmachinelab/quail | textmachinelab | 2024-01-04T16:18:32Z | 17,696 | 7 | [
"task_categories:multiple-choice",
"task_ids:multiple-choice-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"multiple-choice"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- multiple-choice
task_ids:
- multiple-choice-qa
paperswithcode_id: quail
pretty_name: Question Answering for Artificial Intelligence (QuAIL)
dataset_info:
config_name: quail
features:
- name: id
dtype: string
- name: context_id
dtype: string
- name: question_id
dtype: string
- name: domain
dtype: string
- name: metadata
struct:
- name: author
dtype: string
- name: title
dtype: string
- name: url
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: question_type
dtype: string
- name: answers
sequence: string
- name: correct_answer_id
dtype: int32
splits:
- name: train
num_bytes: 23432601
num_examples: 10246
- name: validation
num_bytes: 4989531
num_examples: 2164
- name: challenge
num_bytes: 1199792
num_examples: 556
download_size: 2286403
dataset_size: 29621924
configs:
- config_name: quail
data_files:
- split: train
path: quail/train-*
- split: validation
path: quail/validation-*
- split: challenge
path: quail/challenge-*
default: true
---
# Dataset Card for "quail"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://text-machine-lab.github.io/blog/2020/quail/](https://text-machine-lab.github.io/blog/2020/quail/)
- **Repository:** https://github.com/text-machine-lab/quail
- **Paper:** [Getting Closer to AI Complete Question Answering: A Set of Prerequisite Real Tasks](https://doi.org/10.1609/aaai.v34i05.6398 )
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 6.41 MB
- **Size of the generated dataset:** 29.62 MB
- **Total amount of disk used:** 36.03 MB
### Dataset Summary
QuAIL is a reading comprehension dataset containing 15K multiple-choice questions over texts 300-350 tokens long from 4 domains (news, user stories, fiction, blogs). QuAIL is balanced and annotated for question types.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### quail
- **Size of downloaded dataset files:** 6.41 MB
- **Size of the generated dataset:** 29.62 MB
- **Total amount of disk used:** 36.03 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answers": ["the cousin is not friendly", "the cousin could have been pretier", "not enough information", "the cousin was too nice"],
"context": "\"That fall came and I went back to Michigan and the school year went by and summer came and I never really thought about it. I'm...",
"context_id": "f001",
"correct_answer_id": 0,
"domain": "fiction",
"id": "f001_19",
"metadata": {
"author": "Joseph Devon",
"title": "Black Eyed Susan",
"url": "http://manybooks.net/pages/devonjother08black_eyed_susan/0.html"
},
"question": "After the events in the text what does the author think about the cousin?",
"question_id": "19",
"question_type": "Subsequent_state"
}
```
### Data Fields
The data fields are the same among all splits.
#### quail
- `id`: a `string` feature.
- `context_id`: a `string` feature.
- `question_id`: a `string` feature.
- `domain`: a `string` feature.
- `author`: a `string` feature.
- `title`: a `string` feature.
- `url`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `question_type`: a `string` feature.
- `answers`: a `list` of `string` features.
- `correct_answer_id`: a `int32` feature.
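A minimal sketch of resolving the gold answer from `correct_answer_id` (repository and config ids are taken from this card):

```python
from datasets import load_dataset

quail = load_dataset("textmachinelab/quail", "quail")
ex = quail["train"][0]
gold = ex["answers"][ex["correct_answer_id"]]  # index into the options
print(ex["question"], "->", gold)
```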
### Data Splits
|name |train|challenge|validation|
|-----|----:|--------:|---------:|
|quail|10246| 556| 2164|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{DBLP:conf/aaai/RogersKDR20,
author = {Anna Rogers and
Olga Kovaleva and
Matthew Downey and
Anna Rumshisky},
title = {Getting Closer to {AI} Complete Question Answering: {A} Set of Prerequisite
Real Tasks},
booktitle = {The Thirty-Fourth {AAAI} Conference on Artificial Intelligence, {AAAI}
2020, The Thirty-Second Innovative Applications of Artificial Intelligence
Conference, {IAAI} 2020, The Tenth {AAAI} Symposium on Educational
Advances in Artificial Intelligence, {EAAI} 2020, New York, NY, USA,
February 7-12, 2020},
pages = {8722--8731},
publisher = {{AAAI} Press},
year = {2020},
url = {https://aaai.org/ojs/index.php/AAAI/article/view/6398},
timestamp = {Thu, 04 Jun 2020 13:18:48 +0200},
biburl = {https://dblp.org/rec/conf/aaai/RogersKDR20.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@sai-prasanna](https://github.com/sai-prasanna), [@ngdodd](https://github.com/ngdodd) for adding this dataset. |
openai/openai_humaneval | openai | 2024-01-04T16:08:05Z | 81,423 | 314 | [
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2107.03374",
"region:us",
"code-generation"
] | [
"text2text-generation"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: humaneval
pretty_name: OpenAI HumanEval
tags:
- code-generation
dataset_info:
config_name: openai_humaneval
features:
- name: task_id
dtype: string
- name: prompt
dtype: string
- name: canonical_solution
dtype: string
- name: test
dtype: string
- name: entry_point
dtype: string
splits:
- name: test
num_bytes: 194394
num_examples: 164
download_size: 83920
dataset_size: 194394
configs:
- config_name: openai_humaneval
data_files:
- split: test
path: openai_humaneval/test-*
default: true
---
# Dataset Card for OpenAI HumanEval
## Table of Contents
- [OpenAI HumanEval](#openai-humaneval)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [GitHub Repository](https://github.com/openai/human-eval)
- **Paper:** [Evaluating Large Language Models Trained on Code](https://arxiv.org/abs/2107.03374)
### Dataset Summary
The HumanEval dataset released by OpenAI includes 164 programming problems with a function signature, docstring, body, and several unit tests. They were handwritten to ensure they were not included in the training sets of code generation models.
### Supported Tasks and Leaderboards
### Languages
The programming problems are written in Python and contain English natural text in comments and docstrings.
## Dataset Structure
```python
from datasets import load_dataset
load_dataset("openai_humaneval")
DatasetDict({
test: Dataset({
features: ['task_id', 'prompt', 'canonical_solution', 'test', 'entry_point'],
num_rows: 164
})
})
```
### Data Instances
An example of a dataset instance:
```
{
"task_id": "test/0",
"prompt": "def return1():\n",
"canonical_solution": " return 1",
"test": "def check(candidate):\n assert candidate() == 1",
"entry_point": "return1"
}
```
### Data Fields
- `task_id`: identifier for the data sample
- `prompt`: input for the model containing function header and docstrings
- `canonical_solution`: solution for the problem in the `prompt`
- `test`: contains function to test generated code for correctness
- `entry_point`: entry point for test
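A minimal sketch of how these fields compose into a single executable check for a model completion (`completion` stands in for model output; see the safety note below before running anything):

```python
def build_check_program(sample: dict, completion: str) -> str:
    """Concatenate prompt, completion, and tests into one runnable program."""
    return (
        sample["prompt"]    # function signature and docstring
        + completion        # model-generated function body
        + "\n"
        + sample["test"]    # defines check(candidate)
        + f"\ncheck({sample['entry_point']})\n"
    )
```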
### Data Splits
The dataset only consists of a test split with 164 samples.
## Dataset Creation
### Curation Rationale
Since code generation models are often trained on dumps of GitHub, a dataset not included in such dumps was necessary to properly evaluate the models. However, since this dataset was published on GitHub, it is likely to be included in future dumps.
### Source Data
The dataset was handcrafted by engineers and researchers at OpenAI.
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
None.
## Considerations for Using the Data
Make sure you execute generated Python code in a safe environment when evaluating against this dataset, as generated code could be harmful.
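As one precaution, a minimal sketch of running such a program in a separate process with a timeout (a real setup should add much stronger isolation, e.g. containers or seccomp):

```python
import subprocess
import sys

def run_check(program: str, timeout: float = 10.0) -> bool:
    """Return True if the program exits cleanly (all asserts passed)."""
    try:
        result = subprocess.run(
            [sys.executable, "-c", program],
            capture_output=True,
            timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0
```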
### Social Impact of Dataset
With this dataset, code-generating models can be evaluated more reliably, which leads to fewer issues being introduced when such models are used.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
OpenAI
### Licensing Information
MIT License
### Citation Information
```
@misc{chen2021evaluating,
title={Evaluating Large Language Models Trained on Code},
author={Mark Chen and Jerry Tworek and Heewoo Jun and Qiming Yuan and Henrique Ponde de Oliveira Pinto and Jared Kaplan and Harri Edwards and Yuri Burda and Nicholas Joseph and Greg Brockman and Alex Ray and Raul Puri and Gretchen Krueger and Michael Petrov and Heidy Khlaaf and Girish Sastry and Pamela Mishkin and Brooke Chan and Scott Gray and Nick Ryder and Mikhail Pavlov and Alethea Power and Lukasz Kaiser and Mohammad Bavarian and Clemens Winter and Philippe Tillet and Felipe Petroski Such and Dave Cummings and Matthias Plappert and Fotios Chantzis and Elizabeth Barnes and Ariel Herbert-Voss and William Hebgen Guss and Alex Nichol and Alex Paino and Nikolas Tezak and Jie Tang and Igor Babuschkin and Suchir Balaji and Shantanu Jain and William Saunders and Christopher Hesse and Andrew N. Carr and Jan Leike and Josh Achiam and Vedant Misra and Evan Morikawa and Alec Radford and Matthew Knight and Miles Brundage and Mira Murati and Katie Mayer and Peter Welinder and Bob McGrew and Dario Amodei and Sam McCandlish and Ilya Sutskever and Wojciech Zaremba},
year={2021},
eprint={2107.03374},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
### Contributions
Thanks to [@lvwerra](https://github.com/lvwerra) for adding this dataset. |
microsoft/ms_marco | microsoft | 2024-01-04T16:01:29Z | 8,083 | 159 | [
"language:en",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:1611.09268",
"region:us"
] | [] | 2022-03-02T23:29:22Z | null | ---
language:
- en
paperswithcode_id: ms-marco
pretty_name: Microsoft Machine Reading Comprehension Dataset
dataset_info:
- config_name: v1.1
features:
- name: answers
sequence: string
- name: passages
sequence:
- name: is_selected
dtype: int32
- name: passage_text
dtype: string
- name: url
dtype: string
- name: query
dtype: string
- name: query_id
dtype: int32
- name: query_type
dtype: string
- name: wellFormedAnswers
sequence: string
splits:
- name: validation
num_bytes: 42665198
num_examples: 10047
- name: train
num_bytes: 350516260
num_examples: 82326
- name: test
num_bytes: 40977580
num_examples: 9650
download_size: 217328153
dataset_size: 434159038
- config_name: v2.1
features:
- name: answers
sequence: string
- name: passages
sequence:
- name: is_selected
dtype: int32
- name: passage_text
dtype: string
- name: url
dtype: string
- name: query
dtype: string
- name: query_id
dtype: int32
- name: query_type
dtype: string
- name: wellFormedAnswers
sequence: string
splits:
- name: validation
num_bytes: 413765365
num_examples: 101093
- name: train
num_bytes: 3462807709
num_examples: 808731
- name: test
num_bytes: 405691932
num_examples: 101092
download_size: 2105722550
dataset_size: 4282265006
configs:
- config_name: v1.1
data_files:
- split: validation
path: v1.1/validation-*
- split: train
path: v1.1/train-*
- split: test
path: v1.1/test-*
- config_name: v2.1
data_files:
- split: validation
path: v2.1/validation-*
- split: train
path: v2.1/train-*
- split: test
path: v2.1/test-*
---
# Dataset Card for "ms_marco"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://microsoft.github.io/msmarco/](https://microsoft.github.io/msmarco/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.55 GB
- **Size of the generated dataset:** 4.72 GB
- **Total amount of disk used:** 6.28 GB
### Dataset Summary
Starting with a paper released at NIPS 2016, MS MARCO is a collection of datasets focused on deep learning in search.
The first dataset was a question answering dataset featuring 100,000 real Bing questions and a human generated answer.
Since then we released a 1,000,000 question dataset, a natural language generation dataset, a passage ranking dataset,
a keyphrase extraction dataset, a crawling dataset, and a conversational search dataset.
There have been 277 submissions: 20 keyphrase extraction submissions, 87 passage ranking submissions, 0 document ranking
submissions, 73 QnA V2 submissions, 82 NLGEN submissions, and 15 QnA V1 submissions.
This data comes in three tasks/forms: the original QnA dataset (v1.1), Question Answering (v2.1), and Natural Language Generation (v2.1).
The original question answering dataset featured 100,000 examples and was released in 2016. The leaderboard is now closed, but the data is available below.
The current competitive tasks are Question Answering and Natural Language Generation. Question Answering features over 1,000,000 queries and
is much like the original QnA dataset but bigger and with higher quality. The Natural Language Generation dataset features 180,000 examples and
builds upon the QnA dataset to deliver answers that could be spoken by a smart speaker.
Two versions are available as configurations: v1.1 and v2.1.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### v1.1
- **Size of downloaded dataset files:** 168.69 MB
- **Size of the generated dataset:** 434.61 MB
- **Total amount of disk used:** 603.31 MB
An example of 'train' looks as follows.
```
```
#### v2.1
- **Size of downloaded dataset files:** 1.38 GB
- **Size of the generated dataset:** 4.29 GB
- **Total amount of disk used:** 5.67 GB
An example of 'validation' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### v1.1
- `answers`: a `list` of `string` features.
- `passages`: a dictionary feature containing:
- `is_selected`: a `int32` feature.
- `passage_text`: a `string` feature.
- `url`: a `string` feature.
- `query`: a `string` feature.
- `query_id`: a `int32` feature.
- `query_type`: a `string` feature.
- `wellFormedAnswers`: a `list` of `string` features.
#### v2.1
- `answers`: a `list` of `string` features.
- `passages`: a dictionary feature containing:
- `is_selected`: a `int32` feature.
- `passage_text`: a `string` feature.
- `url`: a `string` feature.
- `query`: a `string` feature.
- `query_id`: a `int32` feature.
- `query_type`: a `string` feature.
- `wellFormedAnswers`: a `list` of `string` features.
### Data Splits
|name|train |validation| test |
|----|-----:|---------:|-----:|
|v1.1| 82326| 10047| 9650|
|v2.1|808731| 101093|101092|
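A minimal loading sketch using the config names above (repository id from this card; field access follows the schema listed in Data Fields):

```python
from datasets import load_dataset

marco = load_dataset("microsoft/ms_marco", "v2.1")
ex = marco["train"][0]
# passages holds aligned lists; is_selected marks passages used for the answer
selected = [
    text
    for text, flag in zip(ex["passages"]["passage_text"], ex["passages"]["is_selected"])
    if flag == 1
]
print(ex["query"], "->", len(selected), "selected passage(s)")
```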
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{DBLP:journals/corr/NguyenRSGTMD16,
author = {Tri Nguyen and
Mir Rosenberg and
Xia Song and
Jianfeng Gao and
Saurabh Tiwary and
Rangan Majumder and
Li Deng},
title = {{MS} {MARCO:} {A} Human Generated MAchine Reading COmprehension Dataset},
journal = {CoRR},
volume = {abs/1611.09268},
year = {2016},
url = {http://arxiv.org/abs/1611.09268},
archivePrefix = {arXiv},
eprint = {1611.09268},
timestamp = {Mon, 13 Aug 2018 16:49:03 +0200},
biburl = {https://dblp.org/rec/journals/corr/NguyenRSGTMD16.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun) for adding this dataset. |
tdavidson/hate_speech_offensive | tdavidson | 2024-01-04T12:06:17Z | 1,339 | 31 | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1703.04009",
"region:us",
"hate-speech-detection"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
paperswithcode_id: hate-speech-and-offensive-language
pretty_name: Hate Speech and Offensive Language
tags:
- hate-speech-detection
dataset_info:
features:
- name: count
dtype: int64
- name: hate_speech_count
dtype: int64
- name: offensive_language_count
dtype: int64
- name: neither_count
dtype: int64
- name: class
dtype:
class_label:
names:
'0': hate speech
'1': offensive language
'2': neither
- name: tweet
dtype: string
splits:
- name: train
num_bytes: 3207814
num_examples: 24783
download_size: 1627672
dataset_size: 3207814
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
train-eval-index:
- config: default
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
col_mapping:
tweet: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for Hate Speech and Offensive Language
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/t-davidson/hate-speech-and-offensive-language
- **Repository:** https://github.com/t-davidson/hate-speech-and-offensive-language
- **Paper:** https://arxiv.org/abs/1703.04009
- **Leaderboard:**
- **Point of Contact:** https://docs.google.com/forms/d/e/1FAIpQLSdrPNlfVBlqxun2tivzAtsZaOoPC5YYMocn-xscCgeRakLXHg/viewform?usp=pp_url&entry.1506871634&entry.147453066&entry.1390333885&entry.516829772
### Dataset Summary
An annotated dataset for hate speech and offensive language detection on tweets.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English (`en`)
## Dataset Structure
### Data Instances
```
{
"count": 3,
 "hate_speech_count": 0,
 "offensive_language_count": 0,
 "neither_count": 3,
 "class": 2, # "neither"
 "tweet": "!!! RT @mayasolovely: As a woman you shouldn't complain about cleaning up your house. & as a man you should always take the trash out..."
}
```
### Data Fields
```
count: (Integer) number of CrowdFlower users who coded each tweet (minimum is 3; more users coded a tweet when judgments were determined to be unreliable),
hate_speech_count: (Integer) number of users who judged the tweet to be hate speech,
offensive_language_count: (Integer) number of users who judged the tweet to be offensive,
neither_count: (Integer) number of users who judged the tweet to be neither offensive nor hate speech,
class: (Class Label) majority label across users (0: 'hate speech', 1: 'offensive language' or 2: 'neither'),
tweet: (string) the tweet text
```
### Data Splits
This dataset is not split; only the train split is available.
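A minimal sketch of loading the data and resolving the class label names (repository id from this card):

```python
from datasets import load_dataset

ds = load_dataset("tdavidson/hate_speech_offensive", split="train")
label_names = ds.features["class"].names  # ['hate speech', 'offensive language', 'neither']
ex = ds[0]
print(label_names[ex["class"]], "|", ex["tweet"])
```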
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
Usernames are not anonymized in the dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
MIT License
### Citation Information
@inproceedings{hateoffensive,
title = {Automated Hate Speech Detection and the Problem of Offensive Language},
author = {Davidson, Thomas and Warmsley, Dana and Macy, Michael and Weber, Ingmar},
booktitle = {Proceedings of the 11th International AAAI Conference on Web and Social Media},
series = {ICWSM '17},
year = {2017},
location = {Montreal, Canada},
pages = {512-515}
}
### Contributions
Thanks to [@hugoabonizio](https://github.com/hugoabonizio) for adding this dataset. |
allenai/common_gen | allenai | 2024-01-04T07:34:57Z | 1,047 | 26 | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:found",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1911.03705",
"region:us",
"concepts-to-text"
] | [
"text2text-generation"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
- crowdsourced
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: commongen
pretty_name: CommonGen
tags:
- concepts-to-text
dataset_info:
features:
- name: concept_set_idx
dtype: int32
- name: concepts
sequence: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 6724166
num_examples: 67389
- name: validation
num_bytes: 408740
num_examples: 4018
- name: test
num_bytes: 77518
num_examples: 1497
download_size: 3434865
dataset_size: 7210424
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Dataset Card for "common_gen"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://inklab.usc.edu/CommonGen/index.html](https://inklab.usc.edu/CommonGen/index.html)
- **Repository:** https://github.com/INK-USC/CommonGen
- **Paper:** [CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning](https://arxiv.org/abs/1911.03705)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.85 MB
- **Size of the generated dataset:** 7.21 MB
- **Total amount of disk used:** 9.06 MB
### Dataset Summary
CommonGen is a constrained text generation task, associated with a benchmark dataset,
that explicitly tests machines for generative commonsense reasoning. Given
a set of common concepts, the task is to generate a coherent sentence describing an
everyday scenario using these concepts.
CommonGen is challenging because it inherently requires 1) relational reasoning using
background commonsense knowledge, and 2) compositional generalization ability to work
on unseen concept combinations. Our dataset, constructed through a combination of
crowd-sourcing from AMT and existing caption corpora, consists of 30k concept-sets and
50k sentences in total.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 1.85 MB
- **Size of the generated dataset:** 7.21 MB
- **Total amount of disk used:** 9.06 MB
An example of 'train' looks as follows.
```
{
"concept_set_idx": 0,
"concepts": ["ski", "mountain", "skier"],
"target": "Three skiers are skiing on a snowy mountain."
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `concept_set_idx`: a `int32` feature.
- `concepts`: a `list` of `string` features.
- `target`: a `string` feature.
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default|67389| 4018|1497|
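A minimal sketch of turning a record into a generation prompt (the prompt template is an illustrative choice, not part of the dataset):

```python
from datasets import load_dataset

cg = load_dataset("allenai/common_gen", split="train")
ex = cg[0]
prompt = "Concepts: " + ", ".join(ex["concepts"]) + "\nSentence:"
print(prompt)        # model input
print(ex["target"])  # reference sentence
```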
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset is licensed under [MIT License](https://github.com/INK-USC/CommonGen/blob/master/LICENSE).
### Citation Information
```bib
@inproceedings{lin-etal-2020-commongen,
title = "{C}ommon{G}en: A Constrained Text Generation Challenge for Generative Commonsense Reasoning",
author = "Lin, Bill Yuchen and
Zhou, Wangchunshu and
Shen, Ming and
Zhou, Pei and
Bhagavatula, Chandra and
Choi, Yejin and
Ren, Xiang",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.165",
doi = "10.18653/v1/2020.findings-emnlp.165",
pages = "1823--1840"
}
```
### Contributions
Thanks to [@JetRunner](https://github.com/JetRunner), [@yuchenlin](https://github.com/yuchenlin), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
openbmb/UltraFeedback | openbmb | 2023-12-29T14:11:19Z | 2,366 | 360 | [
"task_categories:text-generation",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2310.01377",
"region:us"
] | [
"text-generation"
] | 2023-09-23T15:41:04Z | null | ---
license: mit
task_categories:
- text-generation
language:
- en
size_categories:
- 100K<n<1M
---
## Introduction
- [GitHub Repo](https://github.com/thunlp/UltraFeedback)
- [UltraRM-13b](https://huggingface.co/openbmb/UltraRM-13b)
- [UltraCM-13b](https://huggingface.co/openbmb/UltraCM-13b)
UltraFeedback is a **large-scale, fine-grained, diverse preference dataset**, used for training powerful reward models and critic models. We collect about 64k prompts from diverse resources (including UltraChat, ShareGPT, Evol-Instruct, TruthfulQA, FalseQA, and FLAN). We then use these prompts to query multiple LLMs (see the Model Sampling section below for the model list) and generate 4 different responses for each prompt, resulting in a total of 256k samples.
To collect high-quality preference and textual feedback, we design a fine-grained annotation instruction, which contains 4 different aspects, namely **instruction-following**, **truthfulness**, **honesty** and **helpfulness**. We then ask GPT-4 to annotate the collected samples based on the instructions.
## Features
- 🆚 **Scale**: UltraFeedback consists of 64k prompts, 256k responses and 380k high-quality feedback. RLHF researchers could further construct around 1 million comparison pairs to train their reward models.
- 🌈 **Diversity**: As a preference dataset, diversity is the core requirement for UltraFeedback. We collect prompts from various sources and query a diverse set of state-of-the-art open-source and proprietary models. To further increase diversity, we intentionally select different base models, i.e., LLaMA, Falcon, StarChat, MPT, GPT and Bard. We also apply various principles to encourage models to complete instructions in different ways.
- 🤯 **High-density**: UltraFeedback provides both numerical and textual feedback. Moreover, we wrote fine-grained annotation documents to help rate responses in all dimensions.
## Dataset Construction
### Instruction Sampling
We sample 63,967 instructions from 6 publicly available, high-quality datasets. We include all instructions from TruthfulQA and FalseQA, randomly sampling 10k instructions from Evol-Instruct, 10k from UltraChat, and 20k from ShareGPT. For FLAN, we adopt a stratified sampling strategy, randomly sampling 3k instructions from the "Co" subset and sampling 10 instructions per task for the other three subsets, excluding those with overly long instructions.
```json
{
"evol_instruct": 10000,
"false_qa": 2339,
"flan": 20939,
"sharegpt": 19949,
"truthful_qa": 811,
"ultrachat": 9929
}
```
### Model Sampling
To prevent the reward model from overfitting to certain text styles or capturing spurious correlations between text style and rewards, we select different base models of all levels, with varying sizes, architectures and training data, to complete the instructions. We set up a pool of 17 models:
- Commercial Models: GPT-4, GPT-3.5 Turbo, Bard
- LLaMA family:
1. LLaMA-2-7B-chat, LLaMA-2-13B-chat, LLaMA-2-70B-chat
2. UltraLM-13B, UltraLM-65B
3. WizardLM-7B, WizardLM-13B, WizardLM-70B
4. Vicuna-33B
5. Alpaca-7B
- Non-LLaMA series:
1. Falcon-40B-instruct
2. MPT-30B-chat
3. StarChat-Beta
4. Pythia-12B
### Principle Sampling
Following [1] and [2], we define a set of principles to explicitly align model behaviors from different aspects. We set up a pool of 5 principles: Helpfulness, Truthfulness, Honesty, Verbalized Calibration and Harmlessness. For each instruction, we randomly sample 4 models to complete the instruction, and for each completion, we sample a principle and add it to the system prompt to align the model behavior. Since different datasets have different characteristics, not all datasets are suitable for all principles. The following table shows the principle distribution for each dataset.
| Dataset       | Principle                                                    |
| ------------- | ------------------------------------------------------------ |
| Evol Instruct | 100% Helpful |
| FalseQA       | 100% Truthful                                                |
| Flan | 60% Helpful, 20% Truthful, 20% Verbalized Calibration |
| ShareGPT | 60% Helpful, 20% Truthful, 18% Honesty, 2% Verbalized Calibration |
| TruthfulQA | 100% Truthful |
| UltraChat | 60% Helpful, 20% Truthful, 18% Honesty, 2% Verbalized Calibration |
[1] Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision. Sun et al.
[2] Orca: Progressive Learning from Complex Explanation Traces of GPT-4. Mukherjee et al.
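A sketch of the per-completion principle sampling described above, using the ShareGPT row of the table as illustrative weights:

```python
import random

SHAREGPT_WEIGHTS = {
    "helpfulness": 0.60,
    "truthfulness": 0.20,
    "honesty": 0.18,
    "verbalized_calibration": 0.02,
}

def sample_principle(weights=SHAREGPT_WEIGHTS):
    """Draw one principle according to the dataset-specific distribution."""
    names = list(weights)
    return random.choices(names, weights=[weights[n] for n in names], k=1)[0]
```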
## Dataset Format
```JSONC
{
"source": "sharegpt", // the dataset where the instruction comes from
"id": "D2TC9GP_0", // the id of the instruction in the source dataset
"instruction": "I am going to cairo in June of this year, thinking of 4-5 days, what are the best things I could do? Going with family of 5, granddad, grandmom, sister, mom and me. No days can have too much walking but sister, me and mom like adventure sports",
"models": [ // four models that are randomly sampled from the model pool to complete the instruction
"falcon-40b-instruct",
"gpt-4",
"starchat",
"wizardlm-7b"
],
"completions": [ // four completions corresponding to the four models
{ // completion 1
"model": "falcon-40b-instruct",
"principle": "helpfulness", // priciple that used to align model behavior
"custom_system_prompt": "As an AI assistant, ensure your response offers the perfect blend of accuracy, positivity, and intrigue. Strive to be educational while keeping the user engaged.", // the system prompt corresponding to the principle, sampled from a pool of GPT-4 generated system prompts
"response": "Cairo is a city that has something for everyone. Some of the best things you can do are visiting the Pyramids of Giza and the museums housing the ancient Egyptian artifacts, taking a camel ride in the desert, visiting the medieval neighborhoods such as Islamic Cairo or Khan El-Khalili, trying local food in traditional restaurants like Ful medames, or venturing into one of the city's many souks to shop for authentic goods. For adventure sports, you can go whitewater rafting on the Nile, hot air ballooning over the desert, or diving in the Red Sea. Just be mindful of the scorching heat and ensure everyone wears appropriate clothing and drinks enough water. Above all, have fun and enjoy your trip!",
"annotations": {
"instruction_following": [
{
"Rating": "2",
"Rationale": "The text only partially addresses the task goal by providing a general list of activities in Cairo without considering the specific needs of the family members, such as limited walking ability and adventure preferences."
}
],
"honesty": [
{
"Rating": "3",
"Rationale": "The response is confident but contains subtle mistakes. It suggests whitewater rafting on the Nile and diving in the Red Sea, which are not feasible for a 4-5 day trip to Cairo. The rest of the suggestions are accurate and relevant to the family's preferences."
}
],
"truthfulness": [
{
"Type": [
"1",
"2"
],
"Rationale": "The text suggests whitewater rafting on the Nile and diving in the Red Sea, which are not available in Cairo. These activities are factually incorrect and not aligned with the instruction.",
"Rating": "3",
"Rationale For Rating": "The text provides some truthful information, but the inclusion of whitewater rafting and diving in the Red Sea causes partial hallucination and misunderstanding."
}
],
"helpfulness": [
{
"Type": [
"1",
"2"
],
"Rationale": "The response is clear and relevant, providing useful information, but it does not address the limited walking ability of the grandparents or the adventure sports preferences of the sister, mom, and the user.",
"Rating": "3",
"Rationale For Rating": "The text is correct and provides useful information, but it does not fully address the specific needs and preferences of the family."
}
]
}
},
{ // completion 2
...
},
{ // completion 3
...
},
{ // completion 4
...
}
]
},
```
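A common downstream use is binarizing each record into a preference pair; a minimal sketch (the aspect-averaging scheme is an illustrative choice, and ratings are assumed to be numeric strings as in the example above):

```python
ASPECTS = ["instruction_following", "honesty", "truthfulness", "helpfulness"]

def to_preference_pair(example):
    """Pick the best/worst completions by mean aspect rating."""
    def mean_rating(completion):
        return sum(
            int(completion["annotations"][a][0]["Rating"]) for a in ASPECTS
        ) / len(ASPECTS)

    ranked = sorted(example["completions"], key=mean_rating, reverse=True)
    return {
        "prompt": example["instruction"],
        "chosen": ranked[0]["response"],
        "rejected": ranked[-1]["response"],
    }
```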
## Dataset Example
Here we present an example from UltraFeedback:
> **User:** I am going to cairo in June of this year, thinking of 4-5 days, what are the best things I could do? Going with family of 5, granddad, grandmom, sister, mom and me. No days can have too much walking but sister, me and mom like adventure sports
> **Assistant 1 (falcon-40b-instruct):** Cairo is a city that has something for everyone. Some of the best things you can do are visiting the Pyramids of Giza and the museums housing the ancient Egyptian artifacts, taking a camel ride in the desert, visiting the medieval neighborhoods such as Islamic Cairo or Khan El-Khalili, trying local food in traditional restaurants like Ful medames, or venturing into one of the city's many souks to shop for authentic goods. For adventure sports, you can go whitewater rafting on the Nile, hot air ballooning over the desert, or diving in the Red Sea. Just be mindful of the scorching heat and ensure everyone wears appropriate clothing and drinks enough water. Above all, have fun and enjoy your trip!
> **Assistant 2 (gpt-4):** In Cairo for 4-5 days, the following activities cater to your family's preferences, with consideration for the limited walking ability of your grandparents:\n\n1. Visit the Pyramids of Giza and the Great Sphinx: A must-see in Cairo, it's an incredible experience for all ages. You can explore the premises using a hired camel, horse, or a horse-drawn carriage which accommodate the needs of your grandparents while keeping the experience enjoyable.\n\n2. Egyptian Museum: This museum in Tahrir Square contains over 120,000 artifacts of Egypt's history. A wheelchair can be arranged for the grandparents to comfortably explore the museum at their own pace.\n\n3. Khan el-Khalili Bazaar: A famous market in Cairo, suitable for souvenir shopping and experiencing the local culture. There are many cafes and restaurants your family can rest at during the visit.\n\n4. Nile River Cruise: Enjoy a dinner cruise with your family on the famous Nile River. It includes entertainment and a beautiful view of the city by night. Boats often have ramps and accessible facilities for limited mobility passangers.\n\n5. Felucca Ride: During the day, go for a more traditional sailboat ride called a Felucca. This can be done on an off-day or as a relaxation activity after an active day.\n\nFor adventure activities, you and your mom and sister can explore these options:\n\n1. Sandboarding: Head to the desert for a thrilling sandboarding experience on the dunes just outside Cairo.\n\n2. Hot Air Balloon Ride: Experience Egypt from above with a hot air balloon ride in Luxor. It would require a day trip from Cairo but is an incredible experience.\n\n3. Quad Biking: Go for a guided quad biking adventure in the desert, enjoying panoramic views of the iconic Pyramids.\n\nSpace the activities according to your family's preferences and ensure rest periods in the afternoon. Consider hiring a private guide or joining a wheelchair-accessible tour since you're concerned about your grandparents' walking ability.
> **Assistant 3 (starchat):** Here are some of the top things to do in Cairo, Egypt, with a family of 5:\n\n- Giza Pyramids: The Giza Pyramids, located just outside Cairo, are a must-see for anyone visiting the city. The Great Pyramid of Khufu, the Great Pyramid of Khafre, and the Great Pyramid of Menkaure make up one of the Seven Wonders of the Ancient World and are an impressive sight to behold.\n\n- Egyptian Museum: The Egyptian Museum, located in the heart of Cairo, is home to one of the world's largest and most impressive collections of ancient Egyptian art and artifacts. Visitors can explore the museum's extensive collection, which includes items such as mummies, sarcophagi, statues, paintings, and more.\n\n- Citadel of Salah El-Din: Located on a hill overlooking Cairo, the Citadel of Salah El-Din offers stunning views of the city and is an excellent place to get a sense of Cairo's rich history. Visitors can explore the various museums and mosques within the citadel, and also take a stroll through the beautiful gardens.\n\n- Khan El-Khalili Bazaar: Khan El-Khalili Bazaar is one of Cairo's most popular attractions, a bustling market where visitors can find everything from spices and pottery to clothes and jewelry. It's a great place to do some shopping and explore the local culture.\n\n- Alexandria: Just an hour's drive from Cairo, Alexandria is a historic city with a rich legacy. Visitors can explore the ancient Roman Amphitheater, the Qaitbay Citadel, and the Catacombs of Kom el-Shoqafa. They can also take a boat trip to the island of Montazah, where they can see the famous Graffiti Wall and other street art.\n\nAll of these attractions are family-friendly, with plenty of things to see and do. The Giza Pyramids in particular are very accessible and well-maintained, making them suitable for all ages.
> **Assistant 4 (wizardlm-7b):** If you are looking for adventure sports, you can explore the pyramids of Giza and go hot air ballooning over the desert. Another option could be to take a camel ride in the desert or go kayaking in the Nile River. For family-friendly activities, you can visit the Egyptian Museum, Cairo Tower, and Khan el-Khalili Bazaar. Additionally, you can take a day trip to the historical city of Alexandria and visit the Pompey's Pillar and the Roman Amphitheater. Another option could be to go on a guided tour of the ancient city of Petra in Jordan, which is a short flight from Cairo.
## Limitations
- Although GPT-4 can provide well-aligned annotations and textual feedback for most samples, we must note that GPT-4 also makes mistakes and provides inaccurate feedback.
## Citation
```bibtex
@misc{cui2023ultrafeedback,
title={UltraFeedback: Boosting Language Models with High-quality Feedback},
author={Ganqu Cui and Lifan Yuan and Ning Ding and Guanming Yao and Wei Zhu and Yuan Ni and Guotong Xie and Zhiyuan Liu and Maosong Sun},
year={2023},
eprint={2310.01377},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
jat-project/jat-dataset-tokenized | jat-project | 2023-12-22T22:17:42Z | 661,795 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-12-16T10:10:31Z | null | ---
dataset_info:
- config_name: atari-alien
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 51686398456
num_examples: 14134
- name: test
num_bytes: 5412188320
num_examples: 1480
download_size: 847071867
dataset_size: 57098586776
- config_name: atari-amidar
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 52362921996
num_examples: 14319
- name: test
num_bytes: 4808802460
num_examples: 1315
download_size: 645217608
dataset_size: 57171724456
- config_name: atari-assault
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 52757865468
num_examples: 14427
- name: test
num_bytes: 4421172756
num_examples: 1209
download_size: 253415283
dataset_size: 57179038224
- config_name: atari-asterix
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 52863915104
num_examples: 14456
- name: test
num_bytes: 5137922020
num_examples: 1405
download_size: 293282697
dataset_size: 58001837124
- config_name: atari-asteroids
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 52468971632
num_examples: 14348
- name: test
num_bytes: 3605687624
num_examples: 986
download_size: 316908651
dataset_size: 56074659256
- config_name: atari-atlantis
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 52384863300
num_examples: 14325
- name: test
num_bytes: 3975032908
num_examples: 1087
download_size: 274032418
dataset_size: 56359896208
- config_name: atari-bankheist
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 51807075628
num_examples: 14167
- name: test
num_bytes: 5836386864
num_examples: 1596
download_size: 879900687
dataset_size: 57643462492
- config_name: atari-battlezone
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 51126895204
num_examples: 13981
- name: test
num_bytes: 6092368744
num_examples: 1666
download_size: 530266996
dataset_size: 57219263948
- config_name: atari-beamrider
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 49155834728
num_examples: 13442
- name: test
num_bytes: 7880585020
num_examples: 2155
download_size: 427025312
dataset_size: 57036419748
- config_name: atari-berzerk
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 49492268056
num_examples: 13534
- name: test
num_bytes: 6172820192
num_examples: 1688
download_size: 351445377
dataset_size: 55665088248
- config_name: atari-bowling
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 51598633240
num_examples: 14110
- name: test
num_bytes: 5898553892
num_examples: 1613
download_size: 163624131
dataset_size: 57497187132
- config_name: atari-boxing
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 53178407128
num_examples: 14542
- name: test
num_bytes: 5883926356
num_examples: 1609
download_size: 662704435
dataset_size: 59062333484
- config_name: atari-breakout
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 49272855016
num_examples: 13474
- name: test
num_bytes: 6611646272
num_examples: 1808
download_size: 265049647
dataset_size: 55884501288
- config_name: atari-centipede
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 51913125264
num_examples: 14196
- name: test
num_bytes: 6026544832
num_examples: 1648
download_size: 269104472
dataset_size: 57939670096
- config_name: atari-choppercommand
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 48991274948
num_examples: 13397
- name: test
num_bytes: 7156521988
num_examples: 1957
download_size: 425086559
dataset_size: 56147796936
- config_name: atari-crazyclimber
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 51291454984
num_examples: 14026
- name: test
num_bytes: 5712052808
num_examples: 1562
download_size: 458314909
dataset_size: 57003507792
- config_name: atari-defender
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 49382561536
num_examples: 13504
- name: test
num_bytes: 6172820192
num_examples: 1688
download_size: 217534779
dataset_size: 55555381728
- config_name: atari-demonattack
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 49364277116
num_examples: 13499
- name: test
num_bytes: 6172820192
num_examples: 1688
download_size: 209141226
dataset_size: 55537097308
- config_name: atari-doubledunk
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 5799818024
num_examples: 1586
- name: train
num_bytes: 52264186128
num_examples: 14292
download_size: 585265286
dataset_size: 58064004152
- config_name: atari-enduro
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 48490281840
num_examples: 13260
- name: test
num_bytes: 6172820192
num_examples: 1688
download_size: 696314069
dataset_size: 54663102032
- config_name: atari-fishingderby
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 51463328532
num_examples: 14073
- name: test
num_bytes: 6085054976
num_examples: 1664
download_size: 817608846
dataset_size: 57548383508
- config_name: atari-freeway
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 51254886144
num_examples: 14016
- name: test
num_bytes: 5851014400
num_examples: 1600
download_size: 684669809
dataset_size: 57105900544
- config_name: atari-frostbite
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 51470642300
num_examples: 14075
- name: test
num_bytes: 5898553892
num_examples: 1613
download_size: 629892834
dataset_size: 57369196192
- config_name: atari-gopher
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 48062426412
num_examples: 13143
- name: test
num_bytes: 6436115840
num_examples: 1760
download_size: 278315347
dataset_size: 54498542252
- config_name: atari-gravitar
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 52677414020
num_examples: 14405
- name: test
num_bytes: 5927808964
num_examples: 1621
download_size: 297931288
dataset_size: 58605222984
- config_name: atari-hero
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 51357278896
num_examples: 14044
- name: test
num_bytes: 5891240124
num_examples: 1611
download_size: 467961084
dataset_size: 57248519020
- config_name: atari-icehockey
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 51258543028
num_examples: 14017
- name: test
num_bytes: 5876612588
num_examples: 1607
download_size: 369055326
dataset_size: 57135155616
- config_name: atari-jamesbond
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 46361975352
num_examples: 12678
- name: test
num_bytes: 10352638604
num_examples: 2831
download_size: 485679287
dataset_size: 56714613956
- config_name: atari-kangaroo
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 52103283232
num_examples: 14248
- name: test
num_bytes: 5638915128
num_examples: 1542
download_size: 427266047
dataset_size: 57742198360
- config_name: atari-krull
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 51942380336
num_examples: 14204
- name: test
num_bytes: 5807131792
num_examples: 1588
download_size: 1439632028
dataset_size: 57749512128
- config_name: atari-kungfumaster
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 51306082520
num_examples: 14030
- name: test
num_bytes: 6136251352
num_examples: 1678
download_size: 689596673
dataset_size: 57442333872
- config_name: atari-montezumarevenge
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 51997233596
num_examples: 14219
- name: test
num_bytes: 5924152080
num_examples: 1620
download_size: 739361910
dataset_size: 57921385676
- config_name: atari-mspacman
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 51635202080
num_examples: 14120
- name: test
num_bytes: 5664513316
num_examples: 1549
download_size: 867194250
dataset_size: 57299715396
- config_name: atari-namethisgame
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 49642200300
num_examples: 13575
- name: test
num_bytes: 6874941920
num_examples: 1880
download_size: 520921217
dataset_size: 56517142220
- config_name: atari-phoenix
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 49510552476
num_examples: 13539
- name: test
num_bytes: 6172820192
num_examples: 1688
download_size: 241965818
dataset_size: 55683372668
- config_name: atari-pitfall
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 52245901708
num_examples: 14287
- name: test
num_bytes: 4812459344
num_examples: 1316
download_size: 385040106
dataset_size: 57058361052
- config_name: atari-pong
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 5894897008
num_examples: 1612
- name: train
num_bytes: 51748565484
num_examples: 14151
download_size: 128206463
dataset_size: 57643462492
- config_name: atari-privateeye
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 5902210776
num_examples: 1614
- name: train
num_bytes: 51580348820
num_examples: 14105
download_size: 762572093
dataset_size: 57482559596
- config_name: atari-qbert
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 5715709692
num_examples: 1563
- name: train
num_bytes: 51291454984
num_examples: 14026
download_size: 697728392
dataset_size: 57007164676
- config_name: atari-riverraid
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 5437786508
num_examples: 1487
- name: train
num_bytes: 52202019100
num_examples: 14275
download_size: 685859297
dataset_size: 57639805608
- config_name: atari-roadrunner
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 5774219836
num_examples: 1579
- name: train
num_bytes: 51660800268
num_examples: 14127
download_size: 463497648
dataset_size: 57435020104
- config_name: atari-robotank
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 5090382528
num_examples: 1392
- name: train
num_bytes: 51485269836
num_examples: 14079
download_size: 471559799
dataset_size: 56575652364
- config_name: atari-seaquest
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 5730337228
num_examples: 1567
- name: train
num_bytes: 51551093748
num_examples: 14097
download_size: 328551402
dataset_size: 57281430976
- config_name: atari-skiing
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 53785449872
num_examples: 14708
- name: test
num_bytes: 6000946644
num_examples: 1641
download_size: 567502031
dataset_size: 59786396516
- config_name: atari-solaris
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 51924095916
num_examples: 14199
- name: test
num_bytes: 5233001004
num_examples: 1431
download_size: 492333967
dataset_size: 57157096920
- config_name: atari-spaceinvaders
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 46266896368
num_examples: 12652
- name: test
num_bytes: 9548124124
num_examples: 2611
download_size: 300389865
dataset_size: 55815020492
- config_name: atari-stargunner
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 50545450648
num_examples: 13822
- name: test
num_bytes: 5865641936
num_examples: 1604
download_size: 203075318
dataset_size: 56411092584
- config_name: atari-surround
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 50611274560
num_examples: 13840
- name: test
num_bytes: 6381262580
num_examples: 1745
download_size: 286861481
dataset_size: 56992537140
- config_name: atari-tennis
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 51423102808
num_examples: 14062
- name: test
num_bytes: 5675483968
num_examples: 1552
download_size: 407941157
dataset_size: 57098586776
- config_name: atari-timepilot
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 50816060064
num_examples: 13896
- name: test
num_bytes: 5759592300
num_examples: 1575
download_size: 285156447
dataset_size: 56575652364
- config_name: atari-tutankham
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 47981974964
num_examples: 13121
- name: test
num_bytes: 8140223784
num_examples: 2226
download_size: 382912419
dataset_size: 56122198748
- config_name: atari-upndown
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 49382561536
num_examples: 13504
- name: test
num_bytes: 6172820192
num_examples: 1688
download_size: 1690613769
dataset_size: 55555381728
- config_name: atari-venture
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 5313452452
num_examples: 1453
- name: train
num_bytes: 52147165840
num_examples: 14260
download_size: 509488474
dataset_size: 57460618292
- config_name: atari-videopinball
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 1996658664
num_examples: 546
- name: train
num_bytes: 52191048448
num_examples: 14272
download_size: 605138140
dataset_size: 54187707112
- config_name: atari-wizardofwor
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 6033858600
num_examples: 1650
- name: train
num_bytes: 50903825280
num_examples: 13920
download_size: 646859311
dataset_size: 56937683880
- config_name: atari-yarsrevenge
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 6000946644
num_examples: 1641
- name: train
num_bytes: 51126895204
num_examples: 13981
download_size: 1424379144
dataset_size: 57127841848
- config_name: atari-zaxxon
features:
- name: image_observations
sequence:
sequence:
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 6088711860
num_examples: 1665
- name: train
num_bytes: 50585676372
num_examples: 13833
download_size: 452125956
dataset_size: 56674388232
- config_name: babyai-action-obj-door
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 41759340000
num_examples: 95000
- name: test
num_bytes: 2197860000
num_examples: 5000
download_size: 128870282
dataset_size: 43957200000
- config_name: babyai-blocked-unlock-pickup
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 2197860000
num_examples: 5000
- name: train
num_bytes: 41759340000
num_examples: 95000
download_size: 137033255
dataset_size: 43957200000
- config_name: babyai-boss-level
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 2236102764
num_examples: 5087
- name: train
num_bytes: 42505293684
num_examples: 96697
download_size: 344912338
dataset_size: 44741396448
- config_name: babyai-boss-level-no-unlock
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 2217640740
num_examples: 5045
- name: train
num_bytes: 42103964448
num_examples: 95784
download_size: 339304020
dataset_size: 44321605188
- config_name: babyai-find-obj-s5
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 41759340000
num_examples: 95000
- name: test
num_bytes: 2197860000
num_examples: 5000
download_size: 133212544
dataset_size: 43957200000
- config_name: babyai-go-to
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 41759340000
num_examples: 95000
- name: test
num_bytes: 2197860000
num_examples: 5000
download_size: 233927543
dataset_size: 43957200000
- config_name: babyai-go-to-door
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 41759340000
num_examples: 95000
- name: test
num_bytes: 2197860000
num_examples: 5000
download_size: 118992586
dataset_size: 43957200000
- config_name: babyai-go-to-imp-unlock
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 43664005476
num_examples: 99333
- name: test
num_bytes: 891012444
num_examples: 2027
download_size: 366460821
dataset_size: 44555017920
- config_name: babyai-go-to-local
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 41759340000
num_examples: 95000
- name: test
num_bytes: 2197860000
num_examples: 5000
download_size: 130476854
dataset_size: 43957200000
- config_name: babyai-go-to-obj
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 41759340000
num_examples: 95000
- name: test
num_bytes: 2197860000
num_examples: 5000
download_size: 122037932
dataset_size: 43957200000
- config_name: babyai-go-to-obj-door
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 41759340000
num_examples: 95000
- name: test
num_bytes: 2197860000
num_examples: 5000
download_size: 133904822
dataset_size: 43957200000
- config_name: babyai-go-to-red-ball
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 41759340000
num_examples: 95000
- name: test
num_bytes: 2197860000
num_examples: 5000
download_size: 107941553
dataset_size: 43957200000
- config_name: babyai-go-to-red-ball-grey
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 41759340000
num_examples: 95000
- name: test
num_bytes: 2197860000
num_examples: 5000
download_size: 108701381
dataset_size: 43957200000
- config_name: babyai-go-to-red-ball-no-dists
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 41759340000
num_examples: 95000
- name: test
num_bytes: 2197860000
num_examples: 5000
download_size: 100751341
dataset_size: 43957200000
- config_name: babyai-go-to-red-blue-ball
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 41759340000
num_examples: 95000
- name: test
num_bytes: 2197860000
num_examples: 5000
download_size: 109835377
dataset_size: 43957200000
- config_name: babyai-go-to-seq
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 41792307900
num_examples: 95075
- name: test
num_bytes: 2198739144
num_examples: 5002
download_size: 288118166
dataset_size: 43991047044
- config_name: babyai-key-corridor
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 2197860000
num_examples: 5000
- name: train
num_bytes: 41759340000
num_examples: 95000
download_size: 273451937
dataset_size: 43957200000
- config_name: babyai-mini-boss-level
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 2200497432
num_examples: 5006
- name: train
num_bytes: 41821759224
num_examples: 95142
download_size: 167867886
dataset_size: 44022256656
- config_name: babyai-move-two-across-s8n9
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 2197860000
num_examples: 5000
- name: train
num_bytes: 41759340000
num_examples: 95000
download_size: 268471454
dataset_size: 43957200000
- config_name: babyai-one-room-s8
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 2197860000
num_examples: 5000
- name: train
num_bytes: 41759340000
num_examples: 95000
download_size: 101603110
dataset_size: 43957200000
- config_name: babyai-open
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 2197860000
num_examples: 5000
- name: train
num_bytes: 41759340000
num_examples: 95000
download_size: 181194361
dataset_size: 43957200000
- config_name: babyai-open-door
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 2197860000
num_examples: 5000
- name: train
num_bytes: 41759340000
num_examples: 95000
download_size: 127824190
dataset_size: 43957200000
- config_name: babyai-open-doors-order-n4
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 2197860000
num_examples: 5000
- name: train
num_bytes: 41759340000
num_examples: 95000
download_size: 127418529
dataset_size: 43957200000
- config_name: babyai-open-red-door
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 2197860000
num_examples: 5000
- name: train
num_bytes: 41759340000
num_examples: 95000
download_size: 78248393
dataset_size: 43957200000
- config_name: babyai-open-two-doors
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 2197860000
num_examples: 5000
- name: train
num_bytes: 41759340000
num_examples: 95000
download_size: 130542191
dataset_size: 43957200000
- config_name: babyai-pickup
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 2197860000
num_examples: 5000
- name: train
num_bytes: 41759340000
num_examples: 95000
download_size: 236053290
dataset_size: 43957200000
- config_name: babyai-pickup-above
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 2197860000
num_examples: 5000
- name: train
num_bytes: 41759340000
num_examples: 95000
download_size: 163058824
dataset_size: 43957200000
- config_name: babyai-pickup-dist
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 2077856844
num_examples: 4727
- name: train
num_bytes: 39403234080
num_examples: 89640
download_size: 114895484
dataset_size: 41481090924
- config_name: babyai-pickup-loc
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 2197860000
num_examples: 5000
- name: train
num_bytes: 41759340000
num_examples: 95000
download_size: 134221714
dataset_size: 43957200000
- config_name: babyai-put-next
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 43078056000
num_examples: 98000
- name: test
num_bytes: 879144000
num_examples: 2000
download_size: 169889411
dataset_size: 43957200000
- config_name: babyai-put-next-local
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 43078056000
num_examples: 98000
- name: test
num_bytes: 879144000
num_examples: 2000
download_size: 157089711
dataset_size: 43957200000
- config_name: babyai-synth
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 2197860000
num_examples: 5000
- name: train
num_bytes: 41765054436
num_examples: 95013
download_size: 231769022
dataset_size: 43962914436
- config_name: babyai-synth-loc
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 2198739144
num_examples: 5002
- name: train
num_bytes: 41766373152
num_examples: 95016
download_size: 245211619
dataset_size: 43965112296
- config_name: babyai-synth-seq
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 2207530584
num_examples: 5022
- name: train
num_bytes: 41981763432
num_examples: 95506
download_size: 326087180
dataset_size: 44189294016
- config_name: babyai-unblock-pickup
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 2197860000
num_examples: 5000
- name: train
num_bytes: 41765933580
num_examples: 95015
download_size: 241680488
dataset_size: 43963793580
- config_name: babyai-unlock
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 43259159664
num_examples: 98412
- name: test
num_bytes: 883979292
num_examples: 2011
download_size: 328757743
dataset_size: 44143138956
- config_name: babyai-unlock-local
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 2197860000
num_examples: 5000
- name: train
num_bytes: 41759340000
num_examples: 95000
download_size: 116723486
dataset_size: 43957200000
- config_name: babyai-unlock-pickup
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 2197860000
num_examples: 5000
- name: train
num_bytes: 41759340000
num_examples: 95000
download_size: 137214787
dataset_size: 43957200000
- config_name: babyai-unlock-to-unlock
features:
- name: discrete_observations
sequence:
sequence: int64
- name: discrete_actions
sequence: int64
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 43078056000
num_examples: 98000
- name: test
num_bytes: 879144000
num_examples: 2000
download_size: 158735389
dataset_size: 43957200000
- config_name: conceptual-captions
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: pixel_values
sequence:
sequence:
sequence: float32
- name: loss_weight
sequence: float32
splits:
- name: test
num_bytes: 7574631480
num_examples: 12465
- name: train
num_bytes: 303836000000
num_examples: 500000
download_size: 82071298648
dataset_size: 311410631480
- config_name: metaworld-assembly
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 64267084
dataset_size: 851910400
- config_name: metaworld-basketball
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 162412290
dataset_size: 851910400
- config_name: metaworld-bin-picking
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 168127631
dataset_size: 851910400
- config_name: metaworld-box-close
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 174656572
dataset_size: 851910400
- config_name: metaworld-button-press
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 106951062
dataset_size: 851910400
- config_name: metaworld-button-press-topdown
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 117078197
dataset_size: 851910400
- config_name: metaworld-button-press-topdown-wall
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 119641275
dataset_size: 851910400
- config_name: metaworld-button-press-wall
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 112458551
dataset_size: 851910400
- config_name: metaworld-coffee-button
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 112608052
dataset_size: 851910400
- config_name: metaworld-coffee-pull
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 161591807
dataset_size: 851910400
- config_name: metaworld-coffee-push
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 173247466
dataset_size: 851910400
- config_name: metaworld-dial-turn
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 102519630
dataset_size: 851910400
- config_name: metaworld-disassemble
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 72920062
dataset_size: 851910400
- config_name: metaworld-door-close
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 153530521
dataset_size: 851910400
- config_name: metaworld-door-lock
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 123855874
dataset_size: 851910400
- config_name: metaworld-door-open
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 140905068
dataset_size: 851910400
- config_name: metaworld-door-unlock
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 121700706
dataset_size: 851910400
- config_name: metaworld-drawer-close
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 101417660
dataset_size: 851910400
- config_name: metaworld-drawer-open
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 96573298
dataset_size: 851910400
- config_name: metaworld-faucet-close
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 89353472
dataset_size: 851910400
- config_name: metaworld-faucet-open
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 96651789
dataset_size: 851910400
- config_name: metaworld-hammer
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 177539984
dataset_size: 851910400
- config_name: metaworld-hand-insert
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 135665012
dataset_size: 851910400
- config_name: metaworld-handle-press
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 103407785
dataset_size: 851910400
- config_name: metaworld-handle-press-side
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 103403469
dataset_size: 851910400
- config_name: metaworld-handle-pull
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 121440284
dataset_size: 851910400
- config_name: metaworld-handle-pull-side
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 118413651
dataset_size: 851910400
- config_name: metaworld-lever-pull
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 168776851
dataset_size: 851910400
- config_name: metaworld-peg-insert-side
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 153705593
dataset_size: 851910400
- config_name: metaworld-peg-unplug-side
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 171742157
dataset_size: 851910400
- config_name: metaworld-pick-out-of-hole
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 22274303
dataset_size: 851910400
- config_name: metaworld-pick-place
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 176678495
dataset_size: 851910400
- config_name: metaworld-pick-place-wall
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 172257534
dataset_size: 851910400
- config_name: metaworld-plate-slide
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 114432287
dataset_size: 851910400
- config_name: metaworld-plate-slide-back
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 36662627
dataset_size: 851910400
- config_name: metaworld-plate-slide-back-side
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 33762161
dataset_size: 851910400
- config_name: metaworld-plate-slide-side
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 106392923
dataset_size: 851910400
- config_name: metaworld-push
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 166180034
dataset_size: 851910400
- config_name: metaworld-push-back
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 133027374
dataset_size: 851910400
- config_name: metaworld-push-wall
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 158267234
dataset_size: 851910400
- config_name: metaworld-reach
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 168663459
dataset_size: 851910400
- config_name: metaworld-reach-wall
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 171608203
dataset_size: 851910400
- config_name: metaworld-shelf-place
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 142334952
dataset_size: 851910400
- config_name: metaworld-soccer
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 159081606
dataset_size: 851910400
- config_name: metaworld-stick-pull
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 170289154
dataset_size: 851910400
- config_name: metaworld-stick-push
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 166125948
dataset_size: 851910400
- config_name: metaworld-sweep
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 164632354
dataset_size: 851910400
- config_name: metaworld-sweep-into
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 135177252
dataset_size: 851910400
- config_name: metaworld-window-close
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 95044772
dataset_size: 851910400
- config_name: metaworld-window-open
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 774464000
num_examples: 16000
- name: test
num_bytes: 77446400
num_examples: 1600
download_size: 95793720
dataset_size: 851910400
- config_name: mujoco-ant
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 1420167204
num_examples: 35317
- name: test
num_bytes: 158435280
num_examples: 3940
download_size: 1513512326
dataset_size: 1578602484
- config_name: mujoco-doublependulum
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 599126920
num_examples: 35962
- name: test
num_bytes: 66490060
num_examples: 3991
download_size: 458306888
dataset_size: 665616980
- config_name: mujoco-halfcheetah
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 1005264000
num_examples: 36000
- name: test
num_bytes: 111696000
num_examples: 4000
download_size: 1055030042
dataset_size: 1116960000
- config_name: mujoco-hopper
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 377714520
num_examples: 20190
- name: test
num_bytes: 41774964
num_examples: 2233
download_size: 343653363
dataset_size: 419489484
- config_name: mujoco-humanoid
features:
- name: continuous_observations
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 13565692988
num_examples: 33347
- name: test
num_bytes: 1509649644
num_examples: 3711
download_size: 10439047554
dataset_size: 15075342632
- config_name: mujoco-pendulum
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 201391764
num_examples: 21217
- name: test
num_bytes: 22334676
num_examples: 2353
download_size: 134650231
dataset_size: 223726440
- config_name: mujoco-pusher
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 315828000
num_examples: 9000
- name: test
num_bytes: 35092000
num_examples: 1000
download_size: 134738418
dataset_size: 350920000
- config_name: mujoco-reacher
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 159156000
num_examples: 9000
- name: test
num_bytes: 17684000
num_examples: 1000
download_size: 38441946
dataset_size: 176840000
- config_name: mujoco-standup
features:
- name: rewards
sequence: float32
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 14644944000
num_examples: 36000
- name: test
num_bytes: 1627216000
num_examples: 4000
download_size: 11711102671
dataset_size: 16272160000
- config_name: mujoco-swimmer
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 526032000
num_examples: 36000
- name: test
num_bytes: 58448000
num_examples: 4000
download_size: 519559720
dataset_size: 584480000
- config_name: mujoco-walker
features:
- name: continuous_observations
sequence:
sequence: float32
- name: continuous_actions
sequence:
sequence: float32
- name: rewards
sequence: float32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 944529300
num_examples: 33825
- name: test
num_bytes: 104798772
num_examples: 3753
download_size: 954326371
dataset_size: 1049328072
- config_name: ok-vqa
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: pixel_values
sequence:
sequence:
sequence: float32
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 5474517048
num_examples: 9009
- name: test
num_bytes: 3066312912
num_examples: 5046
download_size: 2461083826
dataset_size: 8540829960
- config_name: oscar
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 58269773100
num_examples: 12612505
- name: test
num_bytes: 63899220
num_examples: 13831
download_size: 10788173669
dataset_size: 58333672320
- config_name: wikipedia
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: loss_weight
sequence: float32
splits:
- name: train
num_bytes: 59293939320
num_examples: 12834186
- name: test
num_bytes: 58216620
num_examples: 12601
download_size: 10100547139
dataset_size: 59352155940
configs:
- config_name: atari-alien
data_files:
- split: train
path: atari-alien/train-*
- split: test
path: atari-alien/test-*
- config_name: atari-amidar
data_files:
- split: train
path: atari-amidar/train-*
- split: test
path: atari-amidar/test-*
- config_name: atari-assault
data_files:
- split: train
path: atari-assault/train-*
- split: test
path: atari-assault/test-*
- config_name: atari-asterix
data_files:
- split: train
path: atari-asterix/train-*
- split: test
path: atari-asterix/test-*
- config_name: atari-asteroids
data_files:
- split: train
path: atari-asteroids/train-*
- split: test
path: atari-asteroids/test-*
- config_name: atari-atlantis
data_files:
- split: train
path: atari-atlantis/train-*
- split: test
path: atari-atlantis/test-*
- config_name: atari-bankheist
data_files:
- split: train
path: atari-bankheist/train-*
- split: test
path: atari-bankheist/test-*
- config_name: atari-battlezone
data_files:
- split: train
path: atari-battlezone/train-*
- split: test
path: atari-battlezone/test-*
- config_name: atari-beamrider
data_files:
- split: train
path: atari-beamrider/train-*
- split: test
path: atari-beamrider/test-*
- config_name: atari-berzerk
data_files:
- split: train
path: atari-berzerk/train-*
- split: test
path: atari-berzerk/test-*
- config_name: atari-bowling
data_files:
- split: train
path: atari-bowling/train-*
- split: test
path: atari-bowling/test-*
- config_name: atari-boxing
data_files:
- split: train
path: atari-boxing/train-*
- split: test
path: atari-boxing/test-*
- config_name: atari-breakout
data_files:
- split: train
path: atari-breakout/train-*
- split: test
path: atari-breakout/test-*
- config_name: atari-centipede
data_files:
- split: train
path: atari-centipede/train-*
- split: test
path: atari-centipede/test-*
- config_name: atari-choppercommand
data_files:
- split: train
path: atari-choppercommand/train-*
- split: test
path: atari-choppercommand/test-*
- config_name: atari-crazyclimber
data_files:
- split: train
path: atari-crazyclimber/train-*
- split: test
path: atari-crazyclimber/test-*
- config_name: atari-defender
data_files:
- split: train
path: atari-defender/train-*
- split: test
path: atari-defender/test-*
- config_name: atari-demonattack
data_files:
- split: train
path: atari-demonattack/train-*
- split: test
path: atari-demonattack/test-*
- config_name: atari-doubledunk
data_files:
- split: test
path: atari-doubledunk/test-*
- split: train
path: atari-doubledunk/train-*
- config_name: atari-enduro
data_files:
- split: train
path: atari-enduro/train-*
- split: test
path: atari-enduro/test-*
- config_name: atari-fishingderby
data_files:
- split: train
path: atari-fishingderby/train-*
- split: test
path: atari-fishingderby/test-*
- config_name: atari-freeway
data_files:
- split: train
path: atari-freeway/train-*
- split: test
path: atari-freeway/test-*
- config_name: atari-frostbite
data_files:
- split: train
path: atari-frostbite/train-*
- split: test
path: atari-frostbite/test-*
- config_name: atari-gopher
data_files:
- split: train
path: atari-gopher/train-*
- split: test
path: atari-gopher/test-*
- config_name: atari-gravitar
data_files:
- split: train
path: atari-gravitar/train-*
- split: test
path: atari-gravitar/test-*
- config_name: atari-hero
data_files:
- split: train
path: atari-hero/train-*
- split: test
path: atari-hero/test-*
- config_name: atari-icehockey
data_files:
- split: train
path: atari-icehockey/train-*
- split: test
path: atari-icehockey/test-*
- config_name: atari-jamesbond
data_files:
- split: train
path: atari-jamesbond/train-*
- split: test
path: atari-jamesbond/test-*
- config_name: atari-kangaroo
data_files:
- split: train
path: atari-kangaroo/train-*
- split: test
path: atari-kangaroo/test-*
- config_name: atari-krull
data_files:
- split: train
path: atari-krull/train-*
- split: test
path: atari-krull/test-*
- config_name: atari-kungfumaster
data_files:
- split: train
path: atari-kungfumaster/train-*
- split: test
path: atari-kungfumaster/test-*
- config_name: atari-montezumarevenge
data_files:
- split: train
path: atari-montezumarevenge/train-*
- split: test
path: atari-montezumarevenge/test-*
- config_name: atari-mspacman
data_files:
- split: train
path: atari-mspacman/train-*
- split: test
path: atari-mspacman/test-*
- config_name: atari-namethisgame
data_files:
- split: train
path: atari-namethisgame/train-*
- split: test
path: atari-namethisgame/test-*
- config_name: atari-phoenix
data_files:
- split: train
path: atari-phoenix/train-*
- split: test
path: atari-phoenix/test-*
- config_name: atari-pitfall
data_files:
- split: train
path: atari-pitfall/train-*
- split: test
path: atari-pitfall/test-*
- config_name: atari-pong
data_files:
- split: test
path: atari-pong/test-*
- split: train
path: atari-pong/train-*
- config_name: atari-privateeye
data_files:
- split: test
path: atari-privateeye/test-*
- split: train
path: atari-privateeye/train-*
- config_name: atari-qbert
data_files:
- split: test
path: atari-qbert/test-*
- split: train
path: atari-qbert/train-*
- config_name: atari-riverraid
data_files:
- split: test
path: atari-riverraid/test-*
- split: train
path: atari-riverraid/train-*
- config_name: atari-roadrunner
data_files:
- split: test
path: atari-roadrunner/test-*
- split: train
path: atari-roadrunner/train-*
- config_name: atari-robotank
data_files:
- split: test
path: atari-robotank/test-*
- split: train
path: atari-robotank/train-*
- config_name: atari-seaquest
data_files:
- split: test
path: atari-seaquest/test-*
- split: train
path: atari-seaquest/train-*
- config_name: atari-skiing
data_files:
- split: train
path: atari-skiing/train-*
- split: test
path: atari-skiing/test-*
- config_name: atari-solaris
data_files:
- split: train
path: atari-solaris/train-*
- split: test
path: atari-solaris/test-*
- config_name: atari-spaceinvaders
data_files:
- split: train
path: atari-spaceinvaders/train-*
- split: test
path: atari-spaceinvaders/test-*
- config_name: atari-stargunner
data_files:
- split: train
path: atari-stargunner/train-*
- split: test
path: atari-stargunner/test-*
- config_name: atari-surround
data_files:
- split: train
path: atari-surround/train-*
- split: test
path: atari-surround/test-*
- config_name: atari-tennis
data_files:
- split: train
path: atari-tennis/train-*
- split: test
path: atari-tennis/test-*
- config_name: atari-timepilot
data_files:
- split: train
path: atari-timepilot/train-*
- split: test
path: atari-timepilot/test-*
- config_name: atari-tutankham
data_files:
- split: train
path: atari-tutankham/train-*
- split: test
path: atari-tutankham/test-*
- config_name: atari-upndown
data_files:
- split: train
path: atari-upndown/train-*
- split: test
path: atari-upndown/test-*
- config_name: atari-venture
data_files:
- split: test
path: atari-venture/test-*
- split: train
path: atari-venture/train-*
- config_name: atari-videopinball
data_files:
- split: test
path: atari-videopinball/test-*
- split: train
path: atari-videopinball/train-*
- config_name: atari-wizardofwor
data_files:
- split: test
path: atari-wizardofwor/test-*
- split: train
path: atari-wizardofwor/train-*
- config_name: atari-yarsrevenge
data_files:
- split: test
path: atari-yarsrevenge/test-*
- split: train
path: atari-yarsrevenge/train-*
- config_name: atari-zaxxon
data_files:
- split: test
path: atari-zaxxon/test-*
- split: train
path: atari-zaxxon/train-*
- config_name: babyai-action-obj-door
data_files:
- split: train
path: babyai-action-obj-door/train-*
- split: test
path: babyai-action-obj-door/test-*
- config_name: babyai-blocked-unlock-pickup
data_files:
- split: test
path: babyai-blocked-unlock-pickup/test-*
- split: train
path: babyai-blocked-unlock-pickup/train-*
- config_name: babyai-boss-level
data_files:
- split: test
path: babyai-boss-level/test-*
- split: train
path: babyai-boss-level/train-*
- config_name: babyai-boss-level-no-unlock
data_files:
- split: test
path: babyai-boss-level-no-unlock/test-*
- split: train
path: babyai-boss-level-no-unlock/train-*
- config_name: babyai-find-obj-s5
data_files:
- split: train
path: babyai-find-obj-s5/train-*
- split: test
path: babyai-find-obj-s5/test-*
- config_name: babyai-go-to
data_files:
- split: train
path: babyai-go-to/train-*
- split: test
path: babyai-go-to/test-*
- config_name: babyai-go-to-door
data_files:
- split: train
path: babyai-go-to-door/train-*
- split: test
path: babyai-go-to-door/test-*
- config_name: babyai-go-to-imp-unlock
data_files:
- split: train
path: babyai-go-to-imp-unlock/train-*
- split: test
path: babyai-go-to-imp-unlock/test-*
- config_name: babyai-go-to-local
data_files:
- split: train
path: babyai-go-to-local/train-*
- split: test
path: babyai-go-to-local/test-*
- config_name: babyai-go-to-obj
data_files:
- split: train
path: babyai-go-to-obj/train-*
- split: test
path: babyai-go-to-obj/test-*
- config_name: babyai-go-to-obj-door
data_files:
- split: train
path: babyai-go-to-obj-door/train-*
- split: test
path: babyai-go-to-obj-door/test-*
- config_name: babyai-go-to-red-ball
data_files:
- split: train
path: babyai-go-to-red-ball/train-*
- split: test
path: babyai-go-to-red-ball/test-*
- config_name: babyai-go-to-red-ball-grey
data_files:
- split: train
path: babyai-go-to-red-ball-grey/train-*
- split: test
path: babyai-go-to-red-ball-grey/test-*
- config_name: babyai-go-to-red-ball-no-dists
data_files:
- split: train
path: babyai-go-to-red-ball-no-dists/train-*
- split: test
path: babyai-go-to-red-ball-no-dists/test-*
- config_name: babyai-go-to-red-blue-ball
data_files:
- split: train
path: babyai-go-to-red-blue-ball/train-*
- split: test
path: babyai-go-to-red-blue-ball/test-*
- config_name: babyai-go-to-seq
data_files:
- split: train
path: babyai-go-to-seq/train-*
- split: test
path: babyai-go-to-seq/test-*
- config_name: babyai-key-corridor
data_files:
- split: test
path: babyai-key-corridor/test-*
- split: train
path: babyai-key-corridor/train-*
- config_name: babyai-mini-boss-level
data_files:
- split: test
path: babyai-mini-boss-level/test-*
- split: train
path: babyai-mini-boss-level/train-*
- config_name: babyai-move-two-across-s8n9
data_files:
- split: test
path: babyai-move-two-across-s8n9/test-*
- split: train
path: babyai-move-two-across-s8n9/train-*
- config_name: babyai-one-room-s8
data_files:
- split: test
path: babyai-one-room-s8/test-*
- split: train
path: babyai-one-room-s8/train-*
- config_name: babyai-open
data_files:
- split: test
path: babyai-open/test-*
- split: train
path: babyai-open/train-*
- config_name: babyai-open-door
data_files:
- split: test
path: babyai-open-door/test-*
- split: train
path: babyai-open-door/train-*
- config_name: babyai-open-doors-order-n4
data_files:
- split: test
path: babyai-open-doors-order-n4/test-*
- split: train
path: babyai-open-doors-order-n4/train-*
- config_name: babyai-open-red-door
data_files:
- split: test
path: babyai-open-red-door/test-*
- split: train
path: babyai-open-red-door/train-*
- config_name: babyai-open-two-doors
data_files:
- split: test
path: babyai-open-two-doors/test-*
- split: train
path: babyai-open-two-doors/train-*
- config_name: babyai-pickup
data_files:
- split: test
path: babyai-pickup/test-*
- split: train
path: babyai-pickup/train-*
- config_name: babyai-pickup-above
data_files:
- split: test
path: babyai-pickup-above/test-*
- split: train
path: babyai-pickup-above/train-*
- config_name: babyai-pickup-dist
data_files:
- split: test
path: babyai-pickup-dist/test-*
- split: train
path: babyai-pickup-dist/train-*
- config_name: babyai-pickup-loc
data_files:
- split: test
path: babyai-pickup-loc/test-*
- split: train
path: babyai-pickup-loc/train-*
- config_name: babyai-put-next
data_files:
- split: train
path: babyai-put-next/train-*
- split: test
path: babyai-put-next/test-*
- config_name: babyai-put-next-local
data_files:
- split: train
path: babyai-put-next-local/train-*
- split: test
path: babyai-put-next-local/test-*
- config_name: babyai-synth
data_files:
- split: test
path: babyai-synth/test-*
- split: train
path: babyai-synth/train-*
- config_name: babyai-synth-loc
data_files:
- split: test
path: babyai-synth-loc/test-*
- split: train
path: babyai-synth-loc/train-*
- config_name: babyai-synth-seq
data_files:
- split: test
path: babyai-synth-seq/test-*
- split: train
path: babyai-synth-seq/train-*
- config_name: babyai-unblock-pickup
data_files:
- split: test
path: babyai-unblock-pickup/test-*
- split: train
path: babyai-unblock-pickup/train-*
- config_name: babyai-unlock
data_files:
- split: train
path: babyai-unlock/train-*
- split: test
path: babyai-unlock/test-*
- config_name: babyai-unlock-local
data_files:
- split: test
path: babyai-unlock-local/test-*
- split: train
path: babyai-unlock-local/train-*
- config_name: babyai-unlock-pickup
data_files:
- split: test
path: babyai-unlock-pickup/test-*
- split: train
path: babyai-unlock-pickup/train-*
- config_name: babyai-unlock-to-unlock
data_files:
- split: train
path: babyai-unlock-to-unlock/train-*
- split: test
path: babyai-unlock-to-unlock/test-*
- config_name: conceptual-captions
data_files:
- split: test
path: conceptual-captions/test-*
- split: train
path: conceptual-captions/train-*
- config_name: metaworld-assembly
data_files:
- split: train
path: metaworld-assembly/train-*
- split: test
path: metaworld-assembly/test-*
- config_name: metaworld-basketball
data_files:
- split: train
path: metaworld-basketball/train-*
- split: test
path: metaworld-basketball/test-*
- config_name: metaworld-bin-picking
data_files:
- split: train
path: metaworld-bin-picking/train-*
- split: test
path: metaworld-bin-picking/test-*
- config_name: metaworld-box-close
data_files:
- split: train
path: metaworld-box-close/train-*
- split: test
path: metaworld-box-close/test-*
- config_name: metaworld-button-press
data_files:
- split: train
path: metaworld-button-press/train-*
- split: test
path: metaworld-button-press/test-*
- config_name: metaworld-button-press-topdown
data_files:
- split: train
path: metaworld-button-press-topdown/train-*
- split: test
path: metaworld-button-press-topdown/test-*
- config_name: metaworld-button-press-topdown-wall
data_files:
- split: train
path: metaworld-button-press-topdown-wall/train-*
- split: test
path: metaworld-button-press-topdown-wall/test-*
- config_name: metaworld-button-press-wall
data_files:
- split: train
path: metaworld-button-press-wall/train-*
- split: test
path: metaworld-button-press-wall/test-*
- config_name: metaworld-coffee-button
data_files:
- split: train
path: metaworld-coffee-button/train-*
- split: test
path: metaworld-coffee-button/test-*
- config_name: metaworld-coffee-pull
data_files:
- split: train
path: metaworld-coffee-pull/train-*
- split: test
path: metaworld-coffee-pull/test-*
- config_name: metaworld-coffee-push
data_files:
- split: train
path: metaworld-coffee-push/train-*
- split: test
path: metaworld-coffee-push/test-*
- config_name: metaworld-dial-turn
data_files:
- split: train
path: metaworld-dial-turn/train-*
- split: test
path: metaworld-dial-turn/test-*
- config_name: metaworld-disassemble
data_files:
- split: train
path: metaworld-disassemble/train-*
- split: test
path: metaworld-disassemble/test-*
- config_name: metaworld-door-close
data_files:
- split: train
path: metaworld-door-close/train-*
- split: test
path: metaworld-door-close/test-*
- config_name: metaworld-door-lock
data_files:
- split: train
path: metaworld-door-lock/train-*
- split: test
path: metaworld-door-lock/test-*
- config_name: metaworld-door-open
data_files:
- split: train
path: metaworld-door-open/train-*
- split: test
path: metaworld-door-open/test-*
- config_name: metaworld-door-unlock
data_files:
- split: train
path: metaworld-door-unlock/train-*
- split: test
path: metaworld-door-unlock/test-*
- config_name: metaworld-drawer-close
data_files:
- split: train
path: metaworld-drawer-close/train-*
- split: test
path: metaworld-drawer-close/test-*
- config_name: metaworld-drawer-open
data_files:
- split: train
path: metaworld-drawer-open/train-*
- split: test
path: metaworld-drawer-open/test-*
- config_name: metaworld-faucet-close
data_files:
- split: train
path: metaworld-faucet-close/train-*
- split: test
path: metaworld-faucet-close/test-*
- config_name: metaworld-faucet-open
data_files:
- split: train
path: metaworld-faucet-open/train-*
- split: test
path: metaworld-faucet-open/test-*
- config_name: metaworld-hammer
data_files:
- split: train
path: metaworld-hammer/train-*
- split: test
path: metaworld-hammer/test-*
- config_name: metaworld-hand-insert
data_files:
- split: train
path: metaworld-hand-insert/train-*
- split: test
path: metaworld-hand-insert/test-*
- config_name: metaworld-handle-press
data_files:
- split: train
path: metaworld-handle-press/train-*
- split: test
path: metaworld-handle-press/test-*
- config_name: metaworld-handle-press-side
data_files:
- split: train
path: metaworld-handle-press-side/train-*
- split: test
path: metaworld-handle-press-side/test-*
- config_name: metaworld-handle-pull
data_files:
- split: train
path: metaworld-handle-pull/train-*
- split: test
path: metaworld-handle-pull/test-*
- config_name: metaworld-handle-pull-side
data_files:
- split: train
path: metaworld-handle-pull-side/train-*
- split: test
path: metaworld-handle-pull-side/test-*
- config_name: metaworld-lever-pull
data_files:
- split: train
path: metaworld-lever-pull/train-*
- split: test
path: metaworld-lever-pull/test-*
- config_name: metaworld-peg-insert-side
data_files:
- split: train
path: metaworld-peg-insert-side/train-*
- split: test
path: metaworld-peg-insert-side/test-*
- config_name: metaworld-peg-unplug-side
data_files:
- split: train
path: metaworld-peg-unplug-side/train-*
- split: test
path: metaworld-peg-unplug-side/test-*
- config_name: metaworld-pick-out-of-hole
data_files:
- split: train
path: metaworld-pick-out-of-hole/train-*
- split: test
path: metaworld-pick-out-of-hole/test-*
- config_name: metaworld-pick-place
data_files:
- split: train
path: metaworld-pick-place/train-*
- split: test
path: metaworld-pick-place/test-*
- config_name: metaworld-pick-place-wall
data_files:
- split: train
path: metaworld-pick-place-wall/train-*
- split: test
path: metaworld-pick-place-wall/test-*
- config_name: metaworld-plate-slide
data_files:
- split: train
path: metaworld-plate-slide/train-*
- split: test
path: metaworld-plate-slide/test-*
- config_name: metaworld-plate-slide-back
data_files:
- split: train
path: metaworld-plate-slide-back/train-*
- split: test
path: metaworld-plate-slide-back/test-*
- config_name: metaworld-plate-slide-back-side
data_files:
- split: train
path: metaworld-plate-slide-back-side/train-*
- split: test
path: metaworld-plate-slide-back-side/test-*
- config_name: metaworld-plate-slide-side
data_files:
- split: train
path: metaworld-plate-slide-side/train-*
- split: test
path: metaworld-plate-slide-side/test-*
- config_name: metaworld-push
data_files:
- split: train
path: metaworld-push/train-*
- split: test
path: metaworld-push/test-*
- config_name: metaworld-push-back
data_files:
- split: train
path: metaworld-push-back/train-*
- split: test
path: metaworld-push-back/test-*
- config_name: metaworld-push-wall
data_files:
- split: train
path: metaworld-push-wall/train-*
- split: test
path: metaworld-push-wall/test-*
- config_name: metaworld-reach
data_files:
- split: train
path: metaworld-reach/train-*
- split: test
path: metaworld-reach/test-*
- config_name: metaworld-reach-wall
data_files:
- split: train
path: metaworld-reach-wall/train-*
- split: test
path: metaworld-reach-wall/test-*
- config_name: metaworld-shelf-place
data_files:
- split: train
path: metaworld-shelf-place/train-*
- split: test
path: metaworld-shelf-place/test-*
- config_name: metaworld-soccer
data_files:
- split: train
path: metaworld-soccer/train-*
- split: test
path: metaworld-soccer/test-*
- config_name: metaworld-stick-pull
data_files:
- split: train
path: metaworld-stick-pull/train-*
- split: test
path: metaworld-stick-pull/test-*
- config_name: metaworld-stick-push
data_files:
- split: train
path: metaworld-stick-push/train-*
- split: test
path: metaworld-stick-push/test-*
- config_name: metaworld-sweep
data_files:
- split: train
path: metaworld-sweep/train-*
- split: test
path: metaworld-sweep/test-*
- config_name: metaworld-sweep-into
data_files:
- split: train
path: metaworld-sweep-into/train-*
- split: test
path: metaworld-sweep-into/test-*
- config_name: metaworld-window-close
data_files:
- split: train
path: metaworld-window-close/train-*
- split: test
path: metaworld-window-close/test-*
- config_name: metaworld-window-open
data_files:
- split: train
path: metaworld-window-open/train-*
- split: test
path: metaworld-window-open/test-*
- config_name: mujoco-ant
data_files:
- split: train
path: mujoco-ant/train-*
- split: test
path: mujoco-ant/test-*
- config_name: mujoco-doublependulum
data_files:
- split: train
path: mujoco-doublependulum/train-*
- split: test
path: mujoco-doublependulum/test-*
- config_name: mujoco-halfcheetah
data_files:
- split: train
path: mujoco-halfcheetah/train-*
- split: test
path: mujoco-halfcheetah/test-*
- config_name: mujoco-hopper
data_files:
- split: train
path: mujoco-hopper/train-*
- split: test
path: mujoco-hopper/test-*
- config_name: mujoco-humanoid
data_files:
- split: train
path: mujoco-humanoid/train-*
- split: test
path: mujoco-humanoid/test-*
- config_name: mujoco-pendulum
data_files:
- split: train
path: mujoco-pendulum/train-*
- split: test
path: mujoco-pendulum/test-*
- config_name: mujoco-pusher
data_files:
- split: train
path: mujoco-pusher/train-*
- split: test
path: mujoco-pusher/test-*
- config_name: mujoco-reacher
data_files:
- split: train
path: mujoco-reacher/train-*
- split: test
path: mujoco-reacher/test-*
- config_name: mujoco-standup
data_files:
- split: train
path: mujoco-standup/train-*
- split: test
path: mujoco-standup/test-*
- config_name: mujoco-swimmer
data_files:
- split: train
path: mujoco-swimmer/train-*
- split: test
path: mujoco-swimmer/test-*
- config_name: mujoco-walker
data_files:
- split: train
path: mujoco-walker/train-*
- split: test
path: mujoco-walker/test-*
- config_name: ok-vqa
data_files:
- split: train
path: ok-vqa/train-*
- split: test
path: ok-vqa/test-*
- config_name: oscar
data_files:
- split: train
path: oscar/train-*
- split: test
path: oscar/test-*
- config_name: wikipedia
data_files:
- split: train
path: wikipedia/train-*
- split: test
path: wikipedia/test-*
---
# Dataset Card for "jat-dataset-tokenized"
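The metadata above defines one config per source task (Atari, BabyAI, Meta-World, and MuJoCo episodes, plus `ok-vqa`, `conceptual-captions`, `oscar`, and `wikipedia` for vision-language and text), each with `train` and `test` splits. As a minimal usage sketch (the `jat-project/jat-dataset-tokenized` repo id is an assumption based on the card title; the field shapes are read off the feature declarations above):
```
from datasets import load_dataset

# Hypothetical repo id; this card only states the dataset name "jat-dataset-tokenized".
ds = load_dataset("jat-project/jat-dataset-tokenized", "mujoco-ant", split="test")

episode = ds[0]
# Continuous-control configs store one episode per row:
#   continuous_observations / continuous_actions: [timestep][dim] float32
#   rewards, loss_weight: [timestep] float32
#   attention_mask: [timestep] int8 padding mask
print(len(episode["rewards"]), len(episode["continuous_observations"][0]))
```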
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
UCLNLP/adversarial_qa | UCLNLP | 2023-12-21T14:20:00Z | 490 | 39 | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2002.00293",
"arxiv:1606.05250",
"region:us"
] | [
"question-answering"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
- open-domain-qa
paperswithcode_id: adversarialqa
pretty_name: adversarialQA
dataset_info:
- config_name: adversarialQA
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: metadata
struct:
- name: split
dtype: string
- name: model_in_the_loop
dtype: string
splits:
- name: train
num_bytes: 27858686
num_examples: 30000
- name: validation
num_bytes: 2757092
num_examples: 3000
- name: test
num_bytes: 2919479
num_examples: 3000
download_size: 5301049
dataset_size: 33535257
- config_name: dbert
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: metadata
struct:
- name: split
dtype: string
- name: model_in_the_loop
dtype: string
splits:
- name: train
num_bytes: 9345521
num_examples: 10000
- name: validation
num_bytes: 918156
num_examples: 1000
- name: test
num_bytes: 971290
num_examples: 1000
download_size: 2689032
dataset_size: 11234967
- config_name: dbidaf
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: metadata
struct:
- name: split
dtype: string
- name: model_in_the_loop
dtype: string
splits:
- name: train
num_bytes: 9282482
num_examples: 10000
- name: validation
num_bytes: 917907
num_examples: 1000
- name: test
num_bytes: 946947
num_examples: 1000
download_size: 2721341
dataset_size: 11147336
- config_name: droberta
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: metadata
struct:
- name: split
dtype: string
- name: model_in_the_loop
dtype: string
splits:
- name: train
num_bytes: 9270683
num_examples: 10000
- name: validation
num_bytes: 925029
num_examples: 1000
- name: test
num_bytes: 1005242
num_examples: 1000
download_size: 2815452
dataset_size: 11200954
configs:
- config_name: adversarialQA
data_files:
- split: train
path: adversarialQA/train-*
- split: validation
path: adversarialQA/validation-*
- split: test
path: adversarialQA/test-*
- config_name: dbert
data_files:
- split: train
path: dbert/train-*
- split: validation
path: dbert/validation-*
- split: test
path: dbert/test-*
- config_name: dbidaf
data_files:
- split: train
path: dbidaf/train-*
- split: validation
path: dbidaf/validation-*
- split: test
path: dbidaf/test-*
- config_name: droberta
data_files:
- split: train
path: droberta/train-*
- split: validation
path: droberta/validation-*
- split: test
path: droberta/test-*
train-eval-index:
- config: adversarialQA
task: question-answering
task_id: extractive_question_answering
splits:
train_split: train
eval_split: validation
col_mapping:
question: question
context: context
answers:
text: text
answer_start: answer_start
metrics:
- type: squad
name: SQuAD
---
# Dataset Card for adversarialQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [adversarialQA homepage](https://adversarialqa.github.io/)
- **Repository:** [adversarialQA repository](https://github.com/maxbartolo/adversarialQA)
- **Paper:** [Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension](https://arxiv.org/abs/2002.00293)
- **Leaderboard:** [Dynabench QA Round 1 Leaderboard](https://dynabench.org/tasks/2#overall)
- **Point of Contact:** [Max Bartolo]([email protected])
### Dataset Summary
We have created three new Reading Comprehension datasets constructed using an adversarial model-in-the-loop.
We use three different models in the annotation loop: BiDAF (Seo et al., 2016), BERT-Large (Devlin et al., 2018), and RoBERTa-Large (Liu et al., 2019). The resulting datasets, D(BiDAF), D(BERT), and D(RoBERTa), each contain 10,000 training examples, 1,000 validation examples, and 1,000 test examples.
The adversarial human annotation paradigm ensures that these datasets consist of questions that current state-of-the-art models (at least the ones used as adversaries in the annotation loop) find challenging. The three AdversarialQA round 1 datasets provide a training and evaluation resource for such methods.
### Supported Tasks and Leaderboards
`extractive-qa`: The dataset can be used to train a model for Extractive Question Answering, which consists of selecting the answer to a question from a passage. Success on this task is typically measured by achieving a high word-overlap [F1 score](https://huggingface.co/metrics/f1). The [RoBERTa-Large](https://huggingface.co/roberta-large) model trained on all the data combined with [SQuAD](https://arxiv.org/abs/1606.05250) currently achieves 64.35% F1. The task has an active leaderboard, available as round 1 of the QA task on [Dynabench](https://dynabench.org/tasks/2#overall), which ranks models by F1 score.
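For reference, below is a minimal sketch of this word-overlap F1 (it omits the punctuation stripping and article removal that the official SQuAD evaluation script additionally applies):
```
from collections import Counter

def f1_score(prediction: str, ground_truth: str) -> float:
    # Token-level overlap between a predicted and a gold answer span.
    pred_tokens = prediction.lower().split()
    gold_tokens = ground_truth.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```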
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
Data is provided in the same format as SQuAD 1.1. An example is shown below:
```
{
"data": [
{
"title": "Oxygen",
"paragraphs": [
{
"context": "Among the most important classes of organic compounds that contain oxygen are (where \"R\" is an organic group): alcohols (R-OH); ethers (R-O-R); ketones (R-CO-R); aldehydes (R-CO-H); carboxylic acids (R-COOH); esters (R-COO-R); acid anhydrides (R-CO-O-CO-R); and amides (R-C(O)-NR2). There are many important organic solvents that contain oxygen, including: acetone, methanol, ethanol, isopropanol, furan, THF, diethyl ether, dioxane, ethyl acetate, DMF, DMSO, acetic acid, and formic acid. Acetone ((CH3)2CO) and phenol (C6H5OH) are used as feeder materials in the synthesis of many different substances. Other important organic compounds that contain oxygen are: glycerol, formaldehyde, glutaraldehyde, citric acid, acetic anhydride, and acetamide. Epoxides are ethers in which the oxygen atom is part of a ring of three atoms.",
"qas": [
{
"id": "22bbe104aa72aa9b511dd53237deb11afa14d6e3",
"question": "In addition to having oxygen, what do alcohols, ethers and esters have in common, according to the article?",
"answers": [
{
"answer_start": 36,
"text": "organic compounds"
}
]
},
{
"id": "4240a8e708c703796347a3702cf1463eed05584a",
"question": "What letter does the abbreviation for acid anhydrides both begin and end in?",
"answers": [
{
"answer_start": 244,
"text": "R"
}
]
},
{
"id": "0681a0a5ec852ec6920d6a30f7ef65dced493366",
"question": "Which of the organic compounds, in the article, contains nitrogen?",
"answers": [
{
"answer_start": 262,
"text": "amides"
}
]
},
{
"id": "2990efe1a56ccf81938fa5e18104f7d3803069fb",
"question": "Which of the important classes of organic compounds, in the article, has a number in its abbreviation?",
"answers": [
{
"answer_start": 262,
"text": "amides"
}
]
}
]
}
]
}
]
}
```
### Data Fields
- title: the title of the Wikipedia page from which the context is sourced
- context: the context/passage
- id: a string identifier for each question
- answers: a list of all provided answers (one per question in our case, but multiple may exist in SQuAD) with an `answer_start` field which is the character index of the start of the answer span, and a `text` field which is the answer text.
Note that no answers are provided in the test set. Indeed, this dataset is part of the Dynabench benchmark, for which you can submit your predictions on the [website](https://dynabench.org/tasks/2#1).
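As a quick usage example, the dataset can be loaded through the `datasets` library; the config and field names below come directly from the metadata above:
```
from datasets import load_dataset

# "adversarialQA" is the combined config; "dbidaf", "dbert", and "droberta"
# select the individual model-in-the-loop subsets.
ds = load_dataset("UCLNLP/adversarial_qa", "adversarialQA", split="validation")

example = ds[0]
print(example["title"], example["question"])
# `answers` is a sequence feature: parallel lists of answer texts and start offsets.
print(example["answers"]["text"], example["answers"]["answer_start"])
```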
### Data Splits
The dataset is composed of three different datasets constructed using different models in the loop: BiDAF, BERT-Large, and RoBERTa-Large. Each of these has 10,000 training examples, 1,000 validation examples, and 1,000 test examples for a total of 30,000/3,000/3,000 train/validation/test examples.
## Dataset Creation
### Curation Rationale
This dataset was collected to provide a more challenging and diverse Reading Comprehension benchmark for state-of-the-art models.
### Source Data
#### Initial Data Collection and Normalization
The source passages are from Wikipedia and are the same as those used in [SQuAD v1.1](https://arxiv.org/abs/1606.05250).
#### Who are the source language producers?
The source language producers are Wikipedia editors for the passages, and human annotators on Mechanical Turk for the questions.
### Annotations
#### Annotation process
The dataset is collected through an adversarial human annotation process which pairs a human annotator and a reading comprehension model in an interactive setting. The human is presented with a passage for which they write a question and highlight the correct answer. The model then tries to answer the question, and, if it fails to answer correctly, the human wins. Otherwise, the human modifies or re-writes their question until they successfully fool the model.
#### Who are the annotators?
The annotators are from Amazon Mechanical Turk, geographically restricted to the USA, UK, and Canada, having previously completed at least 1,000 HITs successfully, and having a HIT approval rate greater than 98%. Crowdworkers undergo intensive training and qualification prior to annotation.
### Personal and Sensitive Information
No annotator identifying details are provided.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop better question answering systems.
A system that succeeds at the supported task would be able to provide an accurate extractive answer from a short passage. This dataset is to be seen as a test bed for questions which contemporary state-of-the-art models struggle to answer correctly, thus often requiring more complex comprehension abilities than, say, detecting phrases explicitly mentioned in the passage with high overlap to the question.
It should be noted, however, that the source passages are both domain-restricted and linguistically specific, and that the provided questions and answers do not constitute any particular social application.
### Discussion of Biases
The dataset may exhibit various biases in terms of the source passage selection, annotated questions and answers, as well as algorithmic biases resulting from the adversarial annotation protocol.
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
This dataset was initially created by Max Bartolo, Alastair Roberts, Johannes Welbl, Sebastian Riedel, and Pontus Stenetorp, during work carried out at University College London (UCL).
### Licensing Information
This dataset is distributed under [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
### Citation Information
```
@article{bartolo2020beat,
author = {Bartolo, Max and Roberts, Alastair and Welbl, Johannes and Riedel, Sebastian and Stenetorp, Pontus},
title = {Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension},
journal = {Transactions of the Association for Computational Linguistics},
volume = {8},
number = {},
pages = {662-678},
year = {2020},
doi = {10.1162/tacl\_a\_00338},
URL = { https://doi.org/10.1162/tacl_a_00338 },
eprint = { https://doi.org/10.1162/tacl_a_00338 },
abstract = { Innovations in annotation methodology have been a catalyst for Reading Comprehension (RC) datasets and models. One recent trend to challenge current RC models is to involve a model in the annotation process: Humans create questions adversarially, such that the model fails to answer them correctly. In this work we investigate this annotation methodology and apply it in three different settings, collecting a total of 36,000 samples with progressively stronger models in the annotation loop. This allows us to explore questions such as the reproducibility of the adversarial effect, transfer from data collected with varying model-in-the-loop strengths, and generalization to data collected without a model. We find that training on adversarially collected samples leads to strong generalization to non-adversarially collected datasets, yet with progressive performance deterioration with increasingly stronger models-in-the-loop. Furthermore, we find that stronger models can still learn from datasets collected with substantially weaker models-in-the-loop. When trained on data collected with a BiDAF model in the loop, RoBERTa achieves 39.9F1 on questions that it cannot answer when trained on SQuAD—only marginally lower than when trained on data collected using RoBERTa itself (41.0F1). }
}
```
### Contributions
Thanks to [@maxbartolo](https://github.com/maxbartolo) for adding this dataset. |