datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card
---|---|---|---|---|---|---|---|---|---|
meta-math/MetaMathQA | meta-math | 2023-12-21T01:35:53Z | 8,037 | 382 | [
"license:mit",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2309.12284",
"region:us",
"math",
"math-qa"
] | [] | 2023-09-21T17:22:46Z | null | ---
tags:
- math
- math-qa
license: mit
---
View the project page: https://meta-math.github.io/
See our paper: https://arxiv.org/abs/2309.12284
## Note
All MetaMathQA data are augmented from the training sets of GSM8K and MATH.
<span style="color:red"><b>None of the augmented data is from the testing set.</b></span>
You can check the `original_question` field in `meta-math/MetaMathQA`; each item comes from the GSM8K or MATH train set.
## Model Details
MetaMath-Mistral-7B is fully fine-tuned on the MetaMathQA dataset and is based on the powerful Mistral-7B model. We are glad to see that using the MetaMathQA dataset and switching the base model from LLaMA-2-7B to Mistral-7B boosts GSM8K performance from 66.5 to **77.7**.
To fine-tune Mistral-7B, we suggest using a smaller learning rate (usually 1/5 to 1/10 of the learning rate used for LLaMA-2-7B) and keeping the other training arguments unchanged.
More training details and scripts can be seen at [https://github.com/meta-math/MetaMath](https://github.com/meta-math/MetaMath).
## Installation
```
pip install transformers==4.35.0
pip install torch==2.0.1
pip install sentencepiece==0.1.99
pip install tokenizers==0.13.3
pip install accelerate==0.21.0
pip install bitsandbytes==0.40.0
pip install vllm
pip install fraction
pip install protobuf
```
## Model Usage
prompting template:
```
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
```
Replace `{instruction}` with your query question.
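For example, here is a minimal generation sketch with `transformers` (assuming the fine-tuned checkpoint is available as `meta-math/MetaMath-Mistral-7B` on the Hub; substitute your own model path if it differs):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

PROMPT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
)

# Assumed checkpoint name; replace with the model you actually use.
model_name = "meta-math/MetaMath-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

question = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether?"
inputs = tokenizer(PROMPT.format(instruction=question), return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```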
There is another interesting repo about Arithmo-Mistral-7B at [https://huggingface.co/akjindal53244/Arithmo-Mistral-7B](https://huggingface.co/akjindal53244/Arithmo-Mistral-7B), where they combine our MetaMathQA dataset with the MathInstruct dataset to train a powerful model. Thanks again for their contributions.
We will also try to train on the combination of the **MetaMathQA** and **MathInstruct** datasets, and we will open-source all the results and training details.
## Experiments
| Model | GSM8k Pass@1 | MATH Pass@1 |
|---------------------|--------------|-------------|
| MPT-7B | 6.8 | 3.0 |
| Falcon-7B | 6.8 | 2.3 |
| LLaMA-1-7B | 11.0 | 2.9 |
| LLaMA-2-7B | 14.6 | 2.5 |
| MPT-30B | 15.2 | 3.1 |
| LLaMA-1-13B | 17.8 | 3.9 |
| GPT-Neo-2.7B | 19.5 | -- |
| Falcon-40B | 19.6 | 2.5 |
| Baichuan-chat-13B | 23.9 | -- |
| Vicuna-v1.3-13B | 27.6 | -- |
| LLaMA-2-13B | 28.7 | 3.9 |
| InternLM-7B | 31.2 | -- |
| ChatGLM-2-6B | 32.4 | -- |
| GPT-J-6B | 34.9 | -- |
| LLaMA-1-33B | 35.6 | 3.9 |
| LLaMA-2-34B | 42.2 | 6.24 |
| RFT-7B | 50.3 | -- |
| LLaMA-1-65B | 50.9 | 10.6 |
| Qwen-7B | 51.6 | -- |
| WizardMath-7B | 54.9 | 10.7 |
| LLaMA-2-70B | 56.8 | 13.5 |
| WizardMath-13B | 63.9 | 14.0 |
| MAmmoTH-7B (COT) | 50.5 | 10.4 |
| MAmmoTH-7B (POT+COT)| 53.6 | 31.5 |
| Arithmo-Mistral-7B | 74.7 | 25.3 |
| MetaMath-7B | 66.5 | 19.8 |
| MetaMath-13B | 72.3 | 22.4 |
| 🔥 **MetaMath-Mistral-7B** | **77.7** | **28.2** |
We encourage anyone to use our MetaMathQA datasets. We are very happy to see the following models trained with MetaMathQA achieve very promising performance!
- [OpenChat-3.5](https://huggingface.co/openchat/openchat_3.5)
- [CausalLM](https://huggingface.co/CausalLM/14B)
- [zephyr](https://huggingface.co/qblocks/zephyr-7b-alpha_metamathqa)
- [Ziya2](https://huggingface.co/IDEA-CCNL/Ziya2-13B-Base)
# Citation
```bibtex
@article{yu2023metamath,
title={MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models},
author={Yu, Longhui and Jiang, Weisen and Shi, Han and Yu, Jincheng and Liu, Zhengying and Zhang, Yu and Kwok, James T and Li, Zhenguo and Weller, Adrian and Liu, Weiyang},
journal={arXiv preprint arXiv:2309.12284},
year={2023}
}
``` |
llm-book/livedoor-news-corpus | llm-book | 2023-12-12T02:19:43Z | 209 | 4 | [
"task_categories:summarization",
"language:ja",
"size_categories:1K<n<10K",
"region:us",
"news"
] | [
"summarization"
] | 2023-06-21T07:16:52Z | 1 | ---
task_categories:
- summarization
language:
- ja
tags:
- news
pretty_name: livedoor-news-corpus
size_categories:
- 1K<n<10K
---
# Dataset Card for llm-book/livedoor-news-corpus
This dataset is based on the "livedoor News Corpus" provided by RONDHUIT Co., Ltd. and is used in the book 『大規模言語モデル入門』 (Introduction to Large Language Models).
It uses the same data as the [original site](https://www.rondhuit.com/download.html).
This corpus was created by collecting news articles from "livedoor News", operated by NHN Japan Corporation, that are covered by the Creative Commons license listed below, and removing HTML tags as much as possible.
### Licence
Attribution-NoDerivs 2.1 Japan (CC BY-ND 2.1 JP) License |
argilla/ultrafeedback-binarized-preferences-cleaned | argilla | 2023-12-11T14:22:19Z | 1,102 | 141 | [
"task_categories:text-generation",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"dpo",
"preference",
"ultrafeedback"
] | [
"text-generation"
] | 2023-12-05T11:07:34Z | null | ---
language:
- en
license: mit
size_categories:
- 10K<n<100K
task_categories:
- text-generation
pretty_name: UltraFeedback Binarized Preferences Cleaned
dataset_info:
features:
- name: source
dtype: string
- name: prompt
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen-rating
dtype: float64
- name: chosen-model
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected-rating
dtype: float64
- name: rejected-model
dtype: string
splits:
- name: train
num_bytes: 284937773
num_examples: 60917
download_size: 143257393
dataset_size: 284937773
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- dpo
- preference
- ultrafeedback
---
# UltraFeedback - Binarized using the Average of Preference Ratings (Cleaned)
This dataset represents a new iteration on top of [`argilla/ultrafeedback-binarized-preferences`](https://huggingface.co/argilla/ultrafeedback-binarized-preferences),
and is the **recommended and preferred dataset by Argilla to use from now on when fine-tuning on UltraFeedback**.
Read more about Argilla's approach towards UltraFeedback binarization at [`argilla/ultrafeedback-binarized-preferences/README.md`](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences/blob/main/README.md).
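For reference, the dataset can be loaded with the standard 🤗 `datasets` API; this minimal sketch just uses the field names listed in the `dataset_info` section above:

```python
from datasets import load_dataset

# Single "train" split containing prompt / chosen / rejected preference pairs.
ds = load_dataset("argilla/ultrafeedback-binarized-preferences-cleaned", split="train")

example = ds[0]
print(example["prompt"])
print(example["chosen"][-1]["content"][:200])    # last turn of the preferred conversation
print(example["rejected"][-1]["content"][:200])  # last turn of the rejected conversation
```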
## Differences with `argilla/ultrafeedback-binarized-preferences`
[AllenAI](https://huggingface.co/allenai) recently identified TruthfulQA contamination in the original UltraFeedback dataset, caused by some prompts being reused from the TruthfulQA dataset (used for benchmarking
in the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) from HuggingFace H4). We decided to follow AllenAI's advice and remove those prompts from the UltraFeedback dataset
that we binarized using a completely different approach, namely the average of the preference ratings rather than the overall critique score used by
[`HuggingFaceH4/ultrafeedback_binarized`](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
Besides that, we saw that not only the rows with `source=truthful_qa` were contaminated (for obvious reasons), but also some rows
coming from ShareGPT, so we removed those as well by doing a left join with both subsets of the [`truthful_qa`](https://huggingface.co/datasets/truthful_qa) dataset.
Additionally, we modified the formatting to align with both [`HuggingFaceH4/ultrafeedback_binarized`](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)
and [`allenai/ultrafeedback_binarized_cleaned`](https://huggingface.co/datasets/allenai/ultrafeedback_binarized_cleaned), so that the formatting is standardized and
integration with the [`huggingface/alignment-handbook`](https://github.com/huggingface/alignment-handbook) is easier.
## Reproduce
<a target="_blank" href="https://colab.research.google.com/drive/1XR9P1St4yTNY0tjti_tIjm-yzP5Bfqc0?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
To reproduce the data processing, which combines our approach with the formatting suggestions from HuggingFace H4 and the advice from AllenAI to remove the TruthfulQA contamination,
feel free to run the attached Colab Notebook or view it at [`notebook.ipynb`](./notebook.ipynb) within this repository.
At Argilla we encourage anyone to play around, investigate, and experiment with the data. We firmly believe in open-sourcing what we do, as both we and the whole community
benefit a lot from open source, and we want to give back.
## Citation
If you find this dataset useful in your work, please cite the original UltraFeedback dataset: https://huggingface.co/datasets/openbmb/UltraFeedback
Additionally, you may also want to cite our work with Notus 7B, which led to the curation of the UltraFeedback dataset:
```bibtex
@misc{notus2023,
author = {Alvaro Bartolome and Gabriel Martin and Daniel Vila},
title = {Notus},
year = {2023},
publisher = {GitHub},
journal = {GitHub Repository},
howpublished = {\url{https://github.com/argilla-io/notus}}
}
```
> Alphabetically ordered by last name due to equal contribution. |
ise-uiuc/Magicoder-OSS-Instruct-75K | ise-uiuc | 2023-12-04T10:35:04Z | 510 | 146 | [
"task_categories:text-generation",
"license:mit",
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation",
"conversational"
] | 2023-12-03T20:04:53Z | null | ---
license: mit
task_categories:
- text-generation
- conversational
size_categories:
- 10K<n<100K
---
This is the **OSS-Instruct** dataset generated by `gpt-3.5-turbo-1106` developed by OpenAI. Please pay attention to OpenAI's usage policy when adopting this dataset: https://openai.com/policies/usage-policies.
|
FremyCompany/AGCT-Dataset | FremyCompany | 2023-11-28T21:32:26Z | 71 | 16 | [
"language:en",
"license:other",
"size_categories:100K<n<1M",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us",
"biology",
"medical"
] | [] | 2023-06-01T13:37:33Z | 1 | ---
language:
- en
pretty_name: Automatic Glossary of Clinical Terminology (v2023)
license: other
tags:
- biology
- medical
size_categories:
- 100K<n<1M
---
# Automatic Glossary of Clinical Terminology (v2023)
This dataset contains 422,070 short, computer-generated definitions for SnomedCT concepts, covering various domains such as diseases, procedures, drugs, and anatomy. To generate them, we prompted the OpenAI Turbo model, a variant of GPT-3.5, using a high-quality verbalization of the SnomedCT relationships of the to-be-defined concept.

<div class="not-prose">
<img align="right" alt="figure-quality-graph-1.png" src="https://s3.amazonaws.com/moonup/production/uploads/5f04e8865d08220171a0ad3f/629gp8GJt_5STt-4fryMg.png" width="256" />
<!--<img align="right" alt="figure-quality-graph-2s.png" src="https://s3.amazonaws.com/moonup/production/uploads/5f04e8865d08220171a0ad3f/Ki4k8jt_YqDGgKA2sqkJy.png" width="160" />-->
</div>
## Quality Control
**IMPORTANT:** A quality control showed that the majority of the definitions are factual, insightful, and fluent. However, about 30% of the definitions generated by this procedure do not meet the high standards required for presentation to users, or for use by machine learning models in scenarios requiring reasoning, due to their imperfect quality. That said, more than 95% of the definitions appear useful for biomedical model pre-training. We therefore release this dataset for building retrieval-based systems, for evaluating large biomedical language models on the definition-generation task, and eventually for low-rank finetuning of existing language models.
<br clear="all" />
## License
The license for this work is subject to both [SnomedCT](https://www.nlm.nih.gov/healthit/snomedct/snomed_licensing.html) and [OpenAI API](https://openai.com/policies/terms-of-use) agreements. We strongly recommend checking those licenses before making use of this dataset.
## Citation
If you use this dataset, please cite the following work: [AGCT @ BioNLP 2023](https://aclanthology.org/2023.bionlp-1.23/)
```
@inproceedings{remy-etal-2023-automatic,
title = "Automatic Glossary of Clinical Terminology: a Large-Scale Dictionary of Biomedical Definitions Generated from Ontological Knowledge",
author = "Remy, Fran{\c{c}}ois and
Demuynck, Kris and
Demeester, Thomas",
editor = "Demner-fushman, Dina and
Ananiadou, Sophia and
Cohen, Kevin",
booktitle = "The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.bionlp-1.23",
doi = "10.18653/v1/2023.bionlp-1.23",
pages = "265--272",
abstract = "Background: More than 400.000 biomedical concepts and some of their relationships are contained in SnomedCT, a comprehensive biomedical ontology. However, their concept names are not always readily interpretable by non-experts, or patients looking at their own electronic health records (EHR). Clear definitions or descriptions in understandable language or often not available. Therefore, generating human-readable definitions for biomedical concepts might help make the information they encode more accessible and understandable to a wider public. Objective: In this article, we introduce the Automatic Glossary of Clinical Terminology (AGCT), a large-scale biomedical dictionary of clinical concepts generated using high-quality information extracted from the biomedical knowledge contained in SnomedCT.Methods: We generate a novel definition for every SnomedCT concept, after prompting the OpenAI Turbo model, a variant of GPT 3.5, using a high-quality verbalization of the SnomedCT relationships of the to-be-defined concept. A significant subset of the generated definitions was subsequently evaluated by NLP researchers with biomedical expertise on 5-point scales along the following three axes: factuality, insight, and fluency. Results: AGCT contains 422,070 computer-generated definitions for SnomedCT concepts, covering various domains such as diseases, procedures, drugs, and anatomy. The average length of the definitions is 49 words. The definitions were assigned average scores of over 4.5 out of 5 on all three axes, indicating a majority of factual, insightful, and fluent definitions. Conclusion: AGCT is a novel and valuable resource for biomedical tasks that require human-readable definitions for SnomedCT concepts. It can also serve as a base for developing robust biomedical retrieval models or other applications that leverage natural language understanding of biomedical knowledge.",
}
``` |
ckandemir/amazon-products | ckandemir | 2023-11-21T09:46:07Z | 176 | 10 | [
"task_categories:image-classification",
"task_categories:image-to-text",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-classification",
"image-to-text"
] | 2023-11-01T19:03:06Z | 2 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: eval
path: data/eval-*
dataset_info:
features:
- name: Product Name
dtype: string
- name: Category
dtype: string
- name: Description
dtype: string
- name: Selling Price
dtype: string
- name: Product Specification
dtype: string
- name: Image
dtype: string
splits:
- name: train
num_bytes: 12542887
num_examples: 23993
- name: test
num_bytes: 3499375
num_examples: 6665
- name: eval
num_bytes: 1376174
num_examples: 2666
download_size: 6391314
dataset_size: 17418436
license: apache-2.0
task_categories:
- image-classification
- image-to-text
language:
- en
size_categories:
- 10K<n<100K
---
## Dataset Creation and Processing Overview
This dataset underwent a comprehensive process of loading, cleaning, processing, and preparing, incorporating a range of data manipulation and NLP techniques to optimize its utility for machine learning models, particularly in natural language processing.
### Data Loading and Initial Cleaning
- **Source**: Loaded from the Hugging Face dataset repository [bprateek/amazon_product_description](https://huggingface.co/datasets/bprateek/amazon_product_description).
- **Conversion to Pandas DataFrame**: For ease of data manipulation.
- **Null Value Removal**: Rows with null values in the 'About Product' column were discarded.
### Data Cleaning and NLP Processing
- **Sentence Extraction**: 'About Product' descriptions were split into sentences, identifying common phrases.
- **Emoji and Special Character Removal**: A regex function removed these elements from the product descriptions.
- **Common Phrase Elimination**: A function was used to strip common phrases from each product description.
- **Improving Writing Standards**: Adjusted capitalization, punctuation, and replaced '&' with 'and' for better readability and formalization.
### Sentence Similarity Analysis
- **Model Application**: The pre-trained Sentence Transformer model 'all-MiniLM-L6-v2' was used.
- **Sentence Comparison**: Identified the most similar sentence to each product name within the cleaned product descriptions (a minimal sketch of this step follows below).
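The following is an illustrative sketch of this step rather than the exact processing script (the helper name and example strings are ours): it embeds the product name and each cleaned description sentence with `all-MiniLM-L6-v2` and keeps the most similar sentence.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def most_similar_sentence(product_name: str, sentences: list[str]) -> str:
    """Return the description sentence whose embedding is closest to the product name."""
    name_emb = model.encode(product_name, convert_to_tensor=True)
    sent_embs = model.encode(sentences, convert_to_tensor=True)
    scores = util.cos_sim(name_emb, sent_embs)[0]  # cosine similarity against each sentence
    return sentences[int(scores.argmax())]

# Hypothetical example row
print(most_similar_sentence(
    "Stainless Steel Water Bottle",
    ["Keeps drinks cold for 24 hours.", "Durable stainless steel construction.", "Ships in recyclable packaging."],
))
```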
### Dataset Refinement
- **Column Selection**: Retained relevant columns for final dataset.
- **Image URL Processing**: Split multiple image URLs into individual URLs, removing specific unwanted URLs.
### Image Validation
- **Image URL Validation**: Implemented a function to verify the validity of each image URL (a minimal version is sketched after this list).
- **Filtering Valid Images**: Retained only rows with valid image URLs.
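A minimal version of such a check, assuming a plain `requests` HEAD request (the original implementation may differ), could look like this:

```python
import requests

def is_valid_image_url(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL responds OK and advertises an image content type."""
    try:
        resp = requests.head(url, timeout=timeout, allow_redirects=True)
        return resp.ok and resp.headers.get("Content-Type", "").startswith("image/")
    except requests.RequestException:
        return False

# Keep only rows whose "Image" URL passes the check (df being the pandas DataFrame of products).
# df = df[df["Image"].apply(is_valid_image_url)]
```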
### Dataset Splitting for Machine Learning
- **Creation of Train, Test, and Eval Sets**: Used scikit-learn's `train_test_split` for dataset division.
For further details or to contribute to enhancing the dataset card, please refer to the [Hugging Face Dataset Card Contribution Guide](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards). |
InfImagine/FakeImageDataset | InfImagine | 2023-11-20T05:37:00Z | 453 | 23 | [
"license:apache-2.0",
"modality:image",
"arxiv:2304.13023",
"region:us"
] | [] | 2023-07-07T04:08:51Z | 1 | ---
license: apache-2.0
---
# Fake Image Dataset
Fake Image Dataset is now open-sourced at [huggingface (InfImagine Organization)](https://huggingface.co/datasets/InfImagine/FakeImageDataset/tree/main/ImageData/train) and [openxlab](https://openxlab.org.cn/datasets/whlzy/FakeImageDataset/tree/main). It consists of two folders, *ImageData* and *MetaData*. *ImageData* contains the compressed packages of the Fake Image Dataset, while *MetaData* contains the labeling information of the corresponding data indicating whether they are real or fake.
Sentry-Image is now open-sourced at the [Sentry-Image (GitHub repository)](https://github.com/Inf-imagine/Sentry), which provides the SOTA fake-image detection models in the [Sentry-Image Leaderboard](http://sentry.infimagine.com/), pretrained on the [Fake Image Dataset](https://huggingface.co/datasets/InfImagine/FakeImageDataset/tree/main/ImageData/train), to detect whether a given image is AI-generated or real.
## Why we need [Fake Image Dataset](https://huggingface.co/datasets/InfImagine/FakeImageDataset/tree/main/ImageData/train) and [Sentry-Image](http://sentry.infimagine.com/)?
* 🧐 A recent [study](https://arxiv.org/abs/2304.13023) has shown that humans struggle significantly to distinguish real photos from AI-generated ones, with a misclassification rate of **38.7%**.
* 🤗 To help people confirm whether the images they see are real or AI-generated, we launched the Sentry-Image project.
* 💻 Sentry-Image is an open-source project which provides the SOTA fake-image detection models in the [Sentry-Image Leaderboard](http://sentry.infimagine.com/) to detect whether a given image is AI-generated or real.
# Dataset card for Fake Image Dataset
## Dataset Description
* **Homepage:** [Sentry-Image](http://sentry.infimagine.com/)
* **Paper:** [https://arxiv.org/pdf/2304.13023.pdf](https://arxiv.org/pdf/2304.13023.pdf)
* **Point of Contact:** [[email protected]](mailto:[email protected])
## How to Download
You can use the following commands to download the dataset:
```shell
git lfs install
git clone https://huggingface.co/datasets/InfImagine/FakeImageDataset
```
You can use the following commands to extract the files in each subfolder (take the *IF-CC95K* subfolder in ImageData/val/IF-CC95K as an example):
```shell
cat IF-CC95K.tar.gz.* > IF-CC95K.tar.gz
tar -xvf IF-CC95K.tar.gz
```
## Dataset Summary
FakeImageDataset was created to serve as a large-scale dataset for pretraining fake-image detectors.
It was built on Stable Diffusion v1.5, IF, and StyleGAN3.
## Supported Tasks and Leaderboards
FakeImageDataset is intended to be used primarily as a pretraining dataset for detecting fake images.
## Sub Dataset
### Training Dataset (Fake2M)
| Dataset | SD-V1.5Real-dpms-25 | IF-V1.0-dpms++-25 | StyleGAN3 |
| :----------- | :-----------: | :-----------: | :-----------: |
| Generator | Diffusion | Diffusion | GAN |
| Numbers | 1M | 1M | 87K |
| Resolution | 512 | 256 | (>=512) |
| Caption | CC3M-Train | CC3M-Train | - |
| ImageData Path | ImageData/train/SDv15R-CC1M | ImageData/train/IFv1-CC1M | ImageData/train/stylegan3-80K |
| MetaData Path | MetaData/train/SDv15R-CC1M.csv | MetaData/train/IF-CC1M.csv | MetaData/train/stylegan3-80K.csv |
### Validation Dataset (MPBench)
| Dataset | SDv15 | SDv21 | IF | Cogview2 | StyleGAN3 | Midjourneyv5 |
| :---------- | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: |
| Generator | Diffusion | Diffusion | Diffusion | AR | GAN | - |
| Numbers | 30K | 15K | 95K | 22K | 60K | 5K |
| Resolution | 512 | 512 | 256 | 480 | (>=512) | (>=512) |
| Caption | CC15K-val | CC15K-val | CC15K-val | CC15K-val | - | - |
| ImageData Path | ImageData/val/SDv15-CC30K | ImageData/val/SDv21-CC15K | ImageData/val/IF-CC95K | ImageData/val/cogview2-22K | ImageData/val/stylegan3-60K | ImageData/val/Midjourneyv5-5K|
| MetaData Path | MetaData/val/SDv15-CC30K.csv| MetaData/val/SDv21-CC15K.csv | MetaData/val/IF-CC95K.csv | MetaData/val/cogview2-22K.csv | MetaData/val/stylegan3-60K.csv | MetaData/val/Midjourneyv5-5K.csv |
# News
* [2023/07] We open source the [Sentry-Image repository](https://github.com/Inf-imagine/Sentry) and [Sentry-Image Demo & Leaderboard](http://sentry.infimagine.com/).
* [2023/07] We open source the [Sentry-Image dataset](https://huggingface.co/datasets/InfImagine/FakeImageDataset).
Stay tuned for this project! Feel free to contact [[email protected]]([email protected])! 😆
# License
This project is open-sourced under the [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0). These weights and datasets are fully open for academic research and can be used for commercial purposes with official written permission. If you find our open-source models and datasets useful for your business, we welcome your donation to support the development of the next-generation Sentry-Image model. Please contact [[email protected]]([email protected]) for commercial licensing and donation inquiries.
# Citation
The code and model in this repository are mostly developed for or derived from the paper below. Please cite it if you find the repository helpful.
```
@misc{sentry-image-leaderboard,
title = {Sentry-Image Leaderboard},
author = {Zeyu Lu, Di Huang, Chunli Zhang, Chengyue Wu, Xihui Liu, Lei Bai, Wanli Ouyang},
year = {2023},
publisher = {InfImagine, Shanghai AI Laboratory},
howpublished = "\url{https://github.com/Inf-imagine/Sentry}"
},
@misc{lu2023seeing,
title = {Seeing is not always believing: Benchmarking Human and Model Perception of AI-Generated Images},
author = {Zeyu Lu, Di Huang, Lei Bai, Jingjing Qu, Chengyue Wu, Xihui Liu, Wanli Ouyang},
year = {2023},
eprint = {2304.13023},
archivePrefix = {arXiv},
primaryClass = {cs.AI}
}
``` |
defunct-datasets/amazon_reviews_multi | defunct-datasets | 2023-11-02T14:52:21Z | 1,574 | 96 | [
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"task_ids:sentiment-classification",
"task_ids:sentiment-scoring",
"task_ids:topic-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"multilinguality:multilingual",
"source_datasets:original",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:ja",
"language:zh",
"license:other",
"size_categories:100K<n<1M",
"arxiv:2010.02573",
"region:us"
] | [
"summarization",
"text-generation",
"fill-mask",
"text-classification"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- found
language_creators:
- found
language:
- de
- en
- es
- fr
- ja
- zh
license:
- other
multilinguality:
- monolingual
- multilingual
size_categories:
- 100K<n<1M
- 1M<n<10M
source_datasets:
- original
task_categories:
- summarization
- text-generation
- fill-mask
- text-classification
task_ids:
- text-scoring
- language-modeling
- masked-language-modeling
- sentiment-classification
- sentiment-scoring
- topic-classification
paperswithcode_id: null
pretty_name: The Multilingual Amazon Reviews Corpus
dataset_info:
- config_name: all_languages
features:
- name: review_id
dtype: string
- name: product_id
dtype: string
- name: reviewer_id
dtype: string
- name: stars
dtype: int32
- name: review_body
dtype: string
- name: review_title
dtype: string
- name: language
dtype: string
- name: product_category
dtype: string
splits:
- name: train
num_bytes: 364405048
num_examples: 1200000
- name: validation
num_bytes: 9047533
num_examples: 30000
- name: test
num_bytes: 9099141
num_examples: 30000
download_size: 640320386
dataset_size: 382551722
- config_name: de
features:
- name: review_id
dtype: string
- name: product_id
dtype: string
- name: reviewer_id
dtype: string
- name: stars
dtype: int32
- name: review_body
dtype: string
- name: review_title
dtype: string
- name: language
dtype: string
- name: product_category
dtype: string
splits:
- name: train
num_bytes: 64485678
num_examples: 200000
- name: validation
num_bytes: 1605727
num_examples: 5000
- name: test
num_bytes: 1611044
num_examples: 5000
download_size: 94802490
dataset_size: 67702449
- config_name: en
features:
- name: review_id
dtype: string
- name: product_id
dtype: string
- name: reviewer_id
dtype: string
- name: stars
dtype: int32
- name: review_body
dtype: string
- name: review_title
dtype: string
- name: language
dtype: string
- name: product_category
dtype: string
splits:
- name: train
num_bytes: 58601089
num_examples: 200000
- name: validation
num_bytes: 1474672
num_examples: 5000
- name: test
num_bytes: 1460565
num_examples: 5000
download_size: 86094112
dataset_size: 61536326
- config_name: es
features:
- name: review_id
dtype: string
- name: product_id
dtype: string
- name: reviewer_id
dtype: string
- name: stars
dtype: int32
- name: review_body
dtype: string
- name: review_title
dtype: string
- name: language
dtype: string
- name: product_category
dtype: string
splits:
- name: train
num_bytes: 52375658
num_examples: 200000
- name: validation
num_bytes: 1303958
num_examples: 5000
- name: test
num_bytes: 1312347
num_examples: 5000
download_size: 81345461
dataset_size: 54991963
- config_name: fr
features:
- name: review_id
dtype: string
- name: product_id
dtype: string
- name: reviewer_id
dtype: string
- name: stars
dtype: int32
- name: review_body
dtype: string
- name: review_title
dtype: string
- name: language
dtype: string
- name: product_category
dtype: string
splits:
- name: train
num_bytes: 54593565
num_examples: 200000
- name: validation
num_bytes: 1340763
num_examples: 5000
- name: test
num_bytes: 1364510
num_examples: 5000
download_size: 85917293
dataset_size: 57298838
- config_name: ja
features:
- name: review_id
dtype: string
- name: product_id
dtype: string
- name: reviewer_id
dtype: string
- name: stars
dtype: int32
- name: review_body
dtype: string
- name: review_title
dtype: string
- name: language
dtype: string
- name: product_category
dtype: string
splits:
- name: train
num_bytes: 82401390
num_examples: 200000
- name: validation
num_bytes: 2035391
num_examples: 5000
- name: test
num_bytes: 2048048
num_examples: 5000
download_size: 177773783
dataset_size: 86484829
- config_name: zh
features:
- name: review_id
dtype: string
- name: product_id
dtype: string
- name: reviewer_id
dtype: string
- name: stars
dtype: int32
- name: review_body
dtype: string
- name: review_title
dtype: string
- name: language
dtype: string
- name: product_category
dtype: string
splits:
- name: train
num_bytes: 51947668
num_examples: 200000
- name: validation
num_bytes: 1287106
num_examples: 5000
- name: test
num_bytes: 1302711
num_examples: 5000
download_size: 114387247
dataset_size: 54537485
config_names:
- all_languages
- de
- en
- es
- fr
- ja
- zh
viewer: false
---
# Dataset Card for The Multilingual Amazon Reviews Corpus
## Table of Contents
- [Dataset Card for amazon_reviews_multi](#dataset-card-for-amazon_reviews_multi)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [plain_text](#plain_text)
- [Data Fields](#data-fields)
- [plain_text](#plain_text-1)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Webpage:** https://registry.opendata.aws/amazon-reviews-ml/
- **Paper:** https://arxiv.org/abs/2010.02573
- **Point of Contact:** [[email protected]](mailto:[email protected])
### Dataset Summary
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
<p><b>Defunct:</b> Dataset "amazon_reviews_multi" is defunct and no longer accessible due to the decision of data providers.</p>
</div>
We provide an Amazon product reviews dataset for multilingual text classification. The dataset contains reviews in English, Japanese, German, French, Chinese and Spanish, collected between November 1, 2015 and November 1, 2019. Each record in the dataset contains the review text, the review title, the star rating, an anonymized reviewer ID, an anonymized product ID and the coarse-grained product category (e.g. ‘books’, ‘appliances’, etc.) The corpus is balanced across stars, so each star rating constitutes 20% of the reviews in each language.
For each language, there are 200,000, 5,000 and 5,000 reviews in the training, development and test sets respectively. The maximum number of reviews per reviewer is 20 and the maximum number of reviews per product is 20. All reviews are truncated after 2,000 characters, and all reviews are at least 20 characters long.
Note that the language of a review does not necessarily match the language of its marketplace (e.g. reviews from amazon.de are primarily written in German, but could also be written in English, etc.). For this reason, we applied a language detection algorithm based on the work in Bojanowski et al. (2017) to determine the language of the review text and we removed reviews that were not written in the expected language.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset contains reviews in English, Japanese, German, French, Chinese and Spanish.
## Dataset Structure
### Data Instances
Each data instance corresponds to a review. The original JSON for an instance looks like so (German example):
```json
{
"review_id": "de_0784695",
"product_id": "product_de_0572654",
"reviewer_id": "reviewer_de_0645436",
"stars": "1",
"review_body": "Leider, leider nach einmal waschen ausgeblichen . Es sieht super h\u00fcbsch aus , nur leider stinkt es ganz schrecklich und ein Waschgang in der Maschine ist notwendig ! Nach einem mal waschen sah es aus als w\u00e4re es 10 Jahre alt und hatte 1000 e von Waschg\u00e4ngen hinter sich :( echt schade !",
"review_title": "Leider nicht zu empfehlen",
"language": "de",
"product_category": "home"
}
```
### Data Fields
- `review_id`: A string identifier of the review.
- `product_id`: A string identifier of the product being reviewed.
- `reviewer_id`: A string identifier of the reviewer.
- `stars`: An int between 1-5 indicating the number of stars.
- `review_body`: The text body of the review.
- `review_title`: The text title of the review.
- `language`: The string identifier of the review language.
- `product_category`: String representation of the product's category.
### Data Splits
Each language configuration comes with its own `train`, `validation`, and `test` splits. The `all_languages` split
is simply a concatenation of the corresponding split across all languages. That is, the `train` split for
`all_languages` is a concatenation of the `train` splits for each of the languages and likewise for `validation` and
`test`.
## Dataset Creation
### Curation Rationale
The dataset is motivated by the desire to advance sentiment analysis and text classification in other (non-English)
languages.
### Source Data
#### Initial Data Collection and Normalization
The authors gathered the reviews from the marketplaces in the US, Japan, Germany, France, Spain, and China for the
English, Japanese, German, French, Spanish, and Chinese languages, respectively. They then ensured the correct
language by applying a language detection algorithm, only retaining those of the target language. In a random sample
of the resulting reviews, the authors observed a small percentage of target languages that were incorrectly filtered
out and a very few mismatched languages that were incorrectly retained.
#### Who are the source language producers?
The original text comes from Amazon customers reviewing products on the marketplace across a variety of product
categories.
### Annotations
#### Annotation process
Each of the fields included are submitted by the user with the review or otherwise associated with the review. No
manual or machine-driven annotation was necessary.
#### Who are the annotators?
N/A
### Personal and Sensitive Information
According to the original dataset [license terms](https://docs.opendata.aws/amazon-reviews-ml/license.txt), you may not:
- link or associate content in the Reviews Corpus with any personal information (including Amazon customer accounts), or
- attempt to determine the identity of the author of any content in the Reviews Corpus.
If you violate any of the foregoing conditions, your license to access and use the Reviews Corpus will automatically
terminate without prejudice to any of the other rights or remedies Amazon may have.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is part of an effort to encourage text classification research in languages other than English. Such
work increases the accessibility of natural language technology to more regions and cultures. Unfortunately, each of
the languages included here is relatively high resource and well studied.
### Discussion of Biases
The dataset contains only reviews from verified purchases (as described in the paper, section 2.1), and the reviews
should conform to the [Amazon Community Guidelines](https://www.amazon.com/gp/help/customer/display.html?nodeId=GLHXEX85MENUE4XF).
### Other Known Limitations
The dataset is constructed so that the distribution of star ratings is balanced. This feature has some advantages for
purposes of classification, but some types of language may be over or underrepresented relative to the original
distribution of reviews to achieve this balance.
## Additional Information
### Dataset Curators
Published by Phillip Keung, Yichao Lu, György Szarvas, and Noah A. Smith. Managed by Amazon.
### Licensing Information
Amazon has licensed this dataset under its own agreement for non-commercial research usage only. This licence is quite restrictive, preventing use anywhere a fee is received, including paid internships, etc. A copy of the agreement can be found at the dataset webpage here:
https://docs.opendata.aws/amazon-reviews-ml/license.txt
By accessing the Multilingual Amazon Reviews Corpus ("Reviews Corpus"), you agree that the Reviews Corpus is an Amazon Service subject to the [Amazon.com Conditions of Use](https://www.amazon.com/gp/help/customer/display.html/ref=footer_cou?ie=UTF8&nodeId=508088) and you agree to be bound by them, with the following additional conditions:
In addition to the license rights granted under the Conditions of Use, Amazon or its content providers grant you a limited, non-exclusive, non-transferable, non-sublicensable, revocable license to access and use the Reviews Corpus for purposes of academic research. You may not resell, republish, or make any commercial use of the Reviews Corpus or its contents, including use of the Reviews Corpus for commercial research, such as research related to a funding or consultancy contract, internship, or other relationship in which the results are provided for a fee or delivered to a for-profit organization. You may not (a) link or associate content in the Reviews Corpus with any personal information (including Amazon customer accounts), or (b) attempt to determine the identity of the author of any content in the Reviews Corpus. If you violate any of the foregoing conditions, your license to access and use the Reviews Corpus will automatically terminate without prejudice to any of the other rights or remedies Amazon may have.
### Citation Information
Please cite the following paper (arXiv) if you found this dataset useful:
Phillip Keung, Yichao Lu, György Szarvas and Noah A. Smith. “The Multilingual Amazon Reviews Corpus.” In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020.
```
@inproceedings{marc_reviews,
title={The Multilingual Amazon Reviews Corpus},
author={Keung, Phillip and Lu, Yichao and Szarvas, György and Smith, Noah A.},
booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing},
year={2020}
}
```
### Contributions
Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset. |
Malikeh1375/medical-question-answering-datasets | Malikeh1375 | 2023-11-02T03:13:38Z | 1,129 | 47 | [
"task_categories:question-answering",
"language:en",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"medical",
"clinical",
"healthcare"
] | [
"question-answering"
] | 2023-10-27T16:21:07Z | 2 | ---
language:
- en
task_categories:
- question-answering
tags:
- medical
- clinical
- healthcare
dataset_info:
- config_name: all-processed
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 276980695
num_examples: 246678
download_size: 0
dataset_size: 276980695
- config_name: chatdoctor_healthcaremagic
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 126454896
num_examples: 112165
download_size: 70518147
dataset_size: 126454896
- config_name: chatdoctor_icliniq
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 7347194
num_examples: 7321
download_size: 4153680
dataset_size: 7347194
- config_name: medical_meadow_cord19
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 1336834621
num_examples: 821007
download_size: 752855706
dataset_size: 1336834621
- config_name: medical_meadow_health_advice
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 2196957
num_examples: 8676
download_size: 890725
dataset_size: 2196957
- config_name: medical_meadow_medical_flashcards
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 16453987
num_examples: 33955
download_size: 6999958
dataset_size: 16453987
- config_name: medical_meadow_mediqa
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 15690088
num_examples: 2208
download_size: 3719929
dataset_size: 15690088
- config_name: medical_meadow_medqa
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 10225018
num_examples: 10178
download_size: 5505473
dataset_size: 10225018
- config_name: medical_meadow_mmmlu
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 1442124
num_examples: 3787
download_size: 685604
dataset_size: 1442124
- config_name: medical_meadow_pubmed_causal
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 846695
num_examples: 2446
download_size: 210947
dataset_size: 846695
- config_name: medical_meadow_wikidoc
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 10224074
num_examples: 10000
download_size: 5593178
dataset_size: 10224074
- config_name: medical_meadow_wikidoc_patient_information
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 3262558
num_examples: 5942
download_size: 1544286
dataset_size: 3262558
configs:
- config_name: all-processed
data_files:
- split: train
path: all-processed/train-*
- config_name: chatdoctor_healthcaremagic
data_files:
- split: train
path: chatdoctor_healthcaremagic/train-*
- config_name: chatdoctor_icliniq
data_files:
- split: train
path: chatdoctor_icliniq/train-*
- config_name: medical_meadow_cord19
data_files:
- split: train
path: medical_meadow_cord19/train-*
- config_name: medical_meadow_health_advice
data_files:
- split: train
path: medical_meadow_health_advice/train-*
- config_name: medical_meadow_medical_flashcards
data_files:
- split: train
path: medical_meadow_medical_flashcards/train-*
- config_name: medical_meadow_mediqa
data_files:
- split: train
path: medical_meadow_mediqa/train-*
- config_name: medical_meadow_medqa
data_files:
- split: train
path: medical_meadow_medqa/train-*
- config_name: medical_meadow_mmmlu
data_files:
- split: train
path: medical_meadow_mmmlu/train-*
- config_name: medical_meadow_pubmed_causal
data_files:
- split: train
path: medical_meadow_pubmed_causal/train-*
- config_name: medical_meadow_wikidoc
data_files:
- split: train
path: medical_meadow_wikidoc/train-*
- config_name: medical_meadow_wikidoc_patient_information
data_files:
- split: train
path: medical_meadow_wikidoc_patient_information/train-*
---
|
allenai/objaverse-xl | allenai | 2023-10-31T16:46:54Z | 5,182 | 151 | [
"language:en",
"license:odc-by",
"arxiv:2307.05663",
"region:us"
] | [] | 2023-08-17T17:50:21Z | null | ---
license: odc-by
language:
- en
viewer: false
---
# Objaverse-XL
<a href="//arxiv.org/abs/2307.05663" target="_blank">
<img src="https://img.shields.io/badge/arXiv-2307.05663-<COLOR>">
</a>
Objaverse-XL is an open dataset of over 10 million 3D objects!
With it, we train Zero123-XL, a foundation model for 3D, observing incredible 3D generalization abilities.
<img src="https://mattdeitke.com/static/1cdcdb2ef7033e177ca9ae2975a9b451/9c1ca/objaverse-xl.webp">
## Scale Comparison
Objaverse 1.0 was released back in December. It was a step in the right direction, but still relatively small with 800K objects.
Objaverse-XL is over an order of magnitude larger and much more diverse!
<img src="https://github.com/allenai/objaverse-rendering/assets/28768645/43833dd3-ec97-4a3d-8782-00a6aea584b4">
## Unlocking Generalization
Compared to the original Zero123 model, Zero123-XL improves remarkably in 0-shot generalization abilities, even being able to perform novel view synthesis on sketches, cartoons, and people!
A ton more examples in the [📝 paper](https://arxiv.org/abs/2307.05663) :)
<img src="https://github.com/allenai/objaverse-rendering/assets/28768645/8470e4df-e39d-444b-9871-58fbee4b87fd">
## Image → 3D
With the base Zero123-XL foundation model, we can perform image → 3D using [DreamFusion](https://dreamfusion3d.github.io/), having the model guide a NeRF to generate novel views!
<video autoplay muted loop controls>
<source src="https://github.com/allenai/objaverse-rendering/assets/28768645/571852cd-dc02-46ce-b2bb-88f64a67d0ac" type="video/mp4">
</video>
## Text → 3D
Text-to-3D comes for free with text → image models, such as with SDXL here, providing the initial image!
<video autoplay muted loop controls>
<source src="https://github.com/allenai/objaverse-rendering/assets/28768645/96255b42-8158-4c7a-8308-7b0f1257ada8" type="video/mp4">
</video>
## Scaling Trends
Beyond that, we show strong scaling trends for both Zero123-XL and [PixelNeRF](https://alexyu.net/pixelnerf/)!
<img src="https://github.com/allenai/objaverse-rendering/assets/28768645/0c8bb433-27df-43a1-8cb8-1772007c0899">
## Tutorial
Check out the [Google Colab tutorial](https://colab.research.google.com/drive/15XpZMjrHXuky0IgBbXcsUtb_0g-XWYmN?usp=sharing) to download Objaverse-XL.
Polycam data is available by Polycam to academic researchers for non-commercial use upon request and approval from Polycam. For access please fill out [this form](https://forms.gle/HUjYVtS9GKVS5QBXA).
## License
The use of the dataset as a whole is licensed under the ODC-By v1.0 license. Individual objects in Objaverse-XL are licensed under different licenses.
## Citation
To cite Objaverse-XL, please cite our [📝 arXiv](https://arxiv.org/abs/2307.05663) paper with the following BibTeX entry:
```bibtex
@article{objaverseXL,
title={Objaverse-XL: A Universe of 10M+ 3D Objects},
author={Matt Deitke and Ruoshi Liu and Matthew Wallingford and Huong Ngo and
Oscar Michel and Aditya Kusupati and Alan Fan and Christian Laforte and
Vikram Voleti and Samir Yitzhak Gadre and Eli VanderBilt and
Aniruddha Kembhavi and Carl Vondrick and Georgia Gkioxari and
Kiana Ehsani and Ludwig Schmidt and Ali Farhadi},
journal={arXiv preprint arXiv:2307.05663},
year={2023}
}
```
Objaverse 1.0 is available on 🤗Hugging Face at [@allenai/objaverse](https://huggingface.co/datasets/allenai/objaverse). To cite it, use:
```bibtex
@article{objaverse,
title={Objaverse: A Universe of Annotated 3D Objects},
author={Matt Deitke and Dustin Schwenk and Jordi Salvador and Luca Weihs and
Oscar Michel and Eli VanderBilt and Ludwig Schmidt and
Kiana Ehsani and Aniruddha Kembhavi and Ali Farhadi},
journal={arXiv preprint arXiv:2212.08051},
year={2022}
}
```
|
ClementRomac/cleaned_deduplicated_oscar | ClementRomac | 2023-10-25T14:05:19Z | 34,845 | 0 | [
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-03-27T12:42:39Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 978937483730
num_examples: 232133013
- name: test
num_bytes: 59798696914
num_examples: 12329126
download_size: 37220219718
dataset_size: 1038736180644
---
# Dataset Card for "cleaned_deduplicated_oscar"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard-old/details_tiiuae__falcon-180B | open-llm-leaderboard-old | 2023-10-24T10:18:04Z | 55,088 | 1 | [
"region:us"
] | [] | 2023-09-05T08:24:35Z | null | ---
pretty_name: Evaluation run of tiiuae/falcon-180B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [tiiuae/falcon-180B](https://huggingface.co/tiiuae/falcon-180B) on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 66 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 32 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_tiiuae__falcon-180B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-24T10:17:51.759984](https://huggingface.co/datasets/open-llm-leaderboard/details_tiiuae__falcon-180B/blob/main/results_2023-10-24T10-17-51.759984.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0028313758389261743,\n\
\ \"em_stderr\": 0.0005441551135493806,\n \"f1\": 0.06573301174496615,\n\
\ \"f1_stderr\": 0.0013666874377791776,\n \"acc\": 0.6642104078991223,\n\
\ \"acc_stderr\": 0.011605139145295384\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0028313758389261743,\n \"em_stderr\": 0.0005441551135493806,\n\
\ \"f1\": 0.06573301174496615,\n \"f1_stderr\": 0.0013666874377791776\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.45943896891584535,\n \
\ \"acc_stderr\": 0.01372709301042978\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8689818468823993,\n \"acc_stderr\": 0.009483185280160986\n\
\ }\n}\n```"
repo_url: https://huggingface.co/tiiuae/falcon-180B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: [email protected]
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|arc:challenge|25_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|arc:challenge|25_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|arc:challenge|25_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|arc:challenge|25_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|arc:challenge|25_2023-09-01T15:12:02.263774.parquet'
- split: 2023_09_25T09_30_46.601936
path:
- '**/details_harness|arc:challenge|25_2023-09-25T09-30-46.601936.parquet'
- split: 2023_09_25T09_42_43.006060
path:
- '**/details_harness|arc:challenge|25_2023-09-25T09-42-43.006060.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-09-25T09-42-43.006060.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_23T17_29_05.444286
path:
- '**/details_harness|drop|3_2023-10-23T17-29-05.444286.parquet'
- split: 2023_10_24T10_17_51.759984
path:
- '**/details_harness|drop|3_2023-10-24T10-17-51.759984.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-24T10-17-51.759984.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_23T17_29_05.444286
path:
- '**/details_harness|gsm8k|5_2023-10-23T17-29-05.444286.parquet'
- split: 2023_10_24T10_17_51.759984
path:
- '**/details_harness|gsm8k|5_2023-10-24T10-17-51.759984.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-24T10-17-51.759984.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hellaswag|10_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hellaswag|10_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hellaswag|10_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hellaswag|10_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hellaswag|10_2023-09-01T15:12:02.263774.parquet'
- split: 2023_09_25T11_16_10.146827
path:
- '**/details_harness|hellaswag|10_2023-09-25T11-16-10.146827.parquet'
- split: 2023_09_25T11_28_53.879118
path:
- '**/details_harness|hellaswag|10_2023-09-25T11-28-53.879118.parquet'
- split: 2023_09_25T13_20_00.898508
path:
- '**/details_harness|hellaswag|10_2023-09-25T13-20-00.898508.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-09-25T13-20-00.898508.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-30T14:31:39.488381.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-30T19:27:57.090829.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-31T01:32:36.577851.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-31T12:44:38.148712.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-01T15:12:02.263774.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-01T15:12:02.263774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-01T15:12:02.263774.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_30T14_31_39.488381
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-30T14:31:39.488381.parquet'
- split: 2023_08_30T19_27_57.090829
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-30T19:27:57.090829.parquet'
- split: 2023_08_31T01_32_36.577851
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-31T01:32:36.577851.parquet'
- split: 2023_08_31T12_44_38.148712
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-31T12:44:38.148712.parquet'
- split: 2023_09_01T15_12_02.263774
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-01T15:12:02.263774.parquet'
- split: 2023_09_25T09_49_01.514206
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-25T09-49-01.514206.parquet'
- split: 2023_09_25T09_57_43.547983
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-25T09-57-43.547983.parquet'
- split: 2023_09_25T10_06_12.822356
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-25T10-06-12.822356.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-25T10-06-12.822356.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_23T17_29_05.444286
path:
- '**/details_harness|winogrande|5_2023-10-23T17-29-05.444286.parquet'
- split: 2023_10_24T10_17_51.759984
path:
- '**/details_harness|winogrande|5_2023-10-24T10-17-51.759984.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-24T10-17-51.759984.parquet'
- config_name: original_mmlu_5
data_files:
- split: 2023_09_21T14_54_28.631498
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-21T14-54-28.631498.parquet'
- split: 2023_09_21T15_14_19.361952
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-21T15-14-19.361952.parquet'
- split: 2023_09_22T15_08_20.868776
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T15-08-20.868776.parquet'
- split: 2023_09_22T15_09_58.434868
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T15-09-58.434868.parquet'
- split: 2023_09_22T15_40_03.532661
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T15-40-03.532661.parquet'
- split: 2023_09_22T19_13_36.680152
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T19-13-36.680152.parquet'
- split: 2023_09_22T19_25_51.687929
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T19-25-51.687929.parquet'
- split: 2023_09_22T19_38_30.055713
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T19-38-30.055713.parquet'
- split: 2023_09_22T19_56_14.188877
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T19-56-14.188877.parquet'
- split: 2023_09_22T20_44_00.745184
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T20-44-00.745184.parquet'
- split: 2023_09_22T21_16_36.510313
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T21-16-36.510313.parquet'
- split: 2023_09_22T21_30_38.663736
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T21-30-38.663736.parquet'
- split: 2023_09_22T21_39_07.387549
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T21-39-07.387549.parquet'
- split: 2023_09_22T21_46_48.392874
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T21-46-48.392874.parquet'
- split: 2023_09_22T22_06_13.624503
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T22-06-13.624503.parquet'
- split: 2023_09_22T22_21_06.865348
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T22-21-06.865348.parquet'
- split: 2023_09_23T09_44_24.946036
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-23T09-44-24.946036.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-23T09-44-24.946036.parquet'
- config_name: original_mmlu_high_school_government_and_politics_5
data_files:
- split: 2023_09_21T14_54_28.631498
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-21T14-54-28.631498.parquet'
- split: 2023_09_21T15_14_19.361952
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-21T15-14-19.361952.parquet'
- split: 2023_09_22T15_08_20.868776
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T15-08-20.868776.parquet'
- split: 2023_09_22T15_09_58.434868
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T15-09-58.434868.parquet'
- split: 2023_09_22T15_40_03.532661
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T15-40-03.532661.parquet'
- split: 2023_09_22T19_13_36.680152
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T19-13-36.680152.parquet'
- split: 2023_09_22T19_25_51.687929
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T19-25-51.687929.parquet'
- split: 2023_09_22T19_38_30.055713
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T19-38-30.055713.parquet'
- split: 2023_09_22T19_56_14.188877
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T19-56-14.188877.parquet'
- split: 2023_09_22T20_44_00.745184
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T20-44-00.745184.parquet'
- split: 2023_09_22T21_16_36.510313
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T21-16-36.510313.parquet'
- split: 2023_09_22T21_30_38.663736
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T21-30-38.663736.parquet'
- split: 2023_09_22T21_39_07.387549
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T21-39-07.387549.parquet'
- split: 2023_09_22T21_46_48.392874
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T21-46-48.392874.parquet'
- split: 2023_09_22T22_06_13.624503
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T22-06-13.624503.parquet'
- split: 2023_09_22T22_21_06.865348
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-22T22-21-06.865348.parquet'
- split: 2023_09_23T09_44_24.946036
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-23T09-44-24.946036.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-09-23T09-44-24.946036.parquet'
- config_name: results
data_files:
- split: 2023_09_21T14_54_28.631498
path:
- results_2023-09-21T14-54-28.631498.parquet
- split: 2023_09_21T15_14_19.361952
path:
- results_2023-09-21T15-14-19.361952.parquet
- split: 2023_09_22T15_08_20.868776
path:
- results_2023-09-22T15-08-20.868776.parquet
- split: 2023_09_22T15_09_58.434868
path:
- results_2023-09-22T15-09-58.434868.parquet
- split: 2023_09_22T15_40_03.532661
path:
- results_2023-09-22T15-40-03.532661.parquet
- split: 2023_09_22T19_13_36.680152
path:
- results_2023-09-22T19-13-36.680152.parquet
- split: 2023_09_22T19_25_51.687929
path:
- results_2023-09-22T19-25-51.687929.parquet
- split: 2023_09_22T19_38_30.055713
path:
- results_2023-09-22T19-38-30.055713.parquet
- split: 2023_09_22T19_56_14.188877
path:
- results_2023-09-22T19-56-14.188877.parquet
- split: 2023_09_22T20_44_00.745184
path:
- results_2023-09-22T20-44-00.745184.parquet
- split: 2023_09_22T21_16_36.510313
path:
- results_2023-09-22T21-16-36.510313.parquet
- split: 2023_09_22T21_30_38.663736
path:
- results_2023-09-22T21-30-38.663736.parquet
- split: 2023_09_22T21_39_07.387549
path:
- results_2023-09-22T21-39-07.387549.parquet
- split: 2023_09_22T21_46_48.392874
path:
- results_2023-09-22T21-46-48.392874.parquet
- split: 2023_09_22T22_06_13.624503
path:
- results_2023-09-22T22-06-13.624503.parquet
- split: 2023_09_22T22_21_06.865348
path:
- results_2023-09-22T22-21-06.865348.parquet
- split: 2023_09_23T09_44_24.946036
path:
- results_2023-09-23T09-44-24.946036.parquet
- split: 2023_09_25T09_30_46.601936
path:
- results_2023-09-25T09-30-46.601936.parquet
- split: 2023_09_25T09_42_43.006060
path:
- results_2023-09-25T09-42-43.006060.parquet
- split: 2023_09_25T09_49_01.514206
path:
- results_2023-09-25T09-49-01.514206.parquet
- split: 2023_09_25T09_57_43.547983
path:
- results_2023-09-25T09-57-43.547983.parquet
- split: 2023_09_25T10_06_12.822356
path:
- results_2023-09-25T10-06-12.822356.parquet
- split: 2023_09_25T11_16_10.146827
path:
- results_2023-09-25T11-16-10.146827.parquet
- split: 2023_09_25T11_28_53.879118
path:
- results_2023-09-25T11-28-53.879118.parquet
- split: 2023_09_25T13_20_00.898508
path:
- results_2023-09-25T13-20-00.898508.parquet
- split: 2023_10_23T17_29_05.444286
path:
- results_2023-10-23T17-29-05.444286.parquet
- split: 2023_10_24T10_17_51.759984
path:
- results_2023-10-24T10-17-51.759984.parquet
- split: latest
path:
- results_2023-10-24T10-17-51.759984.parquet
---
# Dataset Card for Evaluation run of tiiuae/falcon-180B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/tiiuae/falcon-180B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [tiiuae/falcon-180B](https://huggingface.co/tiiuae/falcon-180B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 66 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 32 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_tiiuae__falcon-180B",
"harness_winogrande_5",
split="train")
```
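The per-run aggregate scores live in the "results" configuration mentioned above. As a minimal sketch (assuming the "results" config and the "latest" split listed in this card's metadata), the most recent aggregated metrics could be loaded like this:
```python
from datasets import load_dataset

# "results" holds the aggregated metrics; the "latest" split points to the most recent run.
results = load_dataset("open-llm-leaderboard/details_tiiuae__falcon-180B",
	"results",
	split="latest")
print(results[0])  # one row with the aggregated metrics for the latest run
```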
## Latest results
These are the [latest results from run 2023-10-24T10:17:51.759984](https://huggingface.co/datasets/open-llm-leaderboard/details_tiiuae__falcon-180B/blob/main/results_2023-10-24T10-17-51.759984.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.0028313758389261743,
"em_stderr": 0.0005441551135493806,
"f1": 0.06573301174496615,
"f1_stderr": 0.0013666874377791776,
"acc": 0.6642104078991223,
"acc_stderr": 0.011605139145295384
},
"harness|drop|3": {
"em": 0.0028313758389261743,
"em_stderr": 0.0005441551135493806,
"f1": 0.06573301174496615,
"f1_stderr": 0.0013666874377791776
},
"harness|gsm8k|5": {
"acc": 0.45943896891584535,
"acc_stderr": 0.01372709301042978
},
"harness|winogrande|5": {
"acc": 0.8689818468823993,
"acc_stderr": 0.009483185280160986
}
}
```
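If you prefer the raw JSON linked above rather than the parquet splits, a small sketch (assuming the filename taken from the link in this section) is to download it with `huggingface_hub`:
```python
import json
from huggingface_hub import hf_hub_download

# Download the results file for the latest run (filename taken from the link above).
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_tiiuae__falcon-180B",
    filename="results_2023-10-24T10-17-51.759984.json",
    repo_type="dataset",
)
with open(path) as f:
    latest = json.load(f)
print(list(latest.keys()))  # inspect the top-level structure before digging into the metrics
```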
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
clouditera/security-paper-datasets | clouditera | 2023-10-16T10:34:13Z | 766 | 99 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-08-25T02:11:45Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 1690579945
num_examples: 428155
download_size: 751689097
dataset_size: 1690579945
---
# Dataset Card for "security-paper-datasets"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
keivalya/MedQuad-MedicalQnADataset | keivalya | 2023-10-11T10:50:41Z | 2,914 | 102 | [
"task_categories:question-answering",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"question-answering",
"text2text-generation"
] | 2023-10-11T10:38:26Z | null | ---
task_categories:
- question-answering
- text2text-generation
pretty_name: MedQuad-KV
---
### Reference:
- "A Question-Entailment Approach to Question Answering". Asma Ben Abacha and Dina Demner-Fushman. BMC Bioinformatics, 2019. |
erhwenkuo/ceval-exam-zhtw | erhwenkuo | 2023-10-10T02:14:55Z | 16,079 | 0 | [
"language:zh",
"license:cc",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2305.08322",
"region:us",
"\"llm-eval\""
] | [] | 2023-10-08T12:22:42Z | null | ---
dataset_info:
- config_name: accountant
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 177004
num_examples: 443
- name: val
num_bytes: 19555
num_examples: 49
- name: dev
num_bytes: 3414
num_examples: 5
download_size: 151561
dataset_size: 199973
- config_name: advanced_mathematics
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 50031
num_examples: 173
- name: val
num_bytes: 5331
num_examples: 19
- name: dev
num_bytes: 7021
num_examples: 5
download_size: 50945
dataset_size: 62383
- config_name: art_studies
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 41230
num_examples: 298
- name: val
num_bytes: 4581
num_examples: 33
- name: dev
num_bytes: 1439
num_examples: 5
download_size: 46573
dataset_size: 47250
- config_name: basic_medicine
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 28820
num_examples: 175
- name: val
num_bytes: 2627
num_examples: 19
- name: dev
num_bytes: 1825
num_examples: 5
download_size: 37502
dataset_size: 33272
- config_name: business_administration
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 78396
num_examples: 301
- name: val
num_bytes: 9225
num_examples: 33
- name: dev
num_bytes: 3155
num_examples: 5
download_size: 75404
dataset_size: 90776
- config_name: chinese_language_and_literature
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 32328
num_examples: 209
- name: val
num_bytes: 3446
num_examples: 23
- name: dev
num_bytes: 1892
num_examples: 5
download_size: 43537
dataset_size: 37666
- config_name: civil_servant
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 181519
num_examples: 429
- name: val
num_bytes: 21273
num_examples: 47
- name: dev
num_bytes: 4576
num_examples: 5
download_size: 180536
dataset_size: 207368
- config_name: clinical_medicine
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 42161
num_examples: 200
- name: val
num_bytes: 4167
num_examples: 22
- name: dev
num_bytes: 1951
num_examples: 5
download_size: 48783
dataset_size: 48279
- config_name: college_chemistry
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 45801
num_examples: 224
- name: val
num_bytes: 4443
num_examples: 24
- name: dev
num_bytes: 3611
num_examples: 5
download_size: 53682
dataset_size: 53855
- config_name: college_economics
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 119746
num_examples: 497
- name: val
num_bytes: 14461
num_examples: 55
- name: dev
num_bytes: 3673
num_examples: 5
download_size: 106480
dataset_size: 137880
- config_name: college_physics
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 55731
num_examples: 176
- name: val
num_bytes: 6145
num_examples: 19
- name: dev
num_bytes: 3824
num_examples: 5
download_size: 62806
dataset_size: 65700
- config_name: college_programming
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 84024
num_examples: 342
- name: val
num_bytes: 9615
num_examples: 37
- name: dev
num_bytes: 2900
num_examples: 5
download_size: 83274
dataset_size: 96539
- config_name: computer_architecture
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 41173
num_examples: 193
- name: val
num_bytes: 4188
num_examples: 21
- name: dev
num_bytes: 2841
num_examples: 5
download_size: 48203
dataset_size: 48202
- config_name: computer_network
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 35495
num_examples: 171
- name: val
num_bytes: 3814
num_examples: 19
- name: dev
num_bytes: 2364
num_examples: 5
download_size: 43988
dataset_size: 41673
- config_name: discrete_mathematics
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 36057
num_examples: 153
- name: val
num_bytes: 3424
num_examples: 16
- name: dev
num_bytes: 2002
num_examples: 5
download_size: 43029
dataset_size: 41483
- config_name: education_science
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 55756
num_examples: 270
- name: val
num_bytes: 5522
num_examples: 29
- name: dev
num_bytes: 3093
num_examples: 5
download_size: 59946
dataset_size: 64371
- config_name: electrical_engineer
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 73769
num_examples: 339
- name: val
num_bytes: 8327
num_examples: 37
- name: dev
num_bytes: 2180
num_examples: 5
download_size: 74147
dataset_size: 84276
- config_name: environmental_impact_assessment_engineer
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 84701
num_examples: 281
- name: val
num_bytes: 9186
num_examples: 31
- name: dev
num_bytes: 2495
num_examples: 5
download_size: 73813
dataset_size: 96382
- config_name: fire_engineer
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 83743
num_examples: 282
- name: val
num_bytes: 10016
num_examples: 31
- name: dev
num_bytes: 2209
num_examples: 5
download_size: 82070
dataset_size: 95968
- config_name: high_school_biology
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 55242
num_examples: 175
- name: val
num_bytes: 6105
num_examples: 19
- name: dev
num_bytes: 2164
num_examples: 5
download_size: 60835
dataset_size: 63511
- config_name: high_school_chemistry
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 46918
num_examples: 172
- name: val
num_bytes: 5625
num_examples: 19
- name: dev
num_bytes: 2576
num_examples: 5
download_size: 55719
dataset_size: 55119
- config_name: high_school_chinese
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 110380
num_examples: 178
- name: val
num_bytes: 10475
num_examples: 19
- name: dev
num_bytes: 5290
num_examples: 5
download_size: 120269
dataset_size: 126145
- config_name: high_school_geography
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 41232
num_examples: 178
- name: val
num_bytes: 3985
num_examples: 19
- name: dev
num_bytes: 2087
num_examples: 5
download_size: 50092
dataset_size: 47304
- config_name: high_school_history
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 56205
num_examples: 182
- name: val
num_bytes: 6624
num_examples: 20
- name: dev
num_bytes: 2421
num_examples: 5
download_size: 68561
dataset_size: 65250
- config_name: high_school_mathematics
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 41095
num_examples: 166
- name: val
num_bytes: 5144
num_examples: 18
- name: dev
num_bytes: 3552
num_examples: 5
download_size: 53179
dataset_size: 49791
- config_name: high_school_physics
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 61682
num_examples: 175
- name: val
num_bytes: 7266
num_examples: 19
- name: dev
num_bytes: 2266
num_examples: 5
download_size: 66481
dataset_size: 71214
- config_name: high_school_politics
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 83428
num_examples: 176
- name: val
num_bytes: 8912
num_examples: 19
- name: dev
num_bytes: 4730
num_examples: 5
download_size: 90433
dataset_size: 97070
- config_name: ideological_and_moral_cultivation
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 35315
num_examples: 172
- name: val
num_bytes: 3241
num_examples: 19
- name: dev
num_bytes: 1296
num_examples: 5
download_size: 41159
dataset_size: 39852
- config_name: law
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 79806
num_examples: 221
- name: val
num_bytes: 8119
num_examples: 24
- name: dev
num_bytes: 4142
num_examples: 5
download_size: 83236
dataset_size: 92067
- config_name: legal_professional
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 122000
num_examples: 215
- name: val
num_bytes: 12215
num_examples: 23
- name: dev
num_bytes: 6974
num_examples: 5
download_size: 125256
dataset_size: 141189
- config_name: logic
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 144288
num_examples: 204
- name: val
num_bytes: 15558
num_examples: 22
- name: dev
num_bytes: 5641
num_examples: 5
download_size: 142564
dataset_size: 165487
- config_name: mao_zedong_thought
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 56708
num_examples: 219
- name: val
num_bytes: 5487
num_examples: 24
- name: dev
num_bytes: 3352
num_examples: 5
download_size: 57948
dataset_size: 65547
- config_name: marxism
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 38674
num_examples: 179
- name: val
num_bytes: 4251
num_examples: 19
- name: dev
num_bytes: 2142
num_examples: 5
download_size: 44933
dataset_size: 45067
- config_name: metrology_engineer
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 47544
num_examples: 219
- name: val
num_bytes: 6134
num_examples: 24
- name: dev
num_bytes: 2485
num_examples: 5
download_size: 54828
dataset_size: 56163
- config_name: middle_school_biology
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 47267
num_examples: 192
- name: val
num_bytes: 5263
num_examples: 21
- name: dev
num_bytes: 4327
num_examples: 5
download_size: 58472
dataset_size: 56857
- config_name: middle_school_chemistry
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 47575
num_examples: 185
- name: val
num_bytes: 5654
num_examples: 20
- name: dev
num_bytes: 3866
num_examples: 5
download_size: 59099
dataset_size: 57095
- config_name: middle_school_geography
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 23332
num_examples: 108
- name: val
num_bytes: 2641
num_examples: 12
- name: dev
num_bytes: 2148
num_examples: 5
download_size: 37389
dataset_size: 28121
- config_name: middle_school_history
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 47076
num_examples: 207
- name: val
num_bytes: 5990
num_examples: 22
- name: dev
num_bytes: 2014
num_examples: 5
download_size: 56042
dataset_size: 55080
- config_name: middle_school_mathematics
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 33142
num_examples: 177
- name: val
num_bytes: 4897
num_examples: 19
- name: dev
num_bytes: 3187
num_examples: 5
download_size: 44657
dataset_size: 41226
- config_name: middle_school_physics
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 48796
num_examples: 178
- name: val
num_bytes: 5279
num_examples: 19
- name: dev
num_bytes: 3531
num_examples: 5
download_size: 59820
dataset_size: 57606
- config_name: middle_school_politics
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 72499
num_examples: 193
- name: val
num_bytes: 7326
num_examples: 21
- name: dev
num_bytes: 3687
num_examples: 5
download_size: 76847
dataset_size: 83512
- config_name: modern_chinese_history
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 51247
num_examples: 212
- name: val
num_bytes: 5188
num_examples: 23
- name: dev
num_bytes: 2983
num_examples: 5
download_size: 59728
dataset_size: 59418
- config_name: operating_system
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 31467
num_examples: 179
- name: val
num_bytes: 3335
num_examples: 19
- name: dev
num_bytes: 2611
num_examples: 5
download_size: 40349
dataset_size: 37413
- config_name: physician
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 89819
num_examples: 443
- name: val
num_bytes: 8713
num_examples: 49
- name: dev
num_bytes: 2033
num_examples: 5
download_size: 91464
dataset_size: 100565
- config_name: plant_protection
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 31877
num_examples: 199
- name: val
num_bytes: 3634
num_examples: 22
- name: dev
num_bytes: 3726
num_examples: 5
download_size: 42813
dataset_size: 39237
- config_name: probability_and_statistics
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 56749
num_examples: 166
- name: val
num_bytes: 5781
num_examples: 18
- name: dev
num_bytes: 6769
num_examples: 5
download_size: 63258
dataset_size: 69299
- config_name: professional_tour_guide
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 41231
num_examples: 266
- name: val
num_bytes: 4509
num_examples: 29
- name: dev
num_bytes: 1764
num_examples: 5
download_size: 51642
dataset_size: 47504
- config_name: sports_science
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 32536
num_examples: 180
- name: val
num_bytes: 3493
num_examples: 19
- name: dev
num_bytes: 4182
num_examples: 5
download_size: 45905
dataset_size: 40211
- config_name: tax_accountant
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 174509
num_examples: 443
- name: val
num_bytes: 18938
num_examples: 49
- name: dev
num_bytes: 4274
num_examples: 5
download_size: 148037
dataset_size: 197721
- config_name: teacher_qualification
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 107372
num_examples: 399
- name: val
num_bytes: 12220
num_examples: 44
- name: dev
num_bytes: 3212
num_examples: 5
download_size: 105439
dataset_size: 122804
- config_name: urban_and_rural_planner
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 110473
num_examples: 418
- name: val
num_bytes: 12793
num_examples: 46
- name: dev
num_bytes: 3184
num_examples: 5
download_size: 101932
dataset_size: 126450
- config_name: veterinary_medicine
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: test
num_bytes: 39465
num_examples: 210
- name: val
num_bytes: 4562
num_examples: 23
- name: dev
num_bytes: 2365
num_examples: 5
download_size: 48753
dataset_size: 46392
configs:
- config_name: accountant
data_files:
- split: test
path: accountant/test-*
- split: val
path: accountant/val-*
- split: dev
path: accountant/dev-*
- config_name: advanced_mathematics
data_files:
- split: test
path: advanced_mathematics/test-*
- split: val
path: advanced_mathematics/val-*
- split: dev
path: advanced_mathematics/dev-*
- config_name: art_studies
data_files:
- split: test
path: art_studies/test-*
- split: val
path: art_studies/val-*
- split: dev
path: art_studies/dev-*
- config_name: basic_medicine
data_files:
- split: test
path: basic_medicine/test-*
- split: val
path: basic_medicine/val-*
- split: dev
path: basic_medicine/dev-*
- config_name: business_administration
data_files:
- split: test
path: business_administration/test-*
- split: val
path: business_administration/val-*
- split: dev
path: business_administration/dev-*
- config_name: chinese_language_and_literature
data_files:
- split: test
path: chinese_language_and_literature/test-*
- split: val
path: chinese_language_and_literature/val-*
- split: dev
path: chinese_language_and_literature/dev-*
- config_name: civil_servant
data_files:
- split: test
path: civil_servant/test-*
- split: val
path: civil_servant/val-*
- split: dev
path: civil_servant/dev-*
- config_name: clinical_medicine
data_files:
- split: test
path: clinical_medicine/test-*
- split: val
path: clinical_medicine/val-*
- split: dev
path: clinical_medicine/dev-*
- config_name: college_chemistry
data_files:
- split: test
path: college_chemistry/test-*
- split: val
path: college_chemistry/val-*
- split: dev
path: college_chemistry/dev-*
- config_name: college_economics
data_files:
- split: test
path: college_economics/test-*
- split: val
path: college_economics/val-*
- split: dev
path: college_economics/dev-*
- config_name: college_physics
data_files:
- split: test
path: college_physics/test-*
- split: val
path: college_physics/val-*
- split: dev
path: college_physics/dev-*
- config_name: college_programming
data_files:
- split: test
path: college_programming/test-*
- split: val
path: college_programming/val-*
- split: dev
path: college_programming/dev-*
- config_name: computer_architecture
data_files:
- split: test
path: computer_architecture/test-*
- split: val
path: computer_architecture/val-*
- split: dev
path: computer_architecture/dev-*
- config_name: computer_network
data_files:
- split: test
path: computer_network/test-*
- split: val
path: computer_network/val-*
- split: dev
path: computer_network/dev-*
- config_name: discrete_mathematics
data_files:
- split: test
path: discrete_mathematics/test-*
- split: val
path: discrete_mathematics/val-*
- split: dev
path: discrete_mathematics/dev-*
- config_name: education_science
data_files:
- split: test
path: education_science/test-*
- split: val
path: education_science/val-*
- split: dev
path: education_science/dev-*
- config_name: electrical_engineer
data_files:
- split: test
path: electrical_engineer/test-*
- split: val
path: electrical_engineer/val-*
- split: dev
path: electrical_engineer/dev-*
- config_name: environmental_impact_assessment_engineer
data_files:
- split: test
path: environmental_impact_assessment_engineer/test-*
- split: val
path: environmental_impact_assessment_engineer/val-*
- split: dev
path: environmental_impact_assessment_engineer/dev-*
- config_name: fire_engineer
data_files:
- split: test
path: fire_engineer/test-*
- split: val
path: fire_engineer/val-*
- split: dev
path: fire_engineer/dev-*
- config_name: high_school_biology
data_files:
- split: test
path: high_school_biology/test-*
- split: val
path: high_school_biology/val-*
- split: dev
path: high_school_biology/dev-*
- config_name: high_school_chemistry
data_files:
- split: test
path: high_school_chemistry/test-*
- split: val
path: high_school_chemistry/val-*
- split: dev
path: high_school_chemistry/dev-*
- config_name: high_school_chinese
data_files:
- split: test
path: high_school_chinese/test-*
- split: val
path: high_school_chinese/val-*
- split: dev
path: high_school_chinese/dev-*
- config_name: high_school_geography
data_files:
- split: test
path: high_school_geography/test-*
- split: val
path: high_school_geography/val-*
- split: dev
path: high_school_geography/dev-*
- config_name: high_school_history
data_files:
- split: test
path: high_school_history/test-*
- split: val
path: high_school_history/val-*
- split: dev
path: high_school_history/dev-*
- config_name: high_school_mathematics
data_files:
- split: test
path: high_school_mathematics/test-*
- split: val
path: high_school_mathematics/val-*
- split: dev
path: high_school_mathematics/dev-*
- config_name: high_school_physics
data_files:
- split: test
path: high_school_physics/test-*
- split: val
path: high_school_physics/val-*
- split: dev
path: high_school_physics/dev-*
- config_name: high_school_politics
data_files:
- split: test
path: high_school_politics/test-*
- split: val
path: high_school_politics/val-*
- split: dev
path: high_school_politics/dev-*
- config_name: ideological_and_moral_cultivation
data_files:
- split: test
path: ideological_and_moral_cultivation/test-*
- split: val
path: ideological_and_moral_cultivation/val-*
- split: dev
path: ideological_and_moral_cultivation/dev-*
- config_name: law
data_files:
- split: test
path: law/test-*
- split: val
path: law/val-*
- split: dev
path: law/dev-*
- config_name: legal_professional
data_files:
- split: test
path: legal_professional/test-*
- split: val
path: legal_professional/val-*
- split: dev
path: legal_professional/dev-*
- config_name: logic
data_files:
- split: test
path: logic/test-*
- split: val
path: logic/val-*
- split: dev
path: logic/dev-*
- config_name: mao_zedong_thought
data_files:
- split: test
path: mao_zedong_thought/test-*
- split: val
path: mao_zedong_thought/val-*
- split: dev
path: mao_zedong_thought/dev-*
- config_name: marxism
data_files:
- split: test
path: marxism/test-*
- split: val
path: marxism/val-*
- split: dev
path: marxism/dev-*
- config_name: metrology_engineer
data_files:
- split: test
path: metrology_engineer/test-*
- split: val
path: metrology_engineer/val-*
- split: dev
path: metrology_engineer/dev-*
- config_name: middle_school_biology
data_files:
- split: test
path: middle_school_biology/test-*
- split: val
path: middle_school_biology/val-*
- split: dev
path: middle_school_biology/dev-*
- config_name: middle_school_chemistry
data_files:
- split: test
path: middle_school_chemistry/test-*
- split: val
path: middle_school_chemistry/val-*
- split: dev
path: middle_school_chemistry/dev-*
- config_name: middle_school_geography
data_files:
- split: test
path: middle_school_geography/test-*
- split: val
path: middle_school_geography/val-*
- split: dev
path: middle_school_geography/dev-*
- config_name: middle_school_history
data_files:
- split: test
path: middle_school_history/test-*
- split: val
path: middle_school_history/val-*
- split: dev
path: middle_school_history/dev-*
- config_name: middle_school_mathematics
data_files:
- split: test
path: middle_school_mathematics/test-*
- split: val
path: middle_school_mathematics/val-*
- split: dev
path: middle_school_mathematics/dev-*
- config_name: middle_school_physics
data_files:
- split: test
path: middle_school_physics/test-*
- split: val
path: middle_school_physics/val-*
- split: dev
path: middle_school_physics/dev-*
- config_name: middle_school_politics
data_files:
- split: test
path: middle_school_politics/test-*
- split: val
path: middle_school_politics/val-*
- split: dev
path: middle_school_politics/dev-*
- config_name: modern_chinese_history
data_files:
- split: test
path: modern_chinese_history/test-*
- split: val
path: modern_chinese_history/val-*
- split: dev
path: modern_chinese_history/dev-*
- config_name: operating_system
data_files:
- split: test
path: operating_system/test-*
- split: val
path: operating_system/val-*
- split: dev
path: operating_system/dev-*
- config_name: physician
data_files:
- split: test
path: physician/test-*
- split: val
path: physician/val-*
- split: dev
path: physician/dev-*
- config_name: plant_protection
data_files:
- split: test
path: plant_protection/test-*
- split: val
path: plant_protection/val-*
- split: dev
path: plant_protection/dev-*
- config_name: probability_and_statistics
data_files:
- split: test
path: probability_and_statistics/test-*
- split: val
path: probability_and_statistics/val-*
- split: dev
path: probability_and_statistics/dev-*
- config_name: professional_tour_guide
data_files:
- split: test
path: professional_tour_guide/test-*
- split: val
path: professional_tour_guide/val-*
- split: dev
path: professional_tour_guide/dev-*
- config_name: sports_science
data_files:
- split: test
path: sports_science/test-*
- split: val
path: sports_science/val-*
- split: dev
path: sports_science/dev-*
- config_name: tax_accountant
data_files:
- split: test
path: tax_accountant/test-*
- split: val
path: tax_accountant/val-*
- split: dev
path: tax_accountant/dev-*
- config_name: teacher_qualification
data_files:
- split: test
path: teacher_qualification/test-*
- split: val
path: teacher_qualification/val-*
- split: dev
path: teacher_qualification/dev-*
- config_name: urban_and_rural_planner
data_files:
- split: test
path: urban_and_rural_planner/test-*
- split: val
path: urban_and_rural_planner/val-*
- split: dev
path: urban_and_rural_planner/dev-*
- config_name: veterinary_medicine
data_files:
- split: test
path: veterinary_medicine/test-*
- split: val
path: veterinary_medicine/val-*
- split: dev
path: veterinary_medicine/dev-*
license: cc
language:
- zh
tags:
- '"llm-eval"'
---
# Dataset Card for "ceval-exam-zhtw"
C-Eval is a comprehensive Chinese evaluation suite for foundation models. It consists of 13,948 multiple-choice questions spanning 52 diverse disciplines and four difficulty levels. See the [original website](https://cevalbenchmark.com/), the [GitHub repository](https://github.com/SJTU-LIT/ceval/tree/main), or the [paper](https://arxiv.org/abs/2305.08322) for more details.
The original C-Eval data is written in Simplified Chinese and was designed to evaluate Simplified-Chinese LLMs; this dataset converts it to Traditional Chinese with OpenCC, mainly to ease the development and evaluation of Traditional-Chinese LLMs.
## Download
Load the dataset directly with Hugging Face `datasets`:
```python
from datasets import load_dataset
dataset = load_dataset("erhwenkuo/ceval-exam-zhtw", name="computer_network")
print(dataset['val'][0])
# {'id': 0, 'question': '使用位填充方法,以01111110為位首flag,資料為011011111111111111110010,求問傳送時要新增幾個0____', 'A': '1', 'B': '2', 'C': '3', 'D': '4', 'answer': 'C', 'explanation': ''}
```
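Since the suite covers 52 subject configurations, the snippet below (a minimal sketch using the standard `datasets` API) enumerates them and loads the 5-question `dev` split of one subject for few-shot prompting:
```python
from datasets import get_dataset_config_names, load_dataset

# Enumerate every subject configuration available in this repository
subjects = get_dataset_config_names("erhwenkuo/ceval-exam-zhtw")
print(len(subjects), subjects[:5])

# Load the 5-question dev split of one subject, e.g. for building few-shot prompts
dev = load_dataset("erhwenkuo/ceval-exam-zhtw", name="advanced_mathematics", split="dev")
for row in dev:
    print(row["question"], row["answer"])
```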
## License
The C-Eval dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
## Citation
If you use this dataset, please cite the original C-Eval paper.
```
@article{huang2023ceval,
title={C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models},
author={Huang, Yuzhen and Bai, Yuzhuo and Zhu, Zhihao and Zhang, Junlei and Zhang, Jinghan and Su, Tangjun and Liu, Junteng and Lv, Chuancheng and Zhang, Yikai and Lei, Jiayi and Fu, Yao and Sun, Maosong and He, Junxian},
journal={arXiv preprint arXiv:2305.08322},
year={2023}
}
``` |
jackhhao/jailbreak-classification | jackhhao | 2023-09-30T01:55:08Z | 2,155 | 57 | [
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"jailbreak",
"security",
"moderation"
] | [
"text-classification"
] | 2023-09-30T00:56:39Z | 2 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
tags:
- jailbreak
- security
- moderation
pretty_name: Jailbreak Classification
size_categories:
- 10K<n<100K
configs:
- config_name: default
data_files:
- split: train
path: "balanced/jailbreak_dataset_train_balanced.csv"
- split: test
path: "balanced/jailbreak_dataset_test_balanced.csv"
---
# Jailbreak Classification
### Dataset Summary
Dataset used to classify prompts as jailbreak vs. benign.
## Dataset Structure
### Data Fields
- `prompt`: an LLM prompt
- `type`: classification label, either `jailbreak` or `benign`
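A minimal loading sketch (standard `datasets` API; field names as listed above) to inspect an example and check the label balance:
```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("jackhhao/jailbreak-classification")

# Peek at one training example and count the two labels
print(ds["train"][0])
print(Counter(ds["train"]["type"]))
```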
## Dataset Creation
### Curation Rationale
Created to help detect & prevent harmful jailbreak prompts when users interact with LLMs.
### Source Data
Jailbreak prompts sourced from: <https://github.com/verazuo/jailbreak_llms>
Benign prompts sourced from:
- [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca)
- <https://github.com/teknium1/GPTeacher> |
glaiveai/glaive-function-calling-v2 | glaiveai | 2023-09-27T18:04:08Z | 1,522 | 427 | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation"
] | 2023-08-15T19:31:27Z | null | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
size_categories:
- 100K<n<1M
--- |
amitness/logits-italian-128 | amitness | 2023-09-21T13:43:52Z | 24,873 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-08-13T17:48:19Z | null | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: teacher_logits
sequence:
sequence: float64
- name: teacher_indices
sequence:
sequence: int64
- name: teacher_mask_indices
sequence: int64
splits:
- name: train
num_bytes: 37616201036
num_examples: 8305825
download_size: 16084893126
dataset_size: 37616201036
---
# Dataset Card for "logits-italian-128"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
knowrohit07/know_sql | knowrohit07 | 2023-09-20T20:13:06Z | 426 | 112 | [
"license:openrail",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-09-16T12:18:52Z | null | ---
license: openrail
---
Please use the val ign file for training; it's much cleaner. Thanks :) |
edbeeching/gia-dataset-tokenized-2024-2 | edbeeching | 2023-09-15T11:03:29Z | 330,932 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-09-15T08:07:15Z | null | ---
dataset_info:
- config_name: atari-alien
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: loss_mask
sequence: bool
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_ids
sequence: int32
- name: input_types
sequence: int64
- name: local_positions
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2427492496
num_examples: 1836
download_size: 197411801
dataset_size: 2427492496
- config_name: atari-amidar
features:
- name: loss_mask
sequence: bool
- name: local_positions
sequence: int64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_ids
sequence: int32
- name: input_types
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 23292403388
num_examples: 17641
- name: test
num_bytes: 2157941388
num_examples: 1637
download_size: 1619960876
dataset_size: 25450344776
- config_name: atari-assault
features:
- name: loss_mask
sequence: bool
- name: local_positions
sequence: int64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_ids
sequence: int32
- name: input_types
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 23077576568
num_examples: 17434
- name: test
num_bytes: 1898092400
num_examples: 1436
download_size: 760479036
dataset_size: 24975668968
- config_name: atari-asterix
features:
- name: local_positions
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: loss_mask
sequence: bool
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 25094377660
num_examples: 19161
download_size: 943683526
dataset_size: 25094377660
- config_name: atari-asteroids
features:
- name: local_positions
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: loss_mask
sequence: bool
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 22677165856
num_examples: 17112
download_size: 807221186
dataset_size: 22677165856
- config_name: atari-atlantis
features:
- name: local_positions
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: loss_mask
sequence: bool
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 22825149408
num_examples: 17240
download_size: 745609354
dataset_size: 22825149408
- config_name: atari-bankheist
features:
- name: input_types
sequence: int64
- name: local_positions
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: input_ids
sequence: int32
- name: loss_mask
sequence: bool
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 23741888116
num_examples: 18043
- name: test
num_bytes: 2701097304
num_examples: 2050
download_size: 2847993069
dataset_size: 26442985420
- config_name: atari-battlezone
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: local_positions
sequence: int64
- name: loss_mask
sequence: bool
- name: input_types
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2683381416
num_examples: 2030
download_size: 162167846
dataset_size: 2683381416
- config_name: atari-berzerk
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: loss_mask
sequence: bool
- name: local_positions
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2683232284
num_examples: 2025
download_size: 98071291
dataset_size: 2683232284
- config_name: atari-bowling
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: loss_mask
sequence: bool
- name: local_positions
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2638612892
num_examples: 2001
download_size: 57099861
dataset_size: 2638612892
- config_name: atari-boxing
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: loss_mask
sequence: bool
- name: local_positions
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2925635312
num_examples: 2252
download_size: 154591181
dataset_size: 2925635312
- config_name: atari-breakout
features:
- name: loss_mask
sequence: bool
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 21372025124
num_examples: 16135
- name: test
num_bytes: 2843462328
num_examples: 2146
download_size: 740521401
dataset_size: 24215487452
- config_name: atari-centipede
features:
- name: loss_mask
sequence: bool
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 24525541956
num_examples: 18727
- name: test
num_bytes: 2743854332
num_examples: 2097
download_size: 886355860
dataset_size: 27269396288
- config_name: atari-choppercommand
features:
- name: loss_mask
sequence: bool
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 21916144968
num_examples: 16598
- name: test
num_bytes: 3130204472
num_examples: 2370
download_size: 1120222280
dataset_size: 25046349440
- config_name: atari-crazyclimber
features:
- name: input_types
sequence: int64
- name: loss_mask
sequence: bool
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: local_positions
sequence: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2452295076
num_examples: 1855
download_size: 147409815
dataset_size: 2452295076
- config_name: atari-defender
features:
- name: input_types
sequence: int64
- name: loss_mask
sequence: bool
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: local_positions
sequence: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2667101644
num_examples: 2013
download_size: 76162534
dataset_size: 2667101644
- config_name: atari-demonattack
features:
- name: input_types
sequence: int64
- name: loss_mask
sequence: bool
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: local_positions
sequence: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2655965584
num_examples: 2004
download_size: 71540075
dataset_size: 2655965584
- config_name: atari-doubledunk
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: local_positions
sequence: int64
- name: input_ids
sequence: int32
- name: input_types
sequence: int64
- name: loss_mask
sequence: bool
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2654251456
num_examples: 2032
download_size: 140407266
dataset_size: 2654251456
- config_name: atari-fishingderby
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: local_positions
sequence: int64
- name: input_ids
sequence: int32
- name: input_types
sequence: int64
- name: loss_mask
sequence: bool
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2865449308
num_examples: 2177
download_size: 236590614
dataset_size: 2865449308
- config_name: atari-freeway
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: local_positions
sequence: int64
- name: input_ids
sequence: int32
- name: input_types
sequence: int64
- name: loss_mask
sequence: bool
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2646386200
num_examples: 2002
download_size: 182728240
dataset_size: 2646386200
- config_name: atari-frostbite
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: loss_mask
sequence: bool
- name: input_ids
sequence: int32
- name: input_types
sequence: int64
- name: local_positions
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 23145553316
num_examples: 17551
- name: test
num_bytes: 2683086716
num_examples: 2033
download_size: 1661407235
dataset_size: 25828640032
- config_name: atari-gravitar
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: loss_mask
sequence: bool
- name: input_ids
sequence: int32
- name: input_types
sequence: int64
- name: local_positions
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 26186279752
num_examples: 20126
- name: test
num_bytes: 2990268724
num_examples: 2299
download_size: 939142901
dataset_size: 29176548476
- config_name: atari-hero
features:
- name: input_ids
sequence: int32
- name: loss_mask
sequence: bool
- name: local_positions
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_types
sequence: int64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2756503068
num_examples: 2089
download_size: 131026317
dataset_size: 2756503068
- config_name: atari-icehockey
features:
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: loss_mask
sequence: bool
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2538945980
num_examples: 1921
download_size: 89405392
dataset_size: 2538945980
- config_name: atari-jamesbond
features:
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: loss_mask
sequence: bool
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 4473778328
num_examples: 3378
download_size: 224917482
dataset_size: 4473778328
- config_name: atari-kangaroo
features:
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: loss_mask
sequence: bool
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2993217516
num_examples: 2285
download_size: 140119408
dataset_size: 2993217516
- config_name: atari-mspacman
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: loss_mask
sequence: bool
- name: input_types
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2479651844
num_examples: 1879
download_size: 217259145
dataset_size: 2479651844
- config_name: atari-namethisgame
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: loss_mask
sequence: bool
- name: input_types
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 3006648420
num_examples: 2271
download_size: 158870157
dataset_size: 3006648420
- config_name: atari-phoenix
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: loss_mask
sequence: bool
- name: input_types
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2655773200
num_examples: 2004
download_size: 79861580
dataset_size: 2655773200
- config_name: atari-qbert
features:
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: input_types
sequence: int64
- name: loss_mask
sequence: bool
- name: local_positions
sequence: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2547887868
num_examples: 1929
download_size: 174392419
dataset_size: 2547887868
- config_name: atari-riverraid
features:
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: input_types
sequence: int64
- name: loss_mask
sequence: bool
- name: local_positions
sequence: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2555182372
num_examples: 1943
download_size: 174672084
dataset_size: 2555182372
- config_name: atari-roadrunner
features:
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: input_types
sequence: int64
- name: loss_mask
sequence: bool
- name: local_positions
sequence: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2521407028
num_examples: 1915
download_size: 125390334
dataset_size: 2521407028
- config_name: atari-robotank
features:
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: loss_mask
sequence: bool
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_types
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 22475017052
num_examples: 16985
- name: test
num_bytes: 2229677068
num_examples: 1685
download_size: 1298755118
dataset_size: 24704694120
- config_name: atari-seaquest
features:
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: loss_mask
sequence: bool
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_types
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 23841045496
num_examples: 18114
- name: test
num_bytes: 2738008960
num_examples: 2080
download_size: 910338340
dataset_size: 26579054456
- config_name: atari-skiing
features:
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: loss_mask
sequence: bool
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_types
sequence: int64
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 26305597476
num_examples: 20359
- name: test
num_bytes: 2941523916
num_examples: 2277
download_size: 1797518108
dataset_size: 29247121392
- config_name: atari-solaris
features:
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: local_positions
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: loss_mask
sequence: bool
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2273188716
num_examples: 1717
download_size: 126936781
dataset_size: 2273188716
- config_name: atari-spaceinvaders
features:
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: local_positions
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: loss_mask
sequence: bool
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 4137369016
num_examples: 3122
download_size: 146426375
dataset_size: 4137369016
- config_name: atari-stargunner
features:
- name: input_types
sequence: int64
- name: input_ids
sequence: int32
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: local_positions
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: loss_mask
sequence: bool
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2565341980
num_examples: 1937
download_size: 72577790
dataset_size: 2565341980
- config_name: atari-surround
features:
- name: loss_mask
sequence: bool
- name: local_positions
sequence: int64
- name: input_types
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_ids
sequence: int32
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 22468793380
num_examples: 17023
- name: test
num_bytes: 2933488488
num_examples: 2222
download_size: 904796125
dataset_size: 25402281868
- config_name: atari-tennis
features:
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: loss_mask
sequence: bool
- name: input_types
sequence: int64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2484015692
num_examples: 1877
download_size: 95167453
dataset_size: 2484015692
- config_name: atari-timepilot
features:
- name: input_ids
sequence: int32
- name: local_positions
sequence: int64
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: loss_mask
sequence: bool
- name: input_types
sequence: int64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 2558172240
num_examples: 1932
download_size: 86471773
dataset_size: 2558172240
- config_name: atari-tutankham
features:
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: local_positions
sequence: int64
- name: input_types
sequence: int64
- name: loss_mask
sequence: bool
- name: input_ids
sequence: int32
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: attention_mask
sequence: bool
splits:
- name: test
num_bytes: 3517105220
num_examples: 2655
download_size: 144491974
dataset_size: 3517105220
- config_name: atari-videopinball
features:
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_types
sequence: int64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: local_positions
sequence: int64
- name: loss_mask
sequence: bool
- name: input_ids
sequence: int32
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 22581644248
num_examples: 17042
- name: test
num_bytes: 856644644
num_examples: 647
download_size: 1483962740
dataset_size: 23438288892
- config_name: atari-wizardofwor
features:
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: input_types
sequence: int64
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: local_positions
sequence: int64
- name: loss_mask
sequence: bool
- name: input_ids
sequence: int32
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 22744043928
num_examples: 17218
- name: test
num_bytes: 2648734220
num_examples: 2005
download_size: 1739703310
dataset_size: 25392778148
- config_name: atari-yarsrevenge
features:
- name: input_types
sequence: int64
- name: loss_mask
sequence: bool
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: local_positions
sequence: int64
- name: input_ids
sequence: int32
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 22080700236
num_examples: 16669
- name: test
num_bytes: 2579104820
num_examples: 1947
download_size: 3451148232
dataset_size: 24659805056
- config_name: atari-zaxxon
features:
- name: input_types
sequence: int64
- name: loss_mask
sequence: bool
- name: patch_positions
sequence:
sequence:
sequence: float64
- name: local_positions
sequence: int64
- name: input_ids
sequence: int32
- name: patches
sequence:
sequence:
sequence:
sequence: uint8
- name: attention_mask
sequence: bool
splits:
- name: train
num_bytes: 22058040148
num_examples: 16667
- name: test
num_bytes: 2768806832
num_examples: 2092
download_size: 1229966010
dataset_size: 24826846980
configs:
- config_name: atari-alien
data_files:
- split: test
path: atari-alien/test-*
- config_name: atari-amidar
data_files:
- split: train
path: atari-amidar/train-*
- split: test
path: atari-amidar/test-*
- config_name: atari-assault
data_files:
- split: train
path: atari-assault/train-*
- split: test
path: atari-assault/test-*
- config_name: atari-asterix
data_files:
- split: train
path: atari-asterix/train-*
- config_name: atari-asteroids
data_files:
- split: train
path: atari-asteroids/train-*
- config_name: atari-atlantis
data_files:
- split: train
path: atari-atlantis/train-*
- config_name: atari-bankheist
data_files:
- split: train
path: atari-bankheist/train-*
- split: test
path: atari-bankheist/test-*
- config_name: atari-battlezone
data_files:
- split: test
path: atari-battlezone/test-*
- config_name: atari-berzerk
data_files:
- split: test
path: atari-berzerk/test-*
- config_name: atari-bowling
data_files:
- split: test
path: atari-bowling/test-*
- config_name: atari-boxing
data_files:
- split: test
path: atari-boxing/test-*
- config_name: atari-breakout
data_files:
- split: train
path: atari-breakout/train-*
- split: test
path: atari-breakout/test-*
- config_name: atari-centipede
data_files:
- split: train
path: atari-centipede/train-*
- split: test
path: atari-centipede/test-*
- config_name: atari-choppercommand
data_files:
- split: train
path: atari-choppercommand/train-*
- split: test
path: atari-choppercommand/test-*
- config_name: atari-crazyclimber
data_files:
- split: test
path: atari-crazyclimber/test-*
- config_name: atari-defender
data_files:
- split: test
path: atari-defender/test-*
- config_name: atari-demonattack
data_files:
- split: test
path: atari-demonattack/test-*
- config_name: atari-doubledunk
data_files:
- split: test
path: atari-doubledunk/test-*
- config_name: atari-fishingderby
data_files:
- split: test
path: atari-fishingderby/test-*
- config_name: atari-freeway
data_files:
- split: test
path: atari-freeway/test-*
- config_name: atari-frostbite
data_files:
- split: train
path: atari-frostbite/train-*
- split: test
path: atari-frostbite/test-*
- config_name: atari-gravitar
data_files:
- split: train
path: atari-gravitar/train-*
- split: test
path: atari-gravitar/test-*
- config_name: atari-hero
data_files:
- split: test
path: atari-hero/test-*
- config_name: atari-icehockey
data_files:
- split: test
path: atari-icehockey/test-*
- config_name: atari-jamesbond
data_files:
- split: test
path: atari-jamesbond/test-*
- config_name: atari-kangaroo
data_files:
- split: test
path: atari-kangaroo/test-*
- config_name: atari-mspacman
data_files:
- split: test
path: atari-mspacman/test-*
- config_name: atari-namethisgame
data_files:
- split: test
path: atari-namethisgame/test-*
- config_name: atari-phoenix
data_files:
- split: test
path: atari-phoenix/test-*
- config_name: atari-qbert
data_files:
- split: test
path: atari-qbert/test-*
- config_name: atari-riverraid
data_files:
- split: test
path: atari-riverraid/test-*
- config_name: atari-roadrunner
data_files:
- split: test
path: atari-roadrunner/test-*
- config_name: atari-robotank
data_files:
- split: train
path: atari-robotank/train-*
- split: test
path: atari-robotank/test-*
- config_name: atari-seaquest
data_files:
- split: train
path: atari-seaquest/train-*
- split: test
path: atari-seaquest/test-*
- config_name: atari-skiing
data_files:
- split: train
path: atari-skiing/train-*
- split: test
path: atari-skiing/test-*
- config_name: atari-solaris
data_files:
- split: test
path: atari-solaris/test-*
- config_name: atari-spaceinvaders
data_files:
- split: test
path: atari-spaceinvaders/test-*
- config_name: atari-stargunner
data_files:
- split: test
path: atari-stargunner/test-*
- config_name: atari-surround
data_files:
- split: train
path: atari-surround/train-*
- split: test
path: atari-surround/test-*
- config_name: atari-tennis
data_files:
- split: test
path: atari-tennis/test-*
- config_name: atari-timepilot
data_files:
- split: test
path: atari-timepilot/test-*
- config_name: atari-tutankham
data_files:
- split: test
path: atari-tutankham/test-*
- config_name: atari-videopinball
data_files:
- split: train
path: atari-videopinball/train-*
- split: test
path: atari-videopinball/test-*
- config_name: atari-wizardofwor
data_files:
- split: train
path: atari-wizardofwor/train-*
- split: test
path: atari-wizardofwor/test-*
- config_name: atari-yarsrevenge
data_files:
- split: train
path: atari-yarsrevenge/train-*
- split: test
path: atari-yarsrevenge/test-*
- config_name: atari-zaxxon
data_files:
- split: train
path: atari-zaxxon/train-*
- split: test
path: atari-zaxxon/test-*
---
# Dataset Card for "gia-dataset-tokenized-2024-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
manu/project_gutenberg | manu | 2023-09-07T15:33:32Z | 5,315 | 52 | [
"task_categories:text-generation",
"language:fr",
"language:en",
"language:zh",
"language:pt",
"language:pl",
"language:nl",
"language:ru",
"language:sv",
"language:it",
"language:de",
"language:es",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation"
] | 2023-09-07T14:14:10Z | 2 | ---
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: de
num_bytes: 1070196924
num_examples: 3131
- name: en
num_bytes: 25616345280
num_examples: 61340
- name: es
num_bytes: 496728508
num_examples: 1202
- name: fr
num_bytes: 2338871137
num_examples: 5493
- name: it
num_bytes: 383733486
num_examples: 1008
- name: nl
num_bytes: 504939551
num_examples: 1420
- name: pl
num_bytes: 4864460
num_examples: 34
- name: pt
num_bytes: 204058452
num_examples: 1111
- name: ru
num_bytes: 943593
num_examples: 6
- name: sv
num_bytes: 116664385
num_examples: 388
- name: zh
num_bytes: 174238359
num_examples: 437
download_size: 14399256761
dataset_size: 30911584135
task_categories:
- text-generation
language:
- fr
- en
- zh
- pt
- pl
- nl
- ru
- sv
- it
- de
- es
pretty_name: Project Gutenberg
size_categories:
- 10K<n<100K
---
# Dataset Card for "Project Gutenberg"
Project Gutenberg is a library of over 70,000 free eBooks, hosted at https://www.gutenberg.org/.
Each example corresponds to a single book and contains a header and a footer of a few lines (delimited by *** Start of *** and *** End of *** tags).
### Usage
```python
from datasets import load_dataset
ds = load_dataset("manu/project_gutenberg", split="fr", streaming=True)
print(next(iter(ds)))
```
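Since every record keeps its Project Gutenberg header and footer, here is a rough post-processing sketch (a best-effort heuristic, assuming the `*** Start of ***` / `*** End of ***` markers appear as described above) for keeping only the book body:
```python
import re

def strip_gutenberg_boilerplate(text: str) -> str:
    """Drop the header/footer around the '*** Start of ***' / '*** End of ***' markers.

    The exact wording of the markers varies between books, so this is heuristic.
    """
    start = re.search(r"\*\*\*.*start of.*\*\*\*", text, flags=re.IGNORECASE)
    end = re.search(r"\*\*\*.*end of.*\*\*\*", text, flags=re.IGNORECASE)
    begin = start.end() if start else 0
    stop = end.start() if end else len(text)
    return text[begin:stop].strip()

book = next(iter(ds))  # `ds` from the streaming example above
print(strip_gutenberg_boilerplate(book["text"])[:500])
```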
### License
Full license is available here:
https://www.gutenberg.org/policy/license.html
#### Summary
For nearly all uses, in nearly all parts of the world, the opening words of all of our eBooks apply: “This eBook is for the use of anyone anywhere in the United States and most other parts of the world at no cost and with almost no restrictions whatsoever. You may copy it, give it away or re-use it under the terms of the Project Gutenberg License included with this eBook or online at [www.gutenberg.org]. If you are not located in the United States, you’ll have to check the laws of the country where you are located before using this ebook.”
##### Using the Project Gutenberg Trademark
If you want to use the name Project Gutenberg anywhere in the ebooks you distribute or on the distribution medium or in advertising you have to obey these rules:
- You may only distribute verbatim copies of the ebooks. No changes are allowed to the ebook contents (though reformatting the ebook to a different file format is considered okay).
- If you charge money for the copies you distribute, you have to pay royalties to Project Gutenberg.
- You must refund your clients for defective copies or if they don’t agree with the Project Gutenberg license.
If you don’t agree with any of the above-mentioned restrictions, you may not use the Project Gutenberg trademark. You may still distribute the ebooks if you strip the Project Gutenberg license and all references to Project Gutenberg. |
ukr-models/Ukr-Synth | ukr-models | 2023-08-31T09:35:43Z | 83 | 13 | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:parsing",
"task_ids:part-of-speech",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"language:uk",
"license:mit",
"size_categories:1M<n<10M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"token-classification"
] | 2022-04-06T17:13:34Z | 1 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- uk
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
task_categories:
- token-classification
task_ids:
- named-entity-recognition
- parsing
- part-of-speech
pretty_name: Ukrainian synthetic dataset in conllu format
---
# Dataset Card for Ukr-Synth
## Dataset Description
### Dataset Summary
A large silver-standard Ukrainian corpus annotated with morphology tags, syntax trees, and PER/LOC/ORG NER tags.
It represents a subsample of the [Leipzig Corpora Collection for Ukrainian Language](https://wortschatz.uni-leipzig.de/en/download/Ukrainian). The source texts are newspaper texts split into sentences and shuffled. The sentences are annotated using transformer-based models trained on gold-standard Ukrainian datasets.
### Languages
Ukrainian
## Dataset Structure
### Data Splits
| name |train |validation|
|---------|-------:|---------:|
|Ukr-Synth|1000000| 10000|
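A minimal loading sketch (assuming the default configuration loads directly with the `datasets` library; the per-sentence field names are not documented in this card, so only the generic structure is shown):
```python
from datasets import load_dataset

ds = load_dataset("ukr-models/Ukr-Synth")

print(ds)              # expected splits: train (1,000,000 sentences) and validation (10,000)
print(ds["train"][0])  # one sentence with its morphology / syntax / NER annotations
```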
## Dataset Creation
### Source Data
Leipzig Corpora Collection:
D. Goldhahn, T. Eckart & U. Quasthoff: Building Large Monolingual Dictionaries at the Leipzig Corpora Collection: From 100 to 200 Languages.
In: Proceedings of the 8th International Language Resources and Evaluation (LREC'12), 2012
## Additional Information
### Licensing Information
MIT License
Copyright (c) 2022
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. |
bstee615/bigvul | bstee615 | 2023-08-31T03:02:50Z | 654 | 9 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-08-31T02:55:32Z | 2 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: CVE ID
dtype: string
- name: CVE Page
dtype: string
- name: CWE ID
dtype: string
- name: codeLink
dtype: string
- name: commit_id
dtype: string
- name: commit_message
dtype: string
- name: func_after
dtype: string
- name: func_before
dtype: string
- name: lang
dtype: string
- name: project
dtype: string
- name: vul
dtype: int8
splits:
- name: train
num_bytes: 404950685.2579571
num_examples: 150908
- name: validation
num_bytes: 88684597.21877055
num_examples: 33049
- name: test
num_bytes: 88687280.64632414
num_examples: 33050
download_size: 252969708
dataset_size: 582322563.1230518
---
# Dataset Card for "bigvul"
Unofficial, not affiliated with the authors.
- **Paper:** https://doi.org/10.1145/3379597.3387501
- **Repository:** https://github.com/ZeoVan/MSR_20_Code_vulnerability_CSV_Dataset |
mlabonne/guanaco-llama2-1k | mlabonne | 2023-08-25T16:49:41Z | 9,145 | 157 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-07-23T15:07:50Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1654448
num_examples: 1000
download_size: 966693
dataset_size: 1654448
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Guanaco-1k: Lazy Llama 2 Formatting
This is a subset (1000 samples) of the excellent [`timdettmers/openassistant-guanaco`](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) dataset, processed to match Llama 2's prompt format as described [in this article](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). It was created using the following [colab notebook](https://colab.research.google.com/drive/1Ad7a9zMmkxuXTOh1Z7-rNSICA4dybpM2?usp=sharing).
Useful if you don't want to reformat it by yourself (e.g., using a script). It was designed for [this article](https://mlabonne.github.io/blog/posts/Fine_Tune_Your_Own_Llama_2_Model_in_a_Colab_Notebook.html) about fine-tuning a Llama 2 (chat) model in a Google Colab.
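A quick sketch (standard `datasets` API; the single `text` field is listed in the metadata above) to check the prompt formatting before fine-tuning:
```python
from datasets import load_dataset

dataset = load_dataset("mlabonne/guanaco-llama2-1k", split="train")

# Each of the 1,000 rows is one string already wrapped in Llama 2's
# <s>[INST] ... [/INST] ... </s> chat template, ready for supervised fine-tuning.
print(dataset[0]["text"][:300])
```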
|
pki/SecurityGPT | pki | 2023-08-25T13:10:29Z | 27 | 16 | [
"language:en",
"license:unknown",
"region:us"
] | [] | 2023-04-29T05:52:37Z | 1 | ---
license: unknown
language:
- en
pretty_name: SecurityGPT
---
Dataset for cybersecurity research Q&A fine-tuning.
The initial dataset incorporates results from the source below:
https://datasetsearch.research.google.com/search?src=0&query=cybersecurity&docid=L2cvMTFuX3hudnBtZw%3D%3D&filters=WyJbXCJsaWNlbnNlX2NsYXNzXCIsW1wiY29tbWVyY2lhbFwiXV0iXQ%3D%3D&property=bGljZW5zZV9jbGFzcw%3D%3D
Training will follow once a sufficient amount of data has been gathered; as of today it will probably be based on Llama / Orca with an 8k token context at 7B or 13B, to be decided later.
---
|
HuggingFaceM4/OBELICS | HuggingFaceM4 | 2023-08-22T20:50:09Z | 87,206 | 154 | [
"language:en",
"license:cc-by-4.0",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2306.16527",
"region:us"
] | [] | 2023-05-30T23:06:14Z | null | ---
language:
- en
license: cc-by-4.0
size_categories:
- 100M<n<1B
pretty_name: OBELICS
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: opt_out_docs_removed_2023_07_12
data_files:
- split: train
path: opt_out_docs_removed_2023_07_12/train-*
dataset_info:
- config_name: default
features:
- name: images
sequence: string
- name: metadata
dtype: string
- name: general_metadata
dtype: string
- name: texts
sequence: string
splits:
- name: train
num_bytes: 715724717192
num_examples: 141047697
download_size: 71520629655
dataset_size: 715724717192
- config_name: opt_out_docs_removed_2023_07_12
features:
- name: images
sequence: string
- name: metadata
dtype: string
- name: general_metadata
dtype: string
- name: texts
sequence: string
splits:
- name: train
num_bytes: 684638314215
num_examples: 134648855
download_size: 266501092920
dataset_size: 684638314215
---
# Dataset Card for OBELICS
## Dataset Description
- **Visualization of OBELICS web documents:** https://huggingface.co/spaces/HuggingFaceM4/obelics_visualization
- **Paper:** [OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents](https://arxiv.org/abs/2306.16527)
- **Repository:** https://github.com/huggingface/OBELICS
- **Point of Contact: [email protected]**
`OBELICS` is an open, massive, and curated collection of interleaved image-text web documents, containing 141M English documents, 115B text tokens, and 353M images, extracted from Common Crawl dumps between February 2020 and February 2023. The collection and filtering steps are described in our [paper](https://huggingface.co/papers/2306.16527).
Interleaved image-text web documents are a succession of text paragraphs interleaved with images, such as web pages that contain images. Models trained on these web documents outperform vision and language models trained solely on image-text pairs on various benchmarks. They can also generate long and coherent text about a set of multiple images. As an example, we trained [IDEFICS](https://huggingface.co/HuggingFaceM4/idefics-80b), a visual language model that accepts arbitrary sequences of image and text inputs and produces text outputs.
We provide an [interactive visualization](https://atlas.nomic.ai/map/f2fba2aa-3647-4f49-a0f3-9347daeee499/ee4a84bd-f125-4bcc-a683-1b4e231cb10f) of OBELICS that allows exploring the content of OBELICS. The map shows a subset of 11M of the 141M documents.
[](https://atlas.nomic.ai/map/f2fba2aa-3647-4f49-a0f3-9347daeee499/ee4a84bd-f125-4bcc-a683-1b4e231cb10f)
## Data Fields
An example of a sample looks as follows:
```
# The example has been cropped
{
'images': [
'https://cdn.motor1.com/images/mgl/oRKO0/s1/lamborghini-urus-original-carbon-fiber-accessories.jpg',
None
],
'metadata': '[{"document_url": "https://lamborghinichat.com/forum/news/vw-group-allegedly-receives-offer-to-sell-lamborghini-for-9-2-billion.728/", "unformatted_src": "https://cdn.motor1.com/images/mgl/oRKO0/s1/lamborghini-urus-original-carbon-fiber-accessories.jpg", "src": "https://cdn.motor1.com/images/mgl/oRKO0/s1/lamborghini-urus-original-carbon-fiber-accessories.jpg", "formatted_filename": "lamborghini urus original carbon fiber accessories", "alt_text": "VW Group Allegedly Receives Offer To Sell Lamborghini For $9.2 Billion", "original_width": 1920, "original_height": 1080, "format": "jpeg"}, null]',
'general_metadata': '{"url": "https://lamborghinichat.com/forum/news/vw-group-allegedly-receives-offer-to-sell-lamborghini-for-9-2-billion.728/", "warc_filename": "crawl-data/CC-MAIN-2021-25/segments/1623488528979.69/warc/CC-MAIN-20210623011557-20210623041557-00312.warc.gz", "warc_record_offset": 322560850, "warc_record_length": 17143}',
'texts': [
None,
'The buyer would get everything, including Lambo\'s headquarters.\n\nThe investment groupQuantum Group AG has submitted a€7.5 billion ($9.2 billion at current exchange rates) offer to purchase Lamborghini from Volkswagen Group, Autocar reports. There\'s no info yet about whether VW intends to accept the offer or further negotiate the deal.\n\nQuantum ... Group Chief Executive Herbert Diess said at the time.'
]
}
```
Each sample is composed of the same 4 fields: `images`, `texts`, `metadata`, and `general_metadata`. `images` and `texts` are two lists of the same size, where for each index, one element and only one is not `None`. For example, for the interleaved web document `<image_1>text<image_2>`, we would find `[image_1, None, image_2]` in `images` and `[None, text, None]` in `texts`.
The images are replaced by their URLs, and users need to download them themselves, for instance with the [img2dataset](https://github.com/rom1504/img2dataset) library.
`metadata` is the string representation of a list containing information about each of the images. It has the same length as `texts` and `images` and logs for each image relevant information such as original source document, unformatted source, alternative text if present, etc.
`general_metadata` is the string representation of a dictionary containing the URL of the document, and information regarding the extraction from Common Crawl snapshots.
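As a minimal sketch (assuming the default configuration and the streaming mode of the `datasets` library), the fields above can be consumed as follows, with `json.loads` used to decode the stringified metadata:
```python
import json
from itertools import islice

from datasets import load_dataset

# Stream OBELICS so the full dataset never has to be materialized locally.
ds = load_dataset("HuggingFaceM4/OBELICS", split="train", streaming=True)

for doc in islice(ds, 3):
    general_metadata = json.loads(doc["general_metadata"])
    image_metadata = json.loads(doc["metadata"])  # one entry per position, null for text slots
    print(general_metadata["url"])
    for img_url, text, meta in zip(doc["images"], doc["texts"], image_metadata):
        if img_url is not None:
            print("  image:", img_url, "| alt:", (meta or {}).get("alt_text"))
        else:
            print("  text :", text[:80])
```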
## Size and Data Splits
There is only one split, `train`, that contains 141,047,697 documents.
`OBELICS` with images replaced by their URLs weighs 666.6 GB (😈) in arrow format and 377 GB in the uploaded `parquet` format.
## Considerations for Using the Data
### Discussion of Biases
A ~50k-document subset of the `train` split was evaluated using the Data Measurements Tool, with a particular focus on the nPMI metric.
> nPMI scores for a word help to identify potentially problematic associations, ranked by how close the association is.
> nPMI bias scores for paired words help to identify how word associations are skewed between the selected words (Aka et al., 2021).
> You can select from gender and sexual orientation identity terms that appear in the dataset at least 10 times.
> The resulting ranked words are those that co-occur with both identity terms.
> The more positive the score, the more associated the word is with the first identity term. The more negative the score, the more associated the word is with the second identity term.
While occupation-related words such as _`government`_ and _`jobs`_ skewed positively towards she/her, and masculine and feminine words were attributed similarly to they/them, more harmful word associations such as _`escort`_ and even _`colour`_ showed stronger attributions to she/her and him/his, respectively.

We welcome users to explore the [Data Measurements nPMI Visualitons for OBELICS](https://huggingface.co/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool) further and to see the [idefics-9b model card](https://huggingface.co/HuggingFaceM4/idefics-9b) for further Bias considerations.
## Opted-out content
To respect the preferences of content creators, we removed from OBELICS all images for which creators explicitly opted out of AI model training. We used the [Spawning API](https://api.spawning.ai/spawning-api) to verify that the images in the dataset respect the original copyright owners’ choices.
However, due to an error on our side, we did not remove entire documents (i.e., URLs) that opted out of AI model training. As of July 12, 2023, it represents 4.25% of the totality of OBELICS. The config `opt_out_docs_removed_2023_07_12` applies the correct filtering at the web document level as of July 2023: `ds = load_dataset("HuggingFaceM4/OBELICS", "opt_out_docs_removed_2023_07_12")`.
We recommend that users of OBELICS regularly check every document against the API.
## Content warnings
Despite our efforts in filtering, OBELICS contains a small proportion of documents that are not suitable for all audiences. For instance, while navigating the interactive map, you might find the cluster named "Sex" which predominantly contains descriptions of pornographic movies along with pornographic images. Other clusters would contain advertising for sex workers or reports of violent shootings. In our experience, these documents represent a small proportion of all the documents.
## Terms of Use
By using the dataset, you agree to comply with the original licenses of the source content as well as the dataset license (CC-BY-4.0). Additionally, if you use this dataset to train a Machine Learning model, you agree to disclose your use of the dataset when releasing the model or an ML application using the model.
### Licensing Information
License CC-BY-4.0.
### Citation Information
If you are using this dataset, please cite
```
@misc{laurencon2023obelics,
title={OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents},
author={Hugo Laurençon and Lucile Saulnier and Léo Tronchon and Stas Bekman and Amanpreet Singh and Anton Lozhkov and Thomas Wang and Siddharth Karamcheti and Alexander M. Rush and Douwe Kiela and Matthieu Cord and Victor Sanh},
year={2023},
eprint={2306.16527},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
|
mlfoundations/datacomp_1b | mlfoundations | 2023-08-21T21:43:05Z | 23,530 | 33 | [
"license:cc-by-4.0",
"size_categories:1B<n<10B",
"format:parquet",
"modality:image",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-06-11T20:12:44Z | null | ---
license: cc-by-4.0
---
## DataComp-1B
This repository contains metadata files for DataComp-1B. For details on how to use the metadata, please visit [our website](https://www.datacomp.ai/) and our [github repository](https://github.com/mlfoundations/datacomp).
We distribute the image url-text samples and metadata under a standard Creative Common CC-BY-4.0 license. The individual images are under their own copyrights.
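As a rough sketch (assuming the parquet metadata shards can be read directly with the `datasets` library; the tooling in the GitHub repository above remains the recommended path), the url-text metadata can be inspected as follows:
```python
from itertools import islice

from datasets import load_dataset

# Stream the metadata parquet files; the images themselves must still be fetched
# separately (e.g. with img2dataset) and remain under their own copyrights.
meta = load_dataset("mlfoundations/datacomp_1b", split="train", streaming=True)
for row in islice(meta, 1):
    print(row)  # inspect the available metadata columns for one sample
```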
## Terms and Conditions
We have terms of service that are similar to those adopted by HuggingFace (https://huggingface.co/terms-of-service), which covers their dataset library. Specifically, any content you download, access or use from our index, is at your own risk and subject to the terms of service or copyright limitations accompanying such content. The image url-text index, which is a research artifact, is provided as is. By using said index, you assume all risks, including but not limited to, liabilities related to image downloading and storage. |
mlfoundations/datacomp_xlarge | mlfoundations | 2023-08-21T21:42:38Z | 328,151 | 12 | [
"license:cc-by-4.0",
"size_categories:10B<n<100B",
"format:parquet",
"modality:image",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-05-22T21:49:34Z | null | ---
license: cc-by-4.0
---
## DataComp XLarge Pool
This repository contains metadata files for the xlarge pool of DataComp. For details on how to use the metadata, please visit [our website](https://www.datacomp.ai/) and our [github repository](https://github.com/mlfoundations/datacomp).
We distribute the image url-text samples and metadata under a standard Creative Common CC-BY-4.0 license. The individual images are under their own copyrights.
## Terms and Conditions
We have terms of service that are similar to those adopted by HuggingFace (https://huggingface.co/terms-of-service), which covers their dataset library. Specifically, any content you download, access or use from our index, is at your own risk and subject to the terms of service or copyright limitations accompanying such content. The image url-text index, which is a research artifact, is provided as is. By using said index, you assume all risks, including but not limited to, liabilities related to image downloading and storage. |
danjacobellis/HQMR | danjacobellis | 2023-08-18T10:34:14Z | 10,627 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-08-17T21:05:01Z | null | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 65460837045.38
num_examples: 177180
download_size: 66435478074
dataset_size: 65460837045.38
---
# Dataset Card for "HQMR"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
duongttr/vi-dataset-for-pretrain | duongttr | 2023-08-02T09:38:30Z | 13,699 | 2 | [
"task_categories:text-generation",
"language:vi",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LM"
] | [
"text-generation"
] | 2023-08-02T08:20:06Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 77360702833
num_examples: 23891116
- name: validation
num_bytes: 4064634081
num_examples: 1257428
download_size: 2126869688
dataset_size: 81425336914
task_categories:
- text-generation
language:
- vi
size_categories:
- 10M<n<100M
tags:
- LM
---
# Dataset Card for "vi-dataset-for-pretrain"
This is a combination of multiple Vietnamese datasets for pretraining causal language models (CLMs) such as GPT, GPT-2, etc.
The dataset consists of:
- [`vietgpt/covid_19_news_vi`](https://huggingface.co/datasets/vietgpt/covid_19_news_vi)
- [`hieunguyen1053/binhvq-news-corpus`](https://huggingface.co/datasets/hieunguyen1053/binhvq-news-corpus)
- [`oscar (unshuffled_deduplicated_vi)`](https://huggingface.co/datasets/oscar)
- [`vietgpt/wikipedia_vi`](https://huggingface.co/datasets/vietgpt/wikipedia_vi)
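A minimal loading sketch (assuming streaming access through the `datasets` library; each record exposes a single `text` field, as listed in the metadata above):
```python
from itertools import islice

from datasets import load_dataset

# Stream the ~81 GB corpus instead of downloading it in full.
ds = load_dataset("duongttr/vi-dataset-for-pretrain", split="train", streaming=True)
for sample in islice(ds, 3):
    print(sample["text"][:200])
```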
# Dataset info
| Splits | N.o examples | Size |
| --- | --- | --- |
| Train | 23,891,116 | 77.36 GB |
| Validation | 1,257,428 | 4.06 GB |
| **Total** | **25,148,544** | **81.43 GB** | |
mikex86/stackoverflow-posts | mikex86 | 2023-08-01T01:31:12Z | 6,151 | 53 | [
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:code",
"language:en",
"license:other",
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"code"
] | [
"question-answering",
"text-generation",
"text2text-generation"
] | 2023-06-14T18:48:00Z | 3 | ---
license: other
language:
- code
- en
task_categories:
- question-answering
- text-generation
- text2text-generation
tags:
- code
viewer: true
pretty_name: StackOverflow Posts Markdown
size_categories:
- 10M<n<100M
---
# StackOverflow Posts Markdown

## Dataset Summary
This dataset contains all posts submitted to StackOverflow before the 14th of June 2023 formatted as **Markdown text**.<br>
The dataset contains ~60 Million posts, totaling ~35GB in size and ~65 billion characters of text.<br>
The data is sourced from [Internet Archive StackExchange Data Dump](https://archive.org/download/stackexchange).
## Dataset Structure
Each record corresponds to one post of a particular type.
Original ordering from the data dump is not exactly preserved due to parallelism in the script used to process the data dump.
The markdown content of each post is contained in the `Body` field. The license for a particular post is contained in the `ContentLicense` field.
### Data Fields
```typescript
{
Id: long,
PostTypeId: long, // 1=Question, 2=Answer, 3=Orphaned tag wiki, 4=Tag wiki excerpt, 5=Tag wiki, 6=Moderator nomination, 7=Wiki Placeholder, 8=Privilige Wiki
AcceptedAnswerId: long | null, // only present if PostTypeId=1
ParentId: long | null, // only present if PostTypeId=2
Score: long,
ViewCount: long | null,
Body: string | null,
Title: string | null,
ContentLicense: string | null,
FavoriteCount: long | null,
CreationDate: string | null,
LastActivityDate: string | null,
LastEditDate: string | null,
LastEditorUserId: long | null,
OwnerUserId: long | null,
Tags: array<string> | null
}
```
Also consider the [StackExchange Datadump Schema Documentation](https://meta.stackexchange.com/questions/2677/database-schema-documentation-for-the-public-data-dump-and-sede), as all fields
have analogs in the original dump format.
## How to use?
```python
from datasets import load_dataset
# predownload full dataset
ds = load_dataset('mikex86/stackoverflow-posts', split='train')
# dataset streaming (will only download the data as needed)
ds = load_dataset('mikex86/stackoverflow-posts', split='train', streaming=True)
for sample in iter(ds): print(sample["Body"])
```
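Because questions and answers share one schema, streaming can also be combined with a simple filter on `PostTypeId` — a small sketch using the fields documented above:
```python
from itertools import islice

from datasets import load_dataset

ds = load_dataset('mikex86/stackoverflow-posts', split='train', streaming=True)

# Keep only questions (PostTypeId == 1) and print their scores and titles.
questions = (post for post in ds if post['PostTypeId'] == 1)
for post in islice(questions, 5):
    print(post['Score'], post['Title'])
```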
## How is the text stored?
The original Data Dump formats the "Body" field as HTML, using tags such as `<code>`, `<h1>`, `<ul>`, etc.
This HTML format has been converted to Markdown.
### Markdown format
For reference, [this post on StackOverflow](https://stackoverflow.com/questions/53253940/make-react-useeffect-hook-not-run-on-initial-render) is formatted as follows:
#### Title: Make React useEffect hook not run on initial render
```markdown
According to the docs:
> `componentDidUpdate()` is invoked immediately after updating occurs. This method is not called for the initial render.
We can use the new `useEffect()` hook to simulate `componentDidUpdate()`, but it seems like `useEffect()` is being ran after every render, even the first time. How do I get it to not run on initial render?
As you can see in the example below, `componentDidUpdateFunction` is printed during the initial render but `componentDidUpdateClass` was not printed during the initial render.
```
function ComponentDidUpdateFunction() {
const [count, setCount] = React.useState(0);
React.useEffect(() => {
console.log(""componentDidUpdateFunction"");
});
return (
<div>
<p>componentDidUpdateFunction: {count} times</p>
<button
onClick={() => {
setCount(count + 1);
}}
>
Click Me
</button>
</div>
);
}
```
rest of the post omitted for brevity
```
## Details on the HTML to Markdown conversion
Using Jsoup, the original Body field was converted into a Jsoup Document. The child **nodes** (a term with special meaning in the context of Jsoup) of this document were recursively traversed in depth-first order.
Jsoup defines `.text()` as follows:
> ... the normalized, combined text of this element and all its children. Whitespace is normalized and trimmed. For example, given HTML <code><p>Hello <b>there</b> now! </p></code>, p.text() returns "Hello there now!"
Jsoup defines a `Node` as follows:
> The base, abstract Node model. Elements, Documents, Comments etc are all Node instances.
Additionally the existence of the `TextNode` should be noted, which represents floating text inside an HTML document that is not itself an HTML element.
Thus this text tag `<p>Hello<code>World</code></p>` would have two Jsoup child nodes `TextNode(value="Hello")` and `Element(tag="code", value="World")`.
The `value` field of a `TextNode` contains the free-standing text without any further treatment (no whitespace stripping, etc.).
### Traversing Rules
- When encountering an HTML tag for which a rule exists, children are not traversed further, **unless explicitly stated otherwise**.
- When encountering an `<a>` tag, `[${element.text()}](${element.attr("href")})` is emitted.
- When encountering an `<h1>` tag, `\n# ${element.text()}\n\n` is emitted.
- When encountering an `<h2>` tag, `\n## ${element.text()}\n\n` is emitted.
- When encountering an `<h3>` tag, `\n### ${element.text()}\n\n` is emitted.
- When encountering an `<h4>` tag, `\n#### ${element.text()}\n\n` is emitted.
- When encountering an `<h5>` tag, `\n##### ${element.text()}\n\n` is emitted.
- When encountering an `<h6>` tag, `\n###### ${element.text()}\n\n` is emitted.
- When encountering a `<code>` tag, `` `${element.text()}` `` is emitted.
- When encountering a `<pre>` tag and said element **has** a `<code>` child tag, `` ```\n${element.text()}\n```\n `` is emitted.
- When encountering a `<pre>` tag and said element **does not** have a `<code>` child tag, **children are traversed further**.
- When encountering an `<li>` tag, `- ` is emitted and **children are traversed further**.
- When encountering a `<blockquote>` tag, `> ` is emitted and **children are traversed further**.
- When encountering an `<hr>` tag, `\n---\n\n` is emitted
- When encountering an `<img>` tag, `})` is emitted.
- When encountering a `<table>` tag
- `\n| ` is emitted
- For each element of `element.select("th")`
- `${element.text()} | ` is emitted
- After the loop `\n| ` is emitted
- For each element of `element.select("th")`
- For each character of the `th.text()`
- `-` is emitted
- After the loop over each character of th ` | ` is emitted
- `\n` is emitted
- For each element of `element.select("tr")` with more than one children of tag type `td`
- `| ` is emitted
- For each element of `element.select("td")`
- `${td.text()} | ` is emitted
- After the loop over `<td>` elements, `\n` is emitted
- After the loop over `<tr>` elements, `\n` is emitted
- When encountering a jsoup `TextNode`, `${node.attr(node.nodeName())}` (which is equivalent to accessing the private field `node.value`) is emitted. |
C-MTEB/BQ | C-MTEB | 2023-07-28T13:52:50Z | 14,363 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-07-28T13:52:31Z | null | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: int32
splits:
- name: train
num_bytes: 8156338
num_examples: 100000
- name: validation
num_bytes: 812244
num_examples: 10000
- name: test
num_bytes: 815362
num_examples: 10000
download_size: 5588828
dataset_size: 9783944
---
# Dataset Card for "BQ"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
C-MTEB/OnlineShopping-classification | C-MTEB | 2023-07-28T13:15:20Z | 14,057 | 4 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-07-28T13:15:09Z | null | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: cat
dtype: string
- name: label
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1535074.0115334373
num_examples: 8000
- name: test
num_bytes: 191884.25144167966
num_examples: 1000
download_size: 1139002
dataset_size: 1726958.262975117
---
# Dataset Card for "OnlineShopping-classification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
davanstrien/MAMe2 | davanstrien | 2023-07-27T09:27:06Z | 48,900 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-07-26T11:20:15Z | null | ---
dataset_info:
config_name: '256'
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Albumen photograph
'1': Bronze
'2': Ceramic
'3': Clay
'4': Engraving
'5': Etching
'6': Faience
'7': Glass
'8': Gold
'9': Graphite
'10': Hand-colored engraving
'11': Hand-colored etching
'12': Iron
'13': Ivory
'14': Limestone
'15': Lithograph
'16': Marble
'17': Oil on canvas
'18': Pen and brown ink
'19': Polychromed wood
'20': Porcelain
'21': Silk and metal thread
'22': Silver
'23': Steel
'24': Wood
'25': Wood engraving
'26': Woodblock
'27': Woodcut
'28': Woven fabric
- name: Museum
dtype: string
- name: Museum-based instance ID
dtype: string
- name: Width
dtype: float32
- name: Height
dtype: float32
- name: Product size
dtype: float32
- name: Aspect ratio
dtype: float32
splits:
- name: train
num_bytes: 441294458.5
num_examples: 20300
- name: validation
num_bytes: 26810584.95
num_examples: 1450
- name: test
num_bytes: 362018531.291
num_examples: 15657
download_size: 723376699
dataset_size: 830123574.7409999
configs:
- config_name: '256'
data_files:
- split: train
path: 256/train-*
- split: validation
path: 256/validation-*
- split: test
path: 256/test-*
---
# Dataset Card for "MAMe2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rdpahalavan/CIC-IDS2017 | rdpahalavan | 2023-07-22T21:42:04Z | 6,573 | 2 | [
"task_categories:text-classification",
"task_categories:tabular-classification",
"license:apache-2.0",
"size_categories:100M<n<1B",
"region:us",
"Network Intrusion Detection",
"Cybersecurity",
"Network Packets",
"CIC-IDS2017"
] | [
"text-classification",
"tabular-classification"
] | 2023-07-08T07:25:54Z | 1 | ---
license: apache-2.0
task_categories:
- text-classification
- tabular-classification
size_categories:
- 100M<n<1B
tags:
- Network Intrusion Detection
- Cybersecurity
- Network Packets
- CIC-IDS2017
---
We have developed a Python package as a wrapper around Hugging Face Hub and Hugging Face Datasets library to access this dataset easily.
# NIDS Datasets
The `nids-datasets` package provides functionality to download and utilize specially curated and extracted datasets from the original UNSW-NB15 and CIC-IDS2017 datasets. These datasets, which initially were only flow datasets, have been enhanced to include packet-level information from the raw PCAP files. The dataset contains both packet-level and flow-level data for over 230 million packets, with 179 million packets from UNSW-NB15 and 54 million packets from CIC-IDS2017.
## Installation
Install the `nids-datasets` package using pip:
```shell
pip install nids-datasets
```
Import the package in your Python script:
```python
from nids_datasets import Dataset, DatasetInfo
```
## Dataset Information
The `nids-datasets` package currently supports two datasets: [UNSW-NB15](https://research.unsw.edu.au/projects/unsw-nb15-dataset) and [CIC-IDS2017](https://www.unb.ca/cic/datasets/ids-2017.html). Each of these datasets contains a mix of normal traffic and different types of attack traffic, which are identified by their respective labels. The UNSW-NB15 dataset has 10 unique class labels, and the CIC-IDS2017 dataset has 24 unique class labels.
- UNSW-NB15 Labels: 'normal', 'exploits', 'dos', 'fuzzers', 'generic', 'reconnaissance', 'worms', 'shellcode', 'backdoor', 'analysis'
- CIC-IDS2017 Labels: 'BENIGN', 'FTP-Patator', 'SSH-Patator', 'DoS slowloris', 'DoS Slowhttptest', 'DoS Hulk', 'Heartbleed', 'Web Attack – Brute Force', 'Web Attack – XSS', 'Web Attack – SQL Injection', 'Infiltration', 'Bot', 'PortScan', 'DDoS', 'normal', 'exploits', 'dos', 'fuzzers', 'generic', 'reconnaissance', 'worms', 'shellcode', 'backdoor', 'analysis', 'DoS GoldenEye'
## Subsets of the Dataset
Each dataset consists of four subsets:
1. Network-Flows - Contains flow-level data.
2. Packet-Fields - Contains packet header information.
3. Packet-Bytes - Contains packet byte information in the range (0-255).
4. Payload-Bytes - Contains payload byte information in the range (0-255).
Each subset contains 18 files (except Network-Flows, which has one file), where the data is stored in parquet format. In total, this package provides access to 110 files. You can choose to download all subsets or select specific subsets or specific files depending on your analysis requirements.
## Getting Information on the Datasets
The `DatasetInfo` function provides a summary of the dataset in a pandas dataframe format. It displays the number of packets for each class label across all 18 files in the dataset. This overview can guide you in selecting specific files for download and analysis.
```python
df = DatasetInfo(dataset='UNSW-NB15') # or dataset='CIC-IDS2017'
df
```
## Downloading the Datasets
The `Dataset` class allows you to specify the dataset, subset, and files that you are interested in. The specified data will then be downloaded.
```python
dataset = 'UNSW-NB15' # or 'CIC-IDS2017'
subset = ['Network-Flows', 'Packet-Fields', 'Payload-Bytes'] # or 'all' for all subsets
files = [3, 5, 10] # or 'all' for all files
data = Dataset(dataset=dataset, subset=subset, files=files)
data.download()
```
The directory structure after downloading files:
```
UNSW-NB15
│
├───Network-Flows
│ └───UNSW_Flow.parquet
│
├───Packet-Fields
│ ├───Packet_Fields_File_3.parquet
│ ├───Packet_Fields_File_5.parquet
│ └───Packet_Fields_File_10.parquet
│
└───Payload-Bytes
├───Payload_Bytes_File_3.parquet
├───Payload_Bytes_File_5.parquet
└───Payload_Bytes_File_10.parquet
```
You can then load the parquet files using pandas:
```python
import pandas as pd
df = pd.read_parquet('UNSW-NB15/Packet-Fields/Packet_Fields_File_10.parquet')
```
## Merging Subsets
The `merge()` method allows you to merge all data of each packet across all subsets, providing both flow-level and packet-level information in a single file.
```python
data.merge()
```
The merge method, by default, uses the details specified while instantiating the `Dataset` class. You can also pass subset=list of subsets and files=list of files you want to merge.
The directory structure after merging files:
```
UNSW-NB15
│
├───Network-Flows
│ └───UNSW_Flow.parquet
│
├───Packet-Fields
│ ├───Packet_Fields_File_3.parquet
│ ├───Packet_Fields_File_5.parquet
│ └───Packet_Fields_File_10.parquet
│
├───Payload-Bytes
│ ├───Payload_Bytes_File_3.parquet
│ ├───Payload_Bytes_File_5.parquet
│ └───Payload_Bytes_File_10.parquet
│
└───Network-Flows+Packet-Fields+Payload-Bytes
├───Network_Flows+Packet_Fields+Payload_Bytes_File_3.parquet
├───Network_Flows+Packet_Fields+Payload_Bytes_File_5.parquet
└───Network_Flows+Packet_Fields+Payload_Bytes_File_10.parquet
```
## Extracting Bytes
The Packet-Bytes and Payload-Bytes subsets contain the first 1500-1600 bytes. To retrieve all bytes (up to 65535 bytes) from the Packet-Bytes and Payload-Bytes subsets, use the `bytes()` method. This function requires files in the Packet-Fields subset to operate. You can specify how many bytes you want to extract by passing the `max_bytes` parameter.
```python
data.bytes(payload=True, max_bytes=2500)
```
Use packet=True to extract packet bytes. You can also pass files=list of files to retrieve bytes.
The directory structure after extracting bytes:
```
UNSW-NB15
│
├───Network-Flows
│ └───UNSW_Flow.parquet
│
├───Packet-Fields
│ ├───Packet_Fields_File_3.parquet
│ ├───Packet_Fields_File_5.parquet
│ └───Packet_Fields_File_10.parquet
│
├───Payload-Bytes
│ ├───Payload_Bytes_File_3.parquet
│ ├───Payload_Bytes_File_5.parquet
│ └───Payload_Bytes_File_10.parquet
│
├───Network-Flows+Packet-Fields+Payload-Bytes
│ ├───Network_Flows+Packet_Fields+Payload_Bytes_File_3.parquet
│ ├───Network_Flows+Packet_Fields+Payload_Bytes_File_5.parquet
│ └───Network_Flows+Packet_Fields+Payload_Bytes_File_10.parquet
│
└───Payload-Bytes-2500
├───Payload_Bytes_File_3.parquet
├───Payload_Bytes_File_5.parquet
└───Payload_Bytes_File_10.parquet
```
## Reading the Datasets
The `read()` method allows you to read files using Hugging Face's `load_dataset` method, one subset at a time. The dataset and files parameters are optional if the same details are used to instantiate the `Dataset` class.
```python
dataset = data.read(dataset='UNSW-NB15', subset='Packet-Fields', files=[1,2])
```
The `read()` method returns a dataset that you can convert to a pandas dataframe or save to a CSV, parquet, or any other desired file format:
```python
df = dataset.to_pandas()
dataset.to_csv('file_path_to_save.csv')
dataset.to_parquet('file_path_to_save.parquet')
```
For scenarios where you want to process one packet at a time, you can use the `stream=True` parameter:
```python
dataset = data.read(dataset='UNSW-NB15', subset='Packet-Fields', files=[1,2], stream=True)
print(next(iter(dataset)))
```
## Notes
The size of these datasets is large, and depending on the subset(s) selected and the number of bytes extracted, the operations can be resource-intensive. Therefore, it's recommended to ensure you have sufficient disk space and RAM when using this package. |
lavita/medical-qa-shared-task-v1-toy | lavita | 2023-07-20T00:29:06Z | 906,796 | 18 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-07-20T00:28:51Z | null | ---
dataset_info:
features:
- name: id
dtype: int64
- name: ending0
dtype: string
- name: ending1
dtype: string
- name: ending2
dtype: string
- name: ending3
dtype: string
- name: ending4
dtype: string
- name: label
dtype: int64
- name: sent1
dtype: string
- name: sent2
dtype: string
- name: startphrase
dtype: string
splits:
- name: train
num_bytes: 52480.01886421694
num_examples: 32
- name: dev
num_bytes: 52490.64150943396
num_examples: 32
download_size: 89680
dataset_size: 104970.6603736509
---
# Dataset Card for "medical-qa-shared-task-v1-toy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
osunlp/Mind2Web | osunlp | 2023-07-19T03:44:34Z | 616 | 102 | [
"language:en",
"license:cc-by-4.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"arxiv:2306.06070",
"region:us",
"Web Agent"
] | [] | 2023-06-10T02:38:11Z | null | ---
license: cc-by-4.0
language:
- en
tags:
- Web Agent
size_categories:
- 1K<n<10K
---
# Dataset Card for Mind2Web
## Dataset Description
- **Homepage:** https://osu-nlp-group.github.io/Mind2Web/
- **Repository:** https://github.com/OSU-NLP-Group/Mind2Web
- **Paper:** https://arxiv.org/abs/2306.06070
- **Point of Contact:** [Xiang Deng](mailto:[email protected])
### Dataset Summary
Mind2Web is a dataset for developing and evaluating generalist agents for the web that can follow language instructions to complete complex tasks on any website. Existing datasets for web agents either use simulated websites or only cover a limited set of websites and tasks, thus not suitable for generalist web agents. With over 2,000 open-ended tasks collected from 137 websites spanning 31 domains and crowdsourced action sequences for the tasks, Mind2Web provides three necessary ingredients for building generalist web agents: 1. diverse domains, websites, and tasks, 2. use of real-world websites instead of simulated and simplified ones, and 3. a broad spectrum of user interaction patterns.
## Dataset Structure
### Data Fields
- "annotation_id" (str): unique id for each task
- "website" (str): website name
- "domain" (str): website domain
- "subdomain" (str): website subdomain
- "confirmed_task" (str): task description
- "action_reprs" (list[str]): human readable string representation of the action sequence
- "actions" (list[dict]): list of actions (steps) to complete the task
- "action_uid" (str): unique id for each action (step)
- "raw_html" (str): raw html of the page before the action is performed
- "cleaned_html" (str): cleaned html of the page before the action is performed
- "operation" (dict): operation to perform
- "op" (str): operation type, one of CLICK, TYPE, SELECT
- "original_op" (str): original operation type, contain additional HOVER and ENTER that are mapped to CLICK, not used
- "value" (str): optional value for the operation, e.g., text to type, option to select
- "pos_candidates" (list[dict]): ground truth elements. Here we only include positive elements that exist in "cleaned_html" after our preprocessing, so "pos_candidates" might be empty. The original labeled element can always be found in the "raw_html".
- "tag" (str): tag of the element
- "is_original_target" (bool): whether the element is the original target labeled by the annotator
- "is_top_level_target" (bool): whether the element is a top level target find by our algorithm. please see the paper for more details.
- "backend_node_id" (str): unique id for the element
- "attributes" (str): serialized attributes of the element, use `json.loads` to convert back to dict
- "neg_candidates" (list[dict]): other candidate elements in the page after preprocessing, has similar structure as "pos_candidates"
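A small consumption sketch (assuming the `train` split loads directly with the `datasets` library; `json.loads` recovers the serialized `attributes`, as noted above):
```python
import json

from datasets import load_dataset

ds = load_dataset("osunlp/Mind2Web", split="train")

task = ds[0]
print(task["confirmed_task"], "on", task["website"])
for step_repr, action in zip(task["action_reprs"], task["actions"]):
    op = action["operation"]
    print("-", step_repr, "|", op["op"], op["value"])
    for cand in action["pos_candidates"]:  # may be empty after preprocessing
        attrs = json.loads(cand["attributes"])  # serialized attributes -> dict
        print("  ground-truth element:", cand["tag"], "with", len(attrs), "attributes")
```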
### Data Splits
- train: 1,009 instances
- test: (To prevent potential data leakage, please check our [repo](https://github.com/OSU-NLP-Group/Mind2Web) for information on obtaining the test set.)
- Cross Task: 252 instances, tasks from the same website are seen during training
- Cross Website: 177 instances, websites are not seen during training
- Cross Domain: 912 instances, entire domains are not seen during training
### Licensing Information
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
### Disclaimer
This dataset was collected and released solely for research purposes, with the goal of making the web more accessible via language technologies. The authors are strongly against any potential harmful use of the data or technology to any party.
### Citation Information
```
@misc{deng2023mind2web,
title={Mind2Web: Towards a Generalist Agent for the Web},
author={Xiang Deng and Yu Gu and Boyuan Zheng and Shijie Chen and Samuel Stevens and Boshi Wang and Huan Sun and Yu Su},
year={2023},
eprint={2306.06070},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
danasone/librusec | danasone | 2023-07-13T08:59:22Z | 13,549 | 1 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-07-13T06:53:59Z | null | ---
dataset_info:
features:
- name: id
dtype: uint64
- name: text
dtype: string
splits:
- name: train
num_bytes: 119853827612
num_examples: 212795
download_size: 31530091183
dataset_size: 119853827612
---
# Dataset Card for "librusec"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ai-habitat/hab_stretch | ai-habitat | 2023-07-12T01:05:30Z | 50 | 1 | [
"license:other",
"region:us"
] | [] | 2023-06-16T00:01:15Z | 1 | ---
license: other
pretty_name: Habitat Stretch Robot
viewer: false
---

# Hello Robot Stretch
Simulation model (URDF) of Hello Robot Stretch for use in [habitat-sim](https://github.com/facebookresearch/habitat-sim).
## License Information
See LICENSE.txt for more details.
```
Original "urdf/hab_stretch.urdf" and all assets referenced there-in are provided courtesy of Hello Robot, all rights reserved.
All other assets represent derivative work of said authors.
Written permission has been acquired for redistribution of these assets with attribution.
``` |
cerebras/SlimPajama-627B | cerebras | 2023-07-07T23:13:12Z | 33,679 | 461 | [
"task_categories:text-generation",
"language:en",
"arxiv:2306.01116",
"arxiv:2302.13971",
"region:us"
] | [
"text-generation"
] | 2023-06-07T18:45:02Z | null | ---
task_categories:
- text-generation
language:
- en
pretty_name: SlimPajama-627B
---
## Dataset Description
- **Homepage:** [SlimPajama Blog](https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama)
- **Repository:** [Pre-Processing Libraries](https://github.com/Cerebras/modelzoo/tree/main/modelzoo/transformers/data_processing/slimpajama)
- **Size of compressed dataset:** 895 GB
The dataset consists of 59166 jsonl files and is ~895GB compressed. It is a cleaned and deduplicated version of [Together's RedPajama](https://github.com/togethercomputer/redpajama-data).
Check out our [blog post](https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama) explaining our methods, [our code on GitHub](https://github.com/Cerebras/modelzoo/tree/main/modelzoo/transformers/data_processing/slimpajama), and join the discussion on the [Cerebras Discord](https://discord.gg/q6bZcMWJVu).
## Getting Started
You can download the dataset using Hugging Face datasets:
```python
from datasets import load_dataset
ds = load_dataset("cerebras/SlimPajama-627B")
```
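Given the ~895 GB of compressed data, streaming is often more practical — a minimal sketch, assuming the record layout shown in the Dataset Structure section below:
```python
from itertools import islice

from datasets import load_dataset

ds = load_dataset("cerebras/SlimPajama-627B", split="train", streaming=True)
for sample in islice(ds, 3):
    print(sample["meta"]["redpajama_set_name"], "|", sample["text"][:100])
```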
## Background
Today we are releasing SlimPajama – the largest extensively deduplicated, multi-corpora, open-source dataset for training large language models. SlimPajama was created by cleaning and deduplicating the 1.2T token RedPajama dataset from Together. By filtering out low quality data and duplicates, we were able to remove 49.6% of bytes, slimming down the dataset from 1210B to 627B tokens. We believe SlimPajama offers the highest quality and most compute efficient data to train on for runs up to 627B tokens. When upsampled, we expect SlimPajama to perform equal to or better than RedPajama-1T when training at trillion token scale.
In addition to the data, we are also releasing the tools we built to create SlimPajama. Applying [MinHashLSH](http://infolab.stanford.edu/~ullman/mmds/book0n.pdf) deduplication to trillion token datasets like RedPajama was not possible with off-the-shelf open-source code. We made several improvements to existing solutions to produce an infrastructure that can perform MinHashLSH deduplication on trillion token datasets in a distributed, multi-threaded, and memory efficient fashion. Today we are open-sourcing this infrastructure to enable the community to easily create higher quality, extensively deduplicated datasets in the future.
### Our contributions
1. SlimPajama 627B – the largest extensively deduplicated, multi-corpora, open dataset for LLM training. We release it under the Apache 2.0 license.
2. Releasing validation and test sets, 500M tokens each, which have been decontaminated against the training data.
3. Library of methods to replicate or pre-process from scratch other datasets. To the best of our knowledge these are the first open-source tools to enable cleaning and MinHashLSH deduplication of text data at trillion token scale.
The full set of scripts to recreate the dataset from the original RedPajama dataset are available on the [Cerebras GitHub](https://github.com/Cerebras/modelzoo/tree/main/modelzoo/transformers/data_processing/slimpajama). A deeper explanation of our cleaning and deduplication process can be found in the [SlimPajama blog post](https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama).
## Dataset Summary
The [latest research](https://arxiv.org/abs/2306.01116) has shown that data quality is as important as data quantity. While training on more than one data epoch can be beneficial, this should be a choice rather than a side-effect of duplicates in the dataset. We decided to extensively deduplicate RedPajama to produce a dataset with higher information density. This means when using SlimPajama, you can achieve higher accuracy with the same compute budget when compared to other datasets.
#### Comparison of dataset features
| Data source | Tokens | Open Source | Curated Data Sources | Deduplication Level |
| --------------- | ------- | ----------- | -------------------- | ------------------- |
| SlimPajama | **627B**| **Yes** | **Yes** | **Extensive** |
| RedPajama | 1.21T | **Yes** | **Yes** | Partial |
| RefinedWeb-600B | 600B | **Yes** | No | **Extensive** |
| RefinedWeb-5T | **5T** | No | No | **Extensive** |
| LLaMA | 1.4T | No | **Yes** | Partial |
| MPT | 1T | No | **Yes** | Partial |
| MassiveText | 1.4T | No | **Yes** | **Extensive** |
#### Document low-length filter rates
| Data source | Document low-length filter rate |
| ------------- | ------------------------------- |
| Commoncrawl | 0.02% |
| C4 | 4.70% |
| GitHub | 0.00% |
| Books | 0.00% |
| ArXiv | 0.62% |
| Wikipedia | 0.00% |
| StackExchange | 0.32% |
| Total | 1.86% |
#### Data source byte deduplication rates
| Data source | Byte deduplication rate |
| ------------- | ---------------------- |
| Commoncrawl | 63.76% |
| C4 | 6.85% |
| GitHub | 46.16% |
| Books | 2.01% |
| ArXiv | 0.06% |
| Wikipedia | 2.24% |
| StackExchange | 0.20% |
| Total | 49.60% |
#### Data source proportions for SlimPajama and RedPajama
| Data source | SlimPajama | RedPajama |
| ------------- | ---------- | --------- |
| Commoncrawl | 52.2% | 72.6% |
| C4 | 26.7% | 14.4% |
| GitHub | 5.2% | 4.9% |
| Books | 4.2% | 2.1% |
| ArXiv | 4.6% | 2.3% |
| Wikipedia | 3.8% | 2.0% |
| StackExchange | 3.3% | 1.7% |
### Languages
Primarily English, with some non-English files in Wikipedia.
### Dataset Structure
The dataset consists of jsonl files, with structure as follows:
```json
{
"text": ...,
"meta": {"redpajama_set_name": "RedPajamaCommonCrawl" | "RedPajamaC4" | "RedPajamaGithub" | "RedPajamaBook" | "RedPajamaArXiv" | "RedPajamaWikipedia" | "RedPajamaStackExchange"},
}
```
### Dataset Creation
SlimPajama was created by cleaning and deduplicating the [RedPajama dataset from Together](https://github.com/togethercomputer/redpajama-data) via MinHashLSH. RedPajama is an open-source reproduction of the [LLaMA](https://arxiv.org/abs/2302.13971) data collection methodology.
### Source Data
The data sources composing RedPajama are explained in [its model card](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T).
To cite SlimPajama, please use:
```
@misc{cerebras2023slimpajama,
author = {Soboleva, Daria and Al-Khateeb, Faisal and Myers, Robert and Steeves, Jacob R and Hestness, Joel and Dey, Nolan},
title = {{SlimPajama: A 627B token cleaned and deduplicated version of RedPajama}},
month = June,
year = 2023,
howpublished = {\url{https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama}},
url = {https://huggingface.co/datasets/cerebras/SlimPajama-627B},
}
```
## License
Please refer to the licenses of the data subsets you use.
- [Common Crawl Foundation Terms of Use](https://commoncrawl.org/terms-of-use/full/)
- [C4 license](https://huggingface.co/datasets/allenai/c4#license)
- GitHub was limited to MIT, BSD, or Apache licenses only
- Books: [the_pile_books3 license](https://huggingface.co/datasets/the_pile_books3#licensing-information) and [pg19 license](https://huggingface.co/datasets/pg19#licensing-information)
- [ArXiv Terms of Use](https://info.arxiv.org/help/api/tou.html)
- [Wikipedia License](https://huggingface.co/datasets/wikipedia#licensing-information)
- [StackExchange license on the Internet Archive](https://archive.org/details/stackexchange)
## Acknowledgements
- We’d like to thank Together, Ontocord.ai, ETH DS3Lab, and AAI CERC Lab for creating the original RedPajama dataset and releasing it open source.
- This release was made possible with the support and collaboration of Opentensor.
- Easy cloud access to Cerebras systems is provided by our partner Cirrascale. |
liuhaotian/LLaVA-CC3M-Pretrain-595K | liuhaotian | 2023-07-06T08:51:35Z | 789 | 148 | [
"language:en",
"license:other",
"modality:image",
"region:us"
] | [] | 2023-04-20T14:28:12Z | null | ---
license: other
language:
- en
pretty_name: LLaVA CC3M Pretrain 595K
---
# LLaVA Visual Instruct CC3M 595K Pretrain Dataset Card
## Dataset details
**Dataset type:**
LLaVA Visual Instruct CC3M Pretrain 595K is a subset of CC-3M dataset, filtered with a more balanced concept coverage distribution.
Captions are also associated with [BLIP synthetic caption](https://github.com/salesforce/BLIP#pre-training-datasets-download) for reference.
It is constructed for the pretraining stage for feature alignment in visual instruction tuning.
We aim to build large multimodal models towards GPT-4 vision/language capability.
**Dataset date:**
LLaVA Visual Instruct CC3M Pretrain 595K was created in April 2023.
**Dataset structure:**
- `chat.json` contains the multimodal synthesized conversation from the image-caption pairs, by adding randomly selected instructions like: "Describe this image". It is used for pretraining in LLaVA. We use the raw CC-3M caption as the default answer.
- `metadata.json` contains the meta data of the image index in CC-3M, image file name, image URL, original CC-3M caption, synthetic BLIP caption. Note that ~10% of the samples are not associated with BLIP caption yet in this release.
- `images.zip` contains all raw images of the filtered subset from CC-3M. **Important notice: Upon the request from the community, as ~15% images of the original CC-3M dataset are no longer accessible, we upload `images.zip` for better reproducing our work in research community. It should not be used for any other purpose. The use of these images must comply with the CC-3M license. This may be taken down when requested by the original CC-3M dataset owner or owners of the referenced images.**
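For reference, individual files can be fetched without cloning the whole repository — a sketch assuming the file names above sit at the root of this dataset repo:
```python
import json

from huggingface_hub import hf_hub_download

chat_path = hf_hub_download(
    repo_id="liuhaotian/LLaVA-CC3M-Pretrain-595K",
    repo_type="dataset",
    filename="chat.json",
)
with open(chat_path) as f:
    chat = json.load(f)  # expected: a list of image/conversation records
print(len(chat), "records")
```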
**Paper or resources for more information:**
https://llava-vl.github.io/
**License:**
Must comply with license of [CC-3M](https://github.com/google-research-datasets/conceptual-captions/blob/master/LICENSE), [BLIP](https://github.com/salesforce/BLIP/blob/main/LICENSE.txt) (if you use their synthetic caption).
CC-3M
The dataset may be freely used for any purpose, although acknowledgement of
Google LLC ("Google") as the data source would be appreciated. The dataset is
provided "AS IS" without any warranty, express or implied. Google disclaims all
liability for any damages, direct or indirect, resulting from the use of the
dataset.
**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence. |
zzliang/GRIT | zzliang | 2023-07-04T06:40:28Z | 396 | 146 | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"task_categories:object-detection",
"task_categories:zero-shot-classification",
"task_ids:image-captioning",
"task_ids:visual-question-answering",
"multilinguality:monolingual",
"source_datasets:COYO-700M",
"language:en",
"license:ms-pl",
"size_categories:10M<n<100M",
"format:parquet",
"modality:image",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2306.14824",
"region:us",
"image-text-bounding-box pairs",
"image-text pairs"
] | [
"text-to-image",
"image-to-text",
"object-detection",
"zero-shot-classification"
] | 2023-07-04T03:33:28Z | null | ---
license: ms-pl
language:
- en
multilinguality:
- monolingual
pretty_name: GRIT
size_categories:
- 100M<n<1B
source_datasets:
- COYO-700M
tags:
- image-text-bounding-box pairs
- image-text pairs
task_categories:
- text-to-image
- image-to-text
- object-detection
- zero-shot-classification
task_ids:
- image-captioning
- visual-question-answering
---
# GRIT: Large-Scale Training Corpus of Grounded Image-Text Pairs
### Dataset Description
- **Repository:** [Microsoft unilm](https://github.com/microsoft/unilm/tree/master/kosmos-2)
- **Paper:** [Kosmos-2](https://arxiv.org/abs/2306.14824)
### Dataset Summary
We introduce GRIT, a large-scale dataset of Grounded Image-Text pairs, which is created based on image-text pairs from [COYO-700M](https://github.com/kakaobrain/coyo-dataset) and LAION-2B. We construct a pipeline to extract and link text spans (i.e., noun phrases, and referring expressions) in the caption to their corresponding image regions. More details can be found in the [paper](https://arxiv.org/abs/2306.14824).
### Supported Tasks
During construction, we excluded image-caption pairs for which no bounding boxes were retained. This procedure resulted in a high-quality image-caption subset of COYO-700M, which we will validate in the future.
Furthermore, this dataset contains text-span-bounding-box pairs. Thus, it can be used in many location-aware mono/multimodal tasks, such as phrase grounding, referring expression comprehension, referring expression generation, and open-world object detection.
### Data Instance
One instance is
```python
{
'key': '000373938',
'clip_similarity_vitb32': 0.353271484375,
'clip_similarity_vitl14': 0.2958984375,
'id': 1795296605919,
'url': "https://www.thestrapsaver.com/wp-content/uploads/customerservice-1.jpg",
'caption': 'a wire hanger with a paper cover that reads we heart our customers',
'width': 1024,
'height': 693,
'noun_chunks': [[19, 32, 0.019644069503434333, 0.31054004033406574, 0.9622142865754519, 0.9603442351023356, 0.79298526], [0, 13, 0.019422357885505368, 0.027634161214033764, 0.9593302408854166, 0.969467560450236, 0.67520964]],
'ref_exps': [[19, 66, 0.019644069503434333, 0.31054004033406574, 0.9622142865754519, 0.9603442351023356, 0.79298526], [0, 66, 0.019422357885505368, 0.027634161214033764, 0.9593302408854166, 0.969467560450236, 0.67520964]]
}
```
- `key`: The generated file name when using img2dataset to download COYO-700M (omit it).
- `clip_similarity_vitb32`: The cosine similarity between text and image(ViT-B/32) embeddings by [OpenAI CLIP](https://github.com/openai/CLIP), provided by COYO-700M.
- `clip_similarity_vitl14`: The cosine similarity between text and image(ViT-L/14) embeddings by [OpenAI CLIP](https://github.com/openai/CLIP), provided by COYO-700M.
- `id`: Unique 64-bit integer ID in COYO-700M.
- `url`: The image URL.
- `caption`: The corresponding caption.
- `width`: The width of the image.
- `height`: The height of the image.
- `noun_chunks`: The noun chunks (extracted by [spaCy](https://spacy.io/)) that have associated bounding boxes (predicted by [GLIP](https://github.com/microsoft/GLIP)). The items in the children list respectively represent 'Start of the noun chunk in caption', 'End of the noun chunk in caption', 'normalized x_min', 'normalized y_min', 'normalized x_max', 'normalized y_max', 'confidence score'.
- `ref_exps`: The corresponding referring expressions. If a noun chunk has no expansion, we just copy it.
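Since the box coordinates are normalized, converting a sample's noun chunks back to pixel boxes only requires the stored `width` and `height` — a small sketch over the instance shown above:
```python
def noun_chunks_to_pixel_boxes(sample):
    """Convert normalized noun-chunk boxes of one GRIT sample to pixel coordinates."""
    w, h = sample["width"], sample["height"]
    boxes = []
    for start, end, x_min, y_min, x_max, y_max, score in sample["noun_chunks"]:
        phrase = sample["caption"][int(start):int(end)]
        boxes.append((phrase, (x_min * w, y_min * h, x_max * w, y_max * h), score))
    return boxes

# Example with (abbreviated) values from the instance above:
sample = {
    "caption": "a wire hanger with a paper cover that reads we heart our customers",
    "width": 1024, "height": 693,
    "noun_chunks": [[19, 32, 0.0196, 0.3105, 0.9622, 0.9603, 0.793]],
}
print(noun_chunks_to_pixel_boxes(sample))  # [('a paper cover', (...), 0.793)]
```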
### Download image
We recommend to use [img2dataset](https://github.com/rom1504/img2dataset) tool to download the images.
1. Download the metadata. You can download it by cloning current repository:
```bash
git lfs install
git clone https://huggingface.co/datasets/zzliang/GRIT
```
2. Install [img2dataset](https://github.com/rom1504/img2dataset).
```bash
pip install img2dataset
```
3. Download images
You need to replace `/path/to/GRIT_dataset/grit-20m` with the local path to this repository.
```bash
img2dataset --url_list /path/to/GRIT_dataset/grit-20m --input_format "parquet"\
--url_col "url" --caption_col "caption" --output_format webdataset \
--output_folder /tmp/grit --processes_count 4 --thread_count 64 --image_size 256 \
--resize_only_if_bigger=True --resize_mode="keep_ratio" --skip_reencode=True \
--save_additional_columns '["id","noun_chunks","ref_exps","clip_similarity_vitb32","clip_similarity_vitl14"]' \
--enable_wandb False
```
You can adjust some parameters according to your actual needs (e.g., `processes_count`, `thread_count`, `image_size`, `save_additional_columns`).
More img2dataset hyper-parameters can be found in [here](https://github.com/rom1504/img2dataset#api).
### Citation Information
If you apply this dataset to any project and research, please cite our paper and coyo-700m:
```
@article{Kosmos2,
title={Kosmos-2: Grounding Multimodal Large Language Models to the World},
author={Zhiliang Peng and Wenhui Wang and Li Dong and Yaru Hao and Shaohan Huang and Shuming Ma and Furu Wei},
journal={ArXiv},
year={2023},
volume={abs/2306.14824}
}
@misc{kakaobrain2022coyo-700m,
title = {COYO-700M: Image-Text Pair Dataset},
author = {Minwoo Byeon, Beomhee Park, Haecheon Kim, Sungjun Lee, Woonhyuk Baek, Saehoon Kim},
year = {2022},
howpublished = {\url{https://github.com/kakaobrain/coyo-dataset}},
}
``` |
ai4privacy/pii-masking-43k | ai4privacy | 2023-06-28T17:45:58Z | 69 | 18 | [
"language:en",
"size_categories:10K<n<100K",
"doi:10.57967/hf/0824",
"region:us",
"legal",
"business",
"psychology",
"privacy"
] | [] | 2023-06-28T16:44:41Z | 1 | ---
language:
- en
tags:
- legal
- business
- psychology
- privacy
size_categories:
- 10K<n<100K
---
# Purpose and Features
The purpose of the model and dataset is to remove personally identifiable information (PII) from text, especially in the context of AI assistants and LLMs.
The model is a fine-tuned version of DistilBERT, a smaller and faster version of BERT. It was adapted for the task of token classification based on the largest open-source PII masking dataset known to us, which we are releasing simultaneously. The model has 62 million parameters. The original encoding of the parameters yields a model size of 268 MB, which is compressed to 43 MB after parameter quantization. The models are available in PyTorch, TensorFlow, and TensorFlow.js.
The dataset is composed of ~43’000 observations. Each row starts with a natural language sentence that includes placeholders for PII and could plausibly be written to an AI assistant. The placeholders are then filled in with mocked personal information and tokenized with the BERT tokenizer. We label the tokens that correspond to PII, serving as the ground truth to train our model.
The dataset covers a range of contexts in which PII can appear. The sentences span 54 sensitive data types (~111 token classes), targeting 125 discussion subjects / use cases split across business, psychology and legal fields, and 5 interaction styles (e.g. casual conversation vs formal document).
Key facts:
- Currently 5.6m tokens with 43k PII examples.
- Scaling to 100k examples
- Human-in-the-loop validated
- Synthetic data generated using proprietary algorithms
- Adapted from DistilBertForTokenClassification
- Framework PyTorch
- 8 bit quantization
# Performance evaluation
| Test Precision | Test Recall | Test Accuracy |
|:-:|:-:|:-:|
| 0.998636 | 0.998945 | 0.994621 |
Training/Test Set split:
- 4300 Testing Examples (10%)
- 38700 Train Examples
# Community Engagement:
Newsletter & updates: www.Ai4privacy.com
- Looking for ML engineers, developers, beta-testers, human in the loop validators (all languages)
- Integrations with already existing open source solutions
# Roadmap and Future Development
- Multilingual
- Extended integrations
- Continuously increase the training set
- Further optimisation to the model to reduce size and increase generalisability
- Next released major update is planned for the 14th of July (subscribe to newsletter for updates)
# Use Cases and Applications
**Chatbots**: Incorporating a PII masking model into chatbot systems can ensure the privacy and security of user conversations by automatically redacting sensitive information such as names, addresses, phone numbers, and email addresses.
**Customer Support Systems**: When interacting with customers through support tickets or live chats, masking PII can help protect sensitive customer data, enabling support agents to handle inquiries without the risk of exposing personal information.
**Email Filtering**: Email providers can utilize a PII masking model to automatically detect and redact PII from incoming and outgoing emails, reducing the chances of accidental disclosure of sensitive information.
**Data Anonymization**: Organizations dealing with large datasets containing PII, such as medical or financial records, can leverage a PII masking model to anonymize the data before sharing it for research, analysis, or collaboration purposes.
**Social Media Platforms**: Integrating PII masking capabilities into social media platforms can help users protect their personal information from unauthorized access, ensuring a safer online environment.
**Content Moderation**: PII masking can assist content moderation systems in automatically detecting and blurring or redacting sensitive information in user-generated content, preventing the accidental sharing of personal details.
**Online Forms**: Web applications that collect user data through online forms, such as registration forms or surveys, can employ a PII masking model to anonymize or mask the collected information in real-time, enhancing privacy and data protection.
**Collaborative Document Editing**: Collaboration platforms and document editing tools can use a PII masking model to automatically mask or redact sensitive information when multiple users are working on shared documents.
**Research and Data Sharing**: Researchers and institutions can leverage a PII masking model to ensure privacy and confidentiality when sharing datasets for collaboration, analysis, or publication purposes, reducing the risk of data breaches or identity theft.
**Content Generation**: Content generation systems, such as article generators or language models, can benefit from PII masking to automatically mask or generate fictional PII when creating sample texts or examples, safeguarding the privacy of individuals.
(...and whatever else your creative mind can think of)
# Support and Maintenance
AI4Privacy is a project affiliated with [AISuisse SA](https://www.aisuisse.com/). |
jxie/flickr8k | jxie | 2023-06-25T22:25:03Z | 785 | 1 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-06-25T19:09:16Z | 1 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption_0
dtype: string
- name: caption_1
dtype: string
- name: caption_2
dtype: string
- name: caption_3
dtype: string
- name: caption_4
dtype: string
splits:
- name: train
num_bytes: 826721431.0
num_examples: 6000
- name: validation
num_bytes: 138017615.0
num_examples: 1000
- name: test
num_bytes: 136871307.0
num_examples: 1000
download_size: 274629589
dataset_size: 1101610353.0
---
# Dataset Card for "flickr8k"
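As a quick orientation, the splits and fields declared in the metadata above can be loaded with the `datasets` library; a minimal sketch:
```
from datasets import load_dataset

# Load the captioning data; splits are train / validation / test.
ds = load_dataset("jxie/flickr8k")

sample = ds["train"][0]
print(sample["image"].size)                        # the image is decoded as a PIL image
print([sample[f"caption_{i}"] for i in range(5)])  # the five reference captions
```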
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
camel-ai/math | camel-ai | 2023-06-22T21:59:52Z | 254 | 108 | [
"task_categories:text-generation",
"language:en",
"license:cc-by-nc-4.0",
"size_categories:10K<n<100K",
"modality:text",
"arxiv:2303.17760",
"region:us",
"instruction-finetuning"
] | [
"text-generation"
] | 2023-04-10T22:00:46Z | null | ---
license: cc-by-nc-4.0
language:
- en
tags:
- instruction-finetuning
pretty_name: CAMEL Math
task_categories:
- text-generation
arxiv: 2303.17760
extra_gated_prompt: "By using this data, you acknowledge and agree to utilize it solely for research purposes, recognizing that the dataset may contain inaccuracies due to its artificial generation through ChatGPT."
extra_gated_fields:
Name: text
Email: text
I will adhere to the terms and conditions of this dataset: checkbox
---
# **CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society**
- **Github:** https://github.com/lightaime/camel
- **Website:** https://www.camel-ai.org/
- **Arxiv Paper:** https://arxiv.org/abs/2303.17760
## Dataset Summary
The Math dataset is composed of 50K problem-solution pairs obtained using GPT-4. The problem-solution pairs are generated from 25 math topics, with 25 subtopics for each topic and 80 problems for each topic-subtopic pair.
We provide the data in `math50k.zip`.
## Data Fields
**The data fields for files in `math50k.zip` are as follows:**
* `role_1`: assistant role
* `topic`: math topic
* `sub_topic`: math subtopic belonging to topic
* `message_1`: refers to the problem the assistant is asked to solve.
* `message_2`: refers to the solution provided by the assistant.
Note: File naming refers to {`topic_index`}\_{`subtopic_index`}\_{`problem_number`}.
**Download in python**
```
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="camel-ai/math", repo_type="dataset", filename="math50k.zip",
local_dir="datasets/", local_dir_use_symlinks=False)
```
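Once downloaded, the archive can be extracted and individual problem files inspected. A minimal sketch, assuming the archive unpacks into per-problem JSON files carrying the fields listed above (the exact file names and layout inside `math50k.zip` may differ):
```
import json
import zipfile

# Extract the archive downloaded by the snippet above.
with zipfile.ZipFile("datasets/math50k.zip") as archive:
    archive.extractall("datasets/math50k")

# Read one problem-solution pair; the file name and JSON layout are assumed for illustration.
with open("datasets/math50k/1_1_1.json") as f:  # {topic_index}_{subtopic_index}_{problem_number}
    example = json.load(f)

print(example["topic"], "/", example["sub_topic"])
print("Problem:", example["message_1"])
print("Solution:", example["message_2"])
```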
### Citation
```
@misc{li2023camel,
title={CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society},
author={Guohao Li and Hasan Abed Al Kader Hammoud and Hani Itani and Dmitrii Khizbullin and Bernard Ghanem},
year={2023},
eprint={2303.17760},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
## Disclaimer:
This data was synthetically generated by GPT-4 and might contain incorrect information. The dataset is intended only for research purposes.
|
tasksource/logiqa-2.0-nli | tasksource | 2023-06-22T14:06:42Z | 51 | 3 | [
"task_ids:natural-language-inference",
"language:en",
"license:cc",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2007.08124",
"region:us"
] | [] | 2023-04-24T15:05:37Z | 1 | ---
license: cc
language:
- en
task_ids:
- natural-language-inference
---
https://github.com/csitfun/LogiQA2.0
Temporary citation:
```
@article{liu2020logiqa,
title={Logiqa: A challenge dataset for machine reading comprehension with logical reasoning},
author={Liu, Jian and Cui, Leyang and Liu, Hanmeng and Huang, Dandan and Wang, Yile and Zhang, Yue},
journal={arXiv preprint arXiv:2007.08124},
year={2020}
}
``` |
jxie/camelyon17 | jxie | 2023-06-22T09:10:17Z | 16,681 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-06-20T19:19:23Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: id_train
num_bytes: 1028118482.46
num_examples: 302436
- name: id_val
num_bytes: 114778024.28
num_examples: 33560
- name: unlabeled_train
num_bytes: 2167898085.29
num_examples: 600030
- name: ood_val
num_bytes: 129021135.128
num_examples: 34904
- name: ood_test
num_bytes: 276517018.354
num_examples: 85054
download_size: 2858780601
dataset_size: 3716332745.5119996
---
# Dataset Card for "camelyon17"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tasksource/crowdflower | tasksource | 2023-06-21T12:50:08Z | 100 | 1 | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"language:en",
"size_categories:10K<n<100K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- crowdsourced
license: []
multilinguality:
- monolingual
pretty_name: ethics
size_categories:
- unknown
source_datasets: []
tags: []
task_categories:
- text-classification
task_ids:
- sentiment-classification
- fact-checking
---
```
@inproceedings{van2012designing,
title={Designing a scalable crowdsourcing platform},
author={Van Pelt, Chris and Sorokin, Alex},
booktitle={Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data},
pages={765--766},
year={2012}
}
``` |
DecisionOptimizationSystem/ForecastingDataStockDaily | DecisionOptimizationSystem | 2023-06-20T08:58:41Z | 229 | 2 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-06-20T08:40:28Z | 1 | ---
dataset_info:
features:
- name: date
dtype: string
- name: target
dtype: float64
- name: context_id
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1359456927
num_examples: 23863396
download_size: 394118870
dataset_size: 1359456927
---
# Dataset Card for "ForecastingDataStockDaily"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
RussianNLP/russian_super_glue | RussianNLP | 2023-06-19T12:23:49Z | 457 | 33 | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:text-generation",
"task_ids:natural-language-inference",
"task_ids:multi-class-classification",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:ru",
"license:mit",
"size_categories:100K<n<1M",
"arxiv:2202.07791",
"region:us",
"glue",
"qa",
"superGLUE",
"NLI",
"reasoning"
] | [
"text-classification",
"question-answering",
"zero-shot-classification",
"text-generation"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- ru
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 1M<n<10M
- 10M<n<100M
- 100M<n<1B
source_datasets:
- original
task_categories:
- text-classification
- question-answering
- zero-shot-classification
- text-generation
task_ids:
- natural-language-inference
- multi-class-classification
pretty_name: Russian SuperGLUE
language_bcp47:
- ru-RU
dataset_info:
- config_name: lidirus
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: knowledge
dtype: string
- name: lexical-semantics
dtype: string
- name: logic
dtype: string
- name: predicate-argument-structure
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': entailment
'1': not_entailment
splits:
- name: test
num_bytes: 470306
num_examples: 1104
download_size: 47118
dataset_size: 470306
- config_name: rcb
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: verb
dtype: string
- name: negation
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': entailment
'1': contradiction
'2': neutral
splits:
- name: train
num_bytes: 199712
num_examples: 438
- name: validation
num_bytes: 97993
num_examples: 220
- name: test
num_bytes: 207031
num_examples: 438
download_size: 136700
dataset_size: 504736
- config_name: parus
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': choice1
'1': choice2
splits:
- name: train
num_bytes: 74467
num_examples: 400
- name: validation
num_bytes: 19397
num_examples: 100
- name: test
num_bytes: 93192
num_examples: 500
download_size: 57585
dataset_size: 187056
- config_name: muserc
features:
- name: paragraph
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: idx
struct:
- name: paragraph
dtype: int32
- name: question
dtype: int32
- name: answer
dtype: int32
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 31651155
num_examples: 11950
- name: validation
num_bytes: 5964157
num_examples: 2235
- name: test
num_bytes: 19850930
num_examples: 7614
download_size: 1196720
dataset_size: 57466242
- config_name: terra
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': entailment
'1': not_entailment
splits:
- name: train
num_bytes: 1409243
num_examples: 2616
- name: validation
num_bytes: 161485
num_examples: 307
- name: test
num_bytes: 1713499
num_examples: 3198
download_size: 907346
dataset_size: 3284227
- config_name: russe
features:
- name: word
dtype: string
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: start1
dtype: int32
- name: start2
dtype: int32
- name: end1
dtype: int32
- name: end2
dtype: int32
- name: gold_sense1
dtype: int32
- name: gold_sense2
dtype: int32
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 6913280
num_examples: 19845
- name: validation
num_bytes: 2957491
num_examples: 8505
- name: test
num_bytes: 10046000
num_examples: 18892
download_size: 3806009
dataset_size: 19916771
- config_name: rwsd
features:
- name: text
dtype: string
- name: span1_index
dtype: int32
- name: span2_index
dtype: int32
- name: span1_text
dtype: string
- name: span2_text
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 132274
num_examples: 606
- name: validation
num_bytes: 87959
num_examples: 204
- name: test
num_bytes: 59051
num_examples: 154
download_size: 40508
dataset_size: 279284
- config_name: danetqa
features:
- name: question
dtype: string
- name: passage
dtype: string
- name: idx
dtype: int32
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 2474006
num_examples: 1749
- name: validation
num_bytes: 1076455
num_examples: 821
- name: test
num_bytes: 1023062
num_examples: 805
download_size: 1293761
dataset_size: 4573523
- config_name: rucos
features:
- name: passage
dtype: string
- name: query
dtype: string
- name: entities
sequence: string
- name: answers
sequence: string
- name: idx
struct:
- name: passage
dtype: int32
- name: query
dtype: int32
splits:
- name: train
num_bytes: 160095378
num_examples: 72193
- name: validation
num_bytes: 16980563
num_examples: 7577
- name: test
num_bytes: 15535209
num_examples: 7257
download_size: 56208297
dataset_size: 192611150
tags:
- glue
- qa
- superGLUE
- NLI
- reasoning
---
# Dataset Card for [Russian SuperGLUE]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://russiansuperglue.com/
- **Repository:** https://github.com/RussianNLP/RussianSuperGLUE
- **Paper:** https://russiansuperglue.com/download/main_article
- **Leaderboard:** https://russiansuperglue.com/leaderboard/2
- **Point of Contact:** [More Information Needed]
### Dataset Summary
Modern universal language models and transformers such as BERT, ELMo, XLNet, RoBERTa and others need to be properly
compared and evaluated. In the last year, new models and methods for pretraining and transfer learning have driven
striking performance improvements across a range of language understanding tasks.
We offer a testing methodology based on tasks typically proposed for “strong AI”: logic, common sense and reasoning.
Adhering to the GLUE and SuperGLUE methodology, we present a set of test tasks for general language understanding
together with a leaderboard of models.
For the first time, a complete benchmark of this kind has been developed for Russian, analogous to its English counterpart.
Many of the datasets were composed for the first time, and a leaderboard of models for the Russian language with comparable
results is also presented.
### Supported Tasks and Leaderboards
Supported tasks, barring a few additions, are equivalent to the original SuperGLUE tasks.
|Task Name|Equiv. to|
|----|---:|
|Linguistic Diagnostic for Russian|Broadcoverage Diagnostics (AX-b)|
|Russian Commitment Bank (RCB)|CommitmentBank (CB)|
|Choice of Plausible Alternatives for Russian language (PARus)|Choice of Plausible Alternatives (COPA)|
|Russian Multi-Sentence Reading Comprehension (MuSeRC)|Multi-Sentence Reading Comprehension (MultiRC)|
|Textual Entailment Recognition for Russian (TERRa)|Recognizing Textual Entailment (RTE)|
|Russian Words in Context (based on RUSSE)|Words in Context (WiC)|
|The Winograd Schema Challenge (Russian)|The Winograd Schema Challenge (WSC)|
|Yes/no Question Answering Dataset for the Russian (DaNetQA)|BoolQ|
|Russian Reading Comprehension with Commonsense Reasoning (RuCoS)|Reading Comprehension with Commonsense Reasoning (ReCoRD)|
### Languages
All tasks are in Russian.
## Dataset Structure
### Data Instances
Note that there are no labels in the `test` splits. This is signified by the `-1` value.
#### LiDiRus
- **Size of downloaded dataset files:** 0.05 MB
- **Size of the generated dataset:** 0.49 MB
- **Total amount of disk used:** 0.54 MB
An example of 'test' looks as follows
```
{
"sentence1": "Новая игровая консоль доступна по цене.",
"sentence2": "Новая игровая консоль недоступна по цене.",
"knowledge": "",
"lexical-semantics": "Morphological negation",
"logic": "Negation",
"predicate-argument-structure": "",
"idx": 10,
"label": 1
}
```
#### RCB
- **Size of downloaded dataset files:** 0.14 MB
- **Size of the generated dataset:** 0.53 MB
- **Total amount of disk used:** 0.67 MB
An example of 'train'/'dev' looks as follows
```
{
"premise": "— Пойдём пообедаем. Я с утра ничего не ел. Отель, как видишь, весьма посредственный, но мне сказали,
что в здешнем ресторане отлично готовят.",
"hypothesis": "В здешнем ресторане отлично готовят.",
"verb": "сказать",
"negation": "no_negation",
"idx": 10,
"label": 2
}
```
An example of 'test' looks as follows
```
{
"premise": "Я уверен, что вместе мы победим. Да, парламентское большинство думает иначе.",
"hypothesis": "Вместе мы проиграем.",
"verb": "думать",
"negation": "no_negation",
"idx": 10,
"label": -1
}
```
#### PARus
- **Size of downloaded dataset files:** 0.06 MB
- **Size of the generated dataset:** 0.20 MB
- **Total amount of disk used:** 0.245 MB
An example of 'train'/'dev' looks as follows
```
{
"premise": "Женщина чинила кран.",
"choice1": "Кран подтекал.",
"choice2": "Кран был выключен.",
"question": "cause",
"idx": 10,
"label": 0
}
```
An example of 'test' looks as follows
```
{
"premise": "Ребятам было страшно.",
"choice1": "Их вожатый рассказал им историю про призрака.",
"choice2": "Они жарили маршмеллоу на костре.",
"question": "cause",
"idx": 10,
"label": -1
}
```
#### MuSeRC
- **Size of downloaded dataset files:** 1.26 MB
- **Size of the generated dataset:** 59.77 MB
- **Total amount of disk used:** 61.87 MB
An example of 'train'/'dev' looks as follows
```
{
"paragraph": "(1) Но люди не могут существовать без природы, поэтому в парке стояли железобетонные скамейки —
деревянные моментально ломали. (2) В парке бегали ребятишки, водилась шпана, которая развлекалась игрой в карты,
пьянкой, драками, «иногда насмерть». (3) «Имали они тут и девок...» (4) Верховодил шпаной Артемка-мыло, с
вспененной белой головой. (5) Людочка сколько ни пыталась усмирить лохмотья на буйной голове Артемки, ничего у
неё не получалось. (6) Его «кудри, издали напоминавшие мыльную пену, изблизя оказались что липкие рожки из
вокзальной столовой — сварили их, бросили комком в пустую тарелку, так они, слипшиеся, неподъёмно и лежали.
(7) Да и не ради причёски приходил парень к Людочке. (8) Как только её руки становились занятыми ножницами
и расчёской, Артемка начинал хватать её за разные места. (9) Людочка сначала увёртывалась от хватких рук Артемки,
а когда не помогло, стукнула его машинкой по голове и пробила до крови, пришлось лить йод на голову «ухажористого
человека». (10) Артемка заулюлюкал и со свистом стал ловить воздух. (11) С тех пор «домогания свои хулиганские
прекратил», более того, шпане повелел Людочку не трогать.",
"question": "Как развлекались в парке ребята?",
"answer": "Развлекались игрой в карты, пьянкой, драками, снимали они тут и девок.",
"idx":
{
"paragraph": 0,
"question": 2,
"answer": 10
},
"label": 1
}
```
An example of 'test' looks as follows
```
{
"paragraph": "\"(1) Издательство Viking Press совместно с компанией TradeMobile выпустят мобильное приложение,
посвященное Анне Франк, передает The Daily Telegraph. (2) Программа будет включать в себя фрагменты из дневника
Анны, озвученные британской актрисой Хеленой Бонэм Картер. (3) Помимо этого, в приложение войдут фотографии
и видеозаписи, документы из архива Фонда Анны Франк, план здания в Амстердаме, где Анна с семьей скрывались от
нацистов, и факсимильные копии страниц дневника. (4) Приложение, которое получит название Anne Frank App, выйдет
18 октября. (5) Интерфейс программы будет англоязычным. (6) На каких платформах будет доступно Anne Frank App,
не уточняется. Анна Франк родилась в Германии в 1929 году. (7) Когда в стране начались гонения на евреев, Анна с
семьей перебрались в Нидерланды. (8) С 1942 года члены семьи Франк и еще несколько человек скрывались от нацистов
в потайных комнатах дома в Амстердаме, который занимала компания отца Анны. (9) В 1944 году группу по доносу
обнаружили гестаповцы. (10) Обитатели \"Убежища\" (так Анна называла дом в дневнике) были отправлены в концлагеря;
выжить удалось только отцу девочки Отто Франку. (11) Находясь в \"Убежище\", Анна вела дневник, в котором описывала
свою жизнь и жизнь своих близких. (12) После ареста книгу с записями сохранила подруга семьи Франк и впоследствии
передала ее отцу Анны. (13) Дневник был впервые опубликован в 1947 году. (14) Сейчас он переведен более
чем на 60 языков.\"",
"question": "Какая информация войдет в новой мобильное приложение?",
"answer": "Видеозаписи Анны Франк.",
"idx":
{
"paragraph": 0,
"question": 2,
"answer": 10
},
"label": -1
}
```
#### TERRa
- **Size of downloaded dataset files:** 0.93 MB
- **Size of the generated dataset:** 3.44 MB
- **Total amount of disk used:** 4.39 MB
An example of 'train'/'dev' looks as follows
```
{
"premise": "Музей, расположенный в Королевских воротах, меняет экспозицию. На смену выставке, рассказывающей об
истории ворот и их реставрации, придет «Аптека трех королей». Как рассказали в музее, посетители попадут в
традиционный интерьер аптеки.",
"hypothesis": "Музей закроется навсегда.",
"idx": 10,
"label": 1
}
```
An example of 'test' looks as follows
```
{
"premise": "Маршрутка полыхала несколько минут. Свидетели утверждают, что приезду пожарных салон «Газели» выгорел полностью. К счастью, пассажиров внутри не было, а водитель успел выскочить из кабины.",
"hypothesis": "Маршрутка выгорела.",
"idx": 10,
"label": -1
}
```
#### RUSSE
- **Size of downloaded dataset files:** 3.88 MB
- **Size of the generated dataset:** 20.97 MB
- **Total amount of disk used:** 25.17 MB
An example of 'train'/'dev' looks as follows
```
{
"word": "дух",
"sentence1": "Завертелась в доме веселая коловерть: праздничный стол, праздничный дух, шумные разговоры",
"sentence2": "Вижу: духи собралися / Средь белеющих равнин. // Бесконечны, безобразны, / В мутной месяца игре / Закружились бесы разны, / Будто листья в ноябре",
"start1": 68,
"start2": 6,
"end1": 72,
"end2": 11,
"gold_sense1": 3,
"gold_sense2": 4,
"idx": 10,
"label": 0
}
```
An example of 'test' looks as follows
```
{
"word": "доска",
"sentence1": "На 40-й день после трагедии в переходе была установлена мемориальная доска, надпись на которой гласит: «В память о погибших и пострадавших от террористического акта 8 августа 2000 года».",
"sentence2": "Фото с 36-летним миллиардером привлекло сеть его необычной фигурой при стойке на доске и кремом на лице.",
"start1": 69,
"start2": 81,
"end1": 73,
"end2": 85,
"gold_sense1": -1,
"gold_sense2": -1,
"idx": 10,
"label": -1
}
```
#### RWSD
- **Size of downloaded dataset files:** 0.04 MB
- **Size of the generated dataset:** 0.29 MB
- **Total amount of disk used:** 0.320 MB
An example of 'train'/'dev' looks as follows
```
{
"text": "Женя поблагодарила Сашу за помощь, которую она оказала.",
"span1_index": 0,
"span2_index": 6,
"span1_text": "Женя",
"span2_text": "она оказала",
"idx": 10,
"label": 0
}
```
An example of 'test' looks as follows
```
{
"text": "Мод и Дора видели, как через прерию несутся поезда, из двигателей тянулись клубы черного дыма. Ревущие
звуки их моторов и дикие, яростные свистки можно было услышать издалека. Лошади убежали, когда они увидели
приближающийся поезд.",
"span1_index": 22,
"span2_index": 30,
"span1_text": "свистки",
"span2_text": "они увидели",
"idx": 10,
"label": -1
}
```
#### DaNetQA
- **Size of downloaded dataset files:** 1.36 MB
- **Size of the generated dataset:** 4.82 MB
- **Total amount of disk used:** 5.9 MB
An example of 'train'/'dev' looks as follows
```
{
"question": "Вреден ли алкоголь на первых неделях беременности?",
"passage": "А Бакингем-Хоуз и её коллеги суммировали последствия, найденные в обзорных статьях ранее. Частые случаи
задержки роста плода, результатом чего является укороченный средний срок беременности и сниженный вес при рождении.
По сравнению с нормальными детьми, дети 3-4-недельного возраста демонстрируют «менее оптимальную» двигательную
активность, рефлексы, и ориентацию в пространстве, а дети 4-6 лет показывают низкий уровень работы
нейроповеденческих функций, внимания, эмоциональной экспрессии, и развития речи и языка. Величина этих влияний
часто небольшая, частично в связи с независимыми переменными: включая употребление во время беременности
алкоголя/табака, а также факторы среды . У детей школьного возраста проблемы с устойчивым вниманием и контролем
своего поведения, а также незначительные с ростом, познавательными и языковыми способностями.",
"idx": 10,
"label": 1
}
```
An example of 'test' looks as follows
```
{
"question": "Вредна ли жесткая вода?",
"passage": "Различают временную жёсткость, обусловленную гидрокарбонатами кальция и магния Са2; Mg2, и постоянную
жёсткость, вызванную присутствием других солей, не выделяющихся при кипячении воды: в основном, сульфатов и
хлоридов Са и Mg. Жёсткая вода при умывании сушит кожу, в ней плохо образуется пена при использовании мыла.
Использование жёсткой воды вызывает появление осадка на стенках котлов, в трубах и т. п. В то же время,
использование слишком мягкой воды может приводить к коррозии труб, так как, в этом случае отсутствует
кислотно-щелочная буферность, которую обеспечивает гидрокарбонатная жёсткость. Потребление жёсткой или мягкой
воды обычно не является опасным для здоровья, однако есть данные о том, что высокая жёсткость способствует
образованию мочевых камней, а низкая — незначительно увеличивает риск сердечно-сосудистых заболеваний. Вкус
природной питьевой воды, например, воды родников, обусловлен именно присутствием солей жёсткости.",
"idx": 100,
"label": -1
}
```
#### RuCoS
- **Size of downloaded dataset files:** 56.62 MB
- **Size of the generated dataset:** 202.38 MB
- **Total amount of disk used:** 261.10 MB
An example of 'train'/'dev' looks as follows
```
{
"passage": "В Абхазии 24 августа на досрочных выборах выбирают нового президента. Кто бы ни стал победителем,
возможности его будут ограничены, говорят эксперты, опрошенные DW. В Абхазии 24 августа проходят досрочные выборы
президента не признанной международным сообществом республики. Толчком к их проведению стали массовые протесты в
конце мая 2014 года, в результате которых со своего поста был вынужден уйти действующий президент Абхазии Александр
Анкваб. Эксперты называют среди наиболее перспективных кандидатов находящегося в оппозиции политика Рауля Хаджимбу,
экс-главу службы безопасности Аслана Бжанию и генерала Мираба Кишмарию, исполняющего обязанности министра обороны.
У кого больше шансов\n\"Ставки делаются на победу Хаджимбы.\n@highlight\nВ Швеции задержаны двое граждан РФ в связи
с нападением на чеченского блогера\n@highlight\nТуризм в эпоху коронавируса: куда поехать? И ехать ли
вообще?\n@highlight\nКомментарий: Россия накануне эпидемии - виноватые назначены заранее",
"query": "Несмотря на то, что Кремль вложил много денег как в @placeholder, так и в Южную Осетию, об экономическом
восстановлении данных регионов говорить не приходится, считает Хальбах: \"Многие по-прежнему живут в
полуразрушенных домах и временных жилищах\".",
"entities":
[
"DW.",
"Абхазии ",
"Александр Анкваб.",
"Аслана Бжанию ",
"Мираба Кишмарию,",
"РФ ",
"Рауля Хаджимбу,",
"Россия ",
"Хаджимбы.",
"Швеции "
],
"answers":
[
"Абхазии"
],
"idx":
{
"passage": 500,
"query": 500
}
}
```
An example of 'test' looks as follows
```
{
"passage": "Почему и как изменится курс белорусского рубля? Какие инструменты следует предпочесть населению, чтобы
сохранить сбережения, DW рассказали финансовые аналитики Беларуси. На последних валютных торгах БВФБ 2015 года в
среду, 30 декабря, курс белорусского рубля к доллару - 18569, к евро - 20300, к российскому рублю - 255. В 2016
году белорусскому рублю пророчат падение как минимум на 12 процентов к корзине валют, к которой привязан его курс.
А чтобы избежать потерь, белорусам советуют диверсифицировать инвестиционные портфели. Чем обусловлены прогнозные
изменения котировок белорусского рубля, и какие финансовые инструменты стоит предпочесть, чтобы минимизировать риск
потерь?\n@highlight\nВ Германии за сутки выявлено более 100 новых заражений коронавирусом\n@highlight\nРыночные цены
на нефть рухнули из-за провала переговоров ОПЕК+\n@highlight\nВ Италии за сутки произошел резкий скачок смертей от
COVID-19",
"query": "Последнее, убежден аналитик, инструмент для узкого круга профессиональных инвесторов, культуры следить за
финансовым состоянием предприятий - такой, чтобы играть на рынке корпоративных облигаций, - в @placeholder пока нет.",
"entities":
[
"DW ",
"Беларуси.",
"Германии ",
"Италии ",
"ОПЕК+"
],
"answers": [],
"idx":
{
"passage": 500,
"query": 500
}
}
```
### Data Fields
#### LiDiRus
- `idx`: an `int32` feature
- `label`: a classification label, with possible values `entailment` (0), `not_entailment` (1)
- `sentence1`: a `string` feature
- `sentence2`: a `string` feature
- `knowledge`: a `string` feature with possible values `''`, `'World knowledge'`, `'Common sense'`
- `lexical-semantics`: a `string` feature
- `logic`: a `string` feature
- `predicate-argument-structure`: a `string` feature
#### RCB
- `idx`: an `int32` feature
- `label`: a classification label, with possible values `entailment` (0), `contradiction` (1), `neutral` (2)
- `premise`: a `string` feature
- `hypothesis`: a `string` feature
- `verb`: a `string` feature
- `negation`: a `string` feature with possible values `'no_negation'`, `'negation'`, `''`, `'double_negation'`
#### PARus
- `idx`: an `int32` feature
- `label`: a classification label, with possible values `choice1` (0), `choice2` (1)
- `premise`: a `string` feature
- `choice1`: a `string` feature
- `choice2`: a `string` feature
- `question`: a `string` feature with possible values `'cause'`, `'effect'`
#### MuSeRC
- `idx`: an `int32` feature
- `label` : a classification label, with possible values `false` (0) , `true` (1) (does the provided `answer` contain
a factual response to the `question`)
- `paragraph`: a `string` feature
- `question`: a `string` feature
- `answer`: a `string` feature
#### TERRa
- `idx`: an `int32` feature
- `label`: a classification label, with possible values `entailment` (0), `not_entailment` (1)
- `premise`: a `string` feature
- `hypothesis`: a `string` feature
#### RUSSE
- `idx`: an `int32` feature
- `label` : a classification label, with possible values `false` (0), `true` (1) (whether the given `word` used in the
same sense in both sentences)
- `word`: a `string` feature
- `sentence1`: a `string` feature
- `sentence2`: a `string` feature
- `gold_sense1`: an `int32` feature
- `gold_sense2`: an `int32` feature
- `start1`: an `int32` feature
- `start2`: an `int32` feature
- `end1`: an `int32` feature
- `end2`: an `int32` feature
#### RWSD
- `idx`: an `int32` feature
- `label` : a classification label, with possible values `false` (0), `true` (1) (whether the given spans are
coreferential)
- `text`: a `string` feature
- `span1_index`: an `int32` feature
- `span2_index`: an `int32` feature
- `span1_text`: a `string` feature
- `span2_text`: a `string` feature
#### DaNetQA
- `idx`: an `int32` feature
- `label` : a classification label, with possible values `false` (0), `true` (1) (yes/no answer to the `question` found
in the `passage`)
- `question`: a `string` feature
- `passage`: a `string` feature
#### RuCoS
- `idx`: an `int32` feature
- `passage`: a `string` feature
- `query`: a `string` feature
- `entities`: a `list of strings` feature
- `answers`: a `list of strings` feature
[More Information Needed]
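For orientation, any of the configurations described above can be loaded with the `datasets` library. A minimal sketch using the TERRa config (on recent versions of `datasets`, script-based benchmarks may require `trust_remote_code=True`):
```
from datasets import load_dataset

# Load the TERRa config; other config names are listed in the metadata above
# (lidirus, rcb, parus, muserc, terra, russe, rwsd, danetqa, rucos).
terra = load_dataset("RussianNLP/russian_super_glue", "terra", trust_remote_code=True)

print(terra)               # train / validation / test splits
print(terra["train"][0])   # premise, hypothesis, idx, label

# Test examples carry no gold labels and are marked with -1 (see the note above).
unlabeled = terra["test"].filter(lambda ex: ex["label"] == -1)
print(len(unlabeled), "unlabeled test examples")
```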
### Data Splits
#### LiDiRus
| |test|
|---|---:|
|LiDiRus|1104|
#### RCB
| |train|validation|test|
|----|---:|----:|---:|
|RCB|438|220|438|
#### PARus
| |train|validation|test|
|----|---:|----:|---:|
|PARus|400|100|500|
#### MuSeRC
| |train|validation|test|
|----|---:|----:|---:|
|MuSeRC|500|100|322|
#### TERRa
| |train|validation|test|
|----|---:|----:|---:|
|TERRa|2616|307|3198|
#### RUSSE
| |train|validation|test|
|----|---:|----:|---:|
|RUSSE|19845|8508|18892|
#### RWSD
| |train|validation|test|
|----|---:|----:|---:|
|RWSD|606|204|154|
#### DaNetQA
| |train|validation|test|
|----|---:|----:|---:|
|DaNetQA|1749|821|805|
#### RuCoS
| |train|validation|test|
|----|---:|----:|---:|
|RuCoS|72193|7577|7257|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
All our datasets are published under the MIT License.
### Citation Information
```
@article{shavrina2020russiansuperglue,
title={RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark},
author={Shavrina, Tatiana and Fenogenova, Alena and Emelyanov, Anton and Shevelev, Denis and Artemova, Ekaterina and Malykh, Valentin and Mikhailov, Vladislav and Tikhonova, Maria and Chertok, Andrey and Evlampiev, Andrey},
journal={arXiv preprint arXiv:2010.15925},
year={2020}
}
@misc{fenogenova2022russian,
title={Russian SuperGLUE 1.1: Revising the Lessons not Learned by Russian NLP models},
author={Alena Fenogenova and Maria Tikhonova and Vladislav Mikhailov and Tatiana Shavrina and Anton Emelyanov and Denis Shevelev and Alexandr Kukushkin and Valentin Malykh and Ekaterina Artemova},
year={2022},
eprint={2202.07791},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@slowwavesleep](https://github.com/slowwavesleep) for adding this dataset. |
thunlp/docred | thunlp | 2023-06-14T14:07:55Z | 724 | 23 | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"arxiv:1906.06127",
"region:us"
] | [
"text-retrieval"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: docred
pretty_name: DocRED
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-retrieval
task_ids:
- entity-linking-retrieval
dataset_info:
features:
- name: title
dtype: string
- name: sents
sequence:
sequence: string
- name: vertexSet
list:
list:
- name: name
dtype: string
- name: sent_id
dtype: int32
- name: pos
sequence: int32
- name: type
dtype: string
- name: labels
sequence:
- name: head
dtype: int32
- name: tail
dtype: int32
- name: relation_id
dtype: string
- name: relation_text
dtype: string
- name: evidence
sequence: int32
splits:
- name: validation
num_bytes: 3425030
num_examples: 998
- name: test
num_bytes: 2843877
num_examples: 1000
- name: train_annotated
num_bytes: 10413156
num_examples: 3053
- name: train_distant
num_bytes: 346001876
num_examples: 101873
download_size: 458040413
dataset_size: 362683939
---
# Dataset Card for DocRED
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [https://github.com/thunlp/DocRED](https://github.com/thunlp/DocRED)
- **Paper:** [DocRED: A Large-Scale Document-Level Relation Extraction Dataset](https://arxiv.org/abs/1906.06127)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 21.00 MB
- **Size of the generated dataset:** 20.12 MB
- **Total amount of disk used:** 41.14 MB
### Dataset Summary
Multiple entities in a document generally exhibit complex inter-sentence relations, and cannot be well handled by existing relation extraction (RE) methods that typically focus on extracting intra-sentence relations for single entity pairs. In order to accelerate the research on document-level RE, we introduce DocRED, a new dataset constructed from Wikipedia and Wikidata with three features:
- DocRED annotates both named entities and relations, and is the largest human-annotated dataset for document-level RE from plain text.
- DocRED requires reading multiple sentences in a document to extract entities and infer their relations by synthesizing all information of the document.
- Along with the human-annotated data, we also offer large-scale distantly supervised data, which enables DocRED to be adopted for both supervised and weakly supervised scenarios.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 21.00 MB
- **Size of the generated dataset:** 20.12 MB
- **Total amount of disk used:** 41.14 MB
An example of 'train_annotated' looks as follows.
```
{
"labels": {
"evidence": [[0]],
"head": [0],
"relation_id": ["P1"],
"relation_text": ["is_a"],
"tail": [0]
},
"sents": [["This", "is", "a", "sentence"], ["This", "is", "another", "sentence"]],
"title": "Title of the document",
"vertexSet": [[{
"name": "sentence",
"pos": [3],
"sent_id": 0,
"type": "NN"
}, {
"name": "sentence",
"pos": [3],
"sent_id": 1,
"type": "NN"
}], [{
"name": "This",
"pos": [0],
"sent_id": 0,
"type": "NN"
}]]
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `title`: a `string` feature.
- `sents`: a dictionary feature containing:
- `feature`: a `string` feature.
- `name`: a `string` feature.
- `sent_id`: a `int32` feature.
- `pos`: a `list` of `int32` features.
- `type`: a `string` feature.
- `labels`: a dictionary feature containing:
- `head`: a `int32` feature.
- `tail`: a `int32` feature.
- `relation_id`: a `string` feature.
- `relation_text`: a `string` feature.
- `evidence`: a `list` of `int32` features.
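To make the nested structure above concrete, here is a minimal sketch that turns the `vertexSet` and `labels` fields of one document into readable relation triples (the repo id follows this card; `trust_remote_code=True` may be needed for script-based datasets on recent versions of `datasets`):
```
from datasets import load_dataset

# Load the human-annotated training split.
docred = load_dataset("thunlp/docred", split="train_annotated", trust_remote_code=True)

doc = docred[0]
# Each entry in vertexSet is a cluster of mentions of one entity; take the first mention's name.
entity_names = [mentions[0]["name"] for mentions in doc["vertexSet"]]

# labels is a dict of parallel lists; head and tail index into vertexSet.
triples = [
    (entity_names[h], rel, entity_names[t])
    for h, rel, t in zip(doc["labels"]["head"],
                         doc["labels"]["relation_text"],
                         doc["labels"]["tail"])
]

print(doc["title"])
for head, relation, tail in triples[:5]:
    print(f"{head} --{relation}--> {tail}")
```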
### Data Splits
| name |train_annotated|train_distant|validation|test|
|-------|--------------:|------------:|---------:|---:|
|default| 3053| 101873| 998|1000|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{yao-etal-2019-docred,
title = "{D}oc{RED}: A Large-Scale Document-Level Relation Extraction Dataset",
author = "Yao, Yuan and
Ye, Deming and
Li, Peng and
Han, Xu and
Lin, Yankai and
Liu, Zhenghao and
Liu, Zhiyuan and
Huang, Lixin and
Zhou, Jie and
Sun, Maosong",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P19-1074",
doi = "10.18653/v1/P19-1074",
pages = "764--777",
}
```
### Contributions
Thanks to [@ghomasHudson](https://github.com/ghomasHudson), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
ma2za/many_emotions | ma2za | 2023-06-10T02:18:01Z | 152 | 9 | [
"task_categories:text-classification",
"multilinguality:multilingual",
"source_datasets:dair-ai/emotion",
"source_datasets:daily_dialog",
"source_datasets:go_emotions",
"language:en",
"license:apache-2.0",
"size_categories:1M<n<10M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us",
"emotion"
] | [
"text-classification"
] | 2023-05-20T21:59:41Z | 1 | ---
license:
apache-2.0
task_categories:
- text-classification
multilinguality:
- multilingual
source_datasets:
- dair-ai/emotion
- daily_dialog
- go_emotions
language:
- en
size_categories:
- 100K<n<1M
tags:
- emotion
---
# Dataset Card for "many_emotions"
## Dataset Description
- **Homepage:**
### Dataset Summary
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
The data fields are:
- `id`: unique identifier
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `anger` (0), `fear` (1), `joy` (2), `love` (3), `sadness` (4), `surprise` (5), `neutral` (6).
- `license`: inherited license from source dataset
- `dataset`: source dataset
- `language`: text language
### Data Splits
The dataset has 2 configurations:
- raw: with 5 configurations for each language
- split: with configurations train, validation, test
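A minimal loading sketch, assuming the integer label ids listed above and the `split` configuration (exact configuration behaviour may vary):
```
from datasets import load_dataset

# Load the pre-split configuration; trust_remote_code may be needed on recent `datasets` versions.
emotions = load_dataset("ma2za/many_emotions", "split", trust_remote_code=True)

# Label ids follow the mapping given in the Data Fields section above.
label_names = ["anger", "fear", "joy", "love", "sadness", "surprise", "neutral"]

example = emotions["train"][0]
print(example["text"])
print("label:", label_names[example["label"]],
      "| source:", example["dataset"], "| language:", example["language"])
```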
## Dataset Creation
### Curation Rationale
The raw configuration contains duplicates.
In the "split" configuration, identical rows may appear with different labels.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
## Additional Information
### Licensing Information
Each row has its own license which is inherited from the source dataset. |
lighteval/mmlu | lighteval | 2023-06-09T16:36:19Z | 11,655 | 40 | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:1M<n<10M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2009.03300",
"arxiv:2005.00700",
"arxiv:2005.14165",
"arxiv:2008.02275",
"region:us"
] | [
"question-answering"
] | 2023-05-16T09:39:28Z | null | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: mmlu
pretty_name: Measuring Massive Multitask Language Understanding
language_bcp47:
- en-US
dataset_info:
- config_name: abstract_algebra
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 19328
num_examples: 100
- name: validation
num_bytes: 2024
num_examples: 11
- name: dev
num_bytes: 830
num_examples: 5
download_size: 166184960
dataset_size: 160623559
- config_name: anatomy
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 33121
num_examples: 135
- name: validation
num_bytes: 3140
num_examples: 14
- name: dev
num_bytes: 967
num_examples: 5
download_size: 166184960
dataset_size: 160638605
- config_name: astronomy
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 46771
num_examples: 152
- name: validation
num_bytes: 5027
num_examples: 16
- name: dev
num_bytes: 2076
num_examples: 5
download_size: 166184960
dataset_size: 160655251
- config_name: business_ethics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 33252
num_examples: 100
- name: validation
num_bytes: 3038
num_examples: 11
- name: dev
num_bytes: 2190
num_examples: 5
download_size: 166184960
dataset_size: 160639857
- config_name: clinical_knowledge
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 62754
num_examples: 265
- name: validation
num_bytes: 6664
num_examples: 29
- name: dev
num_bytes: 1210
num_examples: 5
download_size: 166184960
dataset_size: 160672005
- config_name: college_biology
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 48797
num_examples: 144
- name: validation
num_bytes: 4819
num_examples: 16
- name: dev
num_bytes: 1532
num_examples: 5
download_size: 166184960
dataset_size: 160656525
- config_name: college_chemistry
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 24708
num_examples: 100
- name: validation
num_bytes: 2328
num_examples: 8
- name: dev
num_bytes: 1331
num_examples: 5
download_size: 166184960
dataset_size: 160629744
- config_name: college_computer_science
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 42641
num_examples: 100
- name: validation
num_bytes: 4663
num_examples: 11
- name: dev
num_bytes: 2765
num_examples: 5
download_size: 166184960
dataset_size: 160651446
- config_name: college_mathematics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 24711
num_examples: 100
- name: validation
num_bytes: 2668
num_examples: 11
- name: dev
num_bytes: 1493
num_examples: 5
download_size: 166184960
dataset_size: 160630249
- config_name: college_medicine
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 82397
num_examples: 173
- name: validation
num_bytes: 7909
num_examples: 22
- name: dev
num_bytes: 1670
num_examples: 5
download_size: 166184960
dataset_size: 160693353
- config_name: college_physics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 30181
num_examples: 102
- name: validation
num_bytes: 3490
num_examples: 11
- name: dev
num_bytes: 1412
num_examples: 5
download_size: 166184960
dataset_size: 160636460
- config_name: computer_security
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 27124
num_examples: 100
- name: validation
num_bytes: 4549
num_examples: 11
- name: dev
num_bytes: 1101
num_examples: 5
download_size: 166184960
dataset_size: 160634151
- config_name: conceptual_physics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 40709
num_examples: 235
- name: validation
num_bytes: 4474
num_examples: 26
- name: dev
num_bytes: 934
num_examples: 5
download_size: 166184960
dataset_size: 160647494
- config_name: econometrics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 46547
num_examples: 114
- name: validation
num_bytes: 4967
num_examples: 12
- name: dev
num_bytes: 1644
num_examples: 5
download_size: 166184960
dataset_size: 160654535
- config_name: electrical_engineering
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 25142
num_examples: 145
- name: validation
num_bytes: 2903
num_examples: 16
- name: dev
num_bytes: 972
num_examples: 5
download_size: 166184960
dataset_size: 160630394
- config_name: elementary_mathematics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 70108
num_examples: 378
- name: validation
num_bytes: 8988
num_examples: 41
- name: dev
num_bytes: 1440
num_examples: 5
download_size: 166184960
dataset_size: 160681913
- config_name: formal_logic
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 49785
num_examples: 126
- name: validation
num_bytes: 6252
num_examples: 14
- name: dev
num_bytes: 1757
num_examples: 5
download_size: 166184960
dataset_size: 160659171
- config_name: global_facts
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 18403
num_examples: 100
- name: validation
num_bytes: 1865
num_examples: 10
- name: dev
num_bytes: 1229
num_examples: 5
download_size: 166184960
dataset_size: 160622874
- config_name: high_school_biology
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 109732
num_examples: 310
- name: validation
num_bytes: 11022
num_examples: 32
- name: dev
num_bytes: 1673
num_examples: 5
download_size: 166184960
dataset_size: 160723804
- config_name: high_school_chemistry
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 58464
num_examples: 203
- name: validation
num_bytes: 7092
num_examples: 22
- name: dev
num_bytes: 1220
num_examples: 5
download_size: 166184960
dataset_size: 160668153
- config_name: high_school_computer_science
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 44476
num_examples: 100
- name: validation
num_bytes: 3343
num_examples: 9
- name: dev
num_bytes: 2918
num_examples: 5
download_size: 166184960
dataset_size: 160652114
- config_name: high_school_european_history
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 270300
num_examples: 165
- name: validation
num_bytes: 29632
num_examples: 18
- name: dev
num_bytes: 11564
num_examples: 5
download_size: 166184960
dataset_size: 160912873
- config_name: high_school_geography
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 42034
num_examples: 198
- name: validation
num_bytes: 4332
num_examples: 22
- name: dev
num_bytes: 1403
num_examples: 5
download_size: 166184960
dataset_size: 160649146
- config_name: high_school_government_and_politics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 66074
num_examples: 193
- name: validation
num_bytes: 7063
num_examples: 21
- name: dev
num_bytes: 1779
num_examples: 5
download_size: 166184960
dataset_size: 160676293
- config_name: high_school_macroeconomics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 117687
num_examples: 390
- name: validation
num_bytes: 13020
num_examples: 43
- name: dev
num_bytes: 1328
num_examples: 5
download_size: 166184960
dataset_size: 160733412
- config_name: high_school_mathematics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 54854
num_examples: 270
- name: validation
num_bytes: 5765
num_examples: 29
- name: dev
num_bytes: 1297
num_examples: 5
download_size: 166184960
dataset_size: 160663293
- config_name: high_school_microeconomics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 75703
num_examples: 238
- name: validation
num_bytes: 7553
num_examples: 26
- name: dev
num_bytes: 1298
num_examples: 5
download_size: 166184960
dataset_size: 160685931
- config_name: high_school_physics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 59538
num_examples: 151
- name: validation
num_bytes: 6771
num_examples: 17
- name: dev
num_bytes: 1489
num_examples: 5
download_size: 166184960
dataset_size: 160669175
- config_name: high_school_psychology
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 159407
num_examples: 545
- name: validation
num_bytes: 17269
num_examples: 60
- name: dev
num_bytes: 1905
num_examples: 5
download_size: 166184960
dataset_size: 160779958
- config_name: high_school_statistics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 110702
num_examples: 216
- name: validation
num_bytes: 9997
num_examples: 23
- name: dev
num_bytes: 2528
num_examples: 5
download_size: 166184960
dataset_size: 160724604
- config_name: high_school_us_history
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 296734
num_examples: 204
- name: validation
num_bytes: 31706
num_examples: 22
- name: dev
num_bytes: 8864
num_examples: 5
download_size: 166184960
dataset_size: 160938681
- config_name: high_school_world_history
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 378617
num_examples: 237
- name: validation
num_bytes: 45501
num_examples: 26
- name: dev
num_bytes: 4882
num_examples: 5
download_size: 166184960
dataset_size: 161030377
- config_name: human_aging
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 46098
num_examples: 223
- name: validation
num_bytes: 4707
num_examples: 23
- name: dev
num_bytes: 1008
num_examples: 5
download_size: 166184960
dataset_size: 160653190
- config_name: human_sexuality
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 32110
num_examples: 131
- name: validation
num_bytes: 2421
num_examples: 12
- name: dev
num_bytes: 1077
num_examples: 5
download_size: 166184960
dataset_size: 160636985
- config_name: international_law
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 53531
num_examples: 121
- name: validation
num_bytes: 6473
num_examples: 13
- name: dev
num_bytes: 2418
num_examples: 5
download_size: 166184960
dataset_size: 160663799
- config_name: jurisprudence
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 33986
num_examples: 108
- name: validation
num_bytes: 3729
num_examples: 11
- name: dev
num_bytes: 1303
num_examples: 5
download_size: 166184960
dataset_size: 160640395
- config_name: logical_fallacies
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 50117
num_examples: 163
- name: validation
num_bytes: 5103
num_examples: 18
- name: dev
num_bytes: 1573
num_examples: 5
download_size: 166184960
dataset_size: 160658170
- config_name: machine_learning
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 33880
num_examples: 112
- name: validation
num_bytes: 3232
num_examples: 11
- name: dev
num_bytes: 2323
num_examples: 5
download_size: 166184960
dataset_size: 160640812
- config_name: management
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 20002
num_examples: 103
- name: validation
num_bytes: 1820
num_examples: 11
- name: dev
num_bytes: 898
num_examples: 5
download_size: 166184960
dataset_size: 160624097
- config_name: marketing
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 63025
num_examples: 234
- name: validation
num_bytes: 7394
num_examples: 25
- name: dev
num_bytes: 1481
num_examples: 5
download_size: 166184960
dataset_size: 160673277
- config_name: medical_genetics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 20864
num_examples: 100
- name: validation
num_bytes: 3005
num_examples: 11
- name: dev
num_bytes: 1089
num_examples: 5
download_size: 166184960
dataset_size: 160626335
- config_name: miscellaneous
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 147704
num_examples: 783
- name: validation
num_bytes: 14330
num_examples: 86
- name: dev
num_bytes: 699
num_examples: 5
download_size: 166184960
dataset_size: 160764110
- config_name: moral_disputes
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 107818
num_examples: 346
- name: validation
num_bytes: 12420
num_examples: 38
- name: dev
num_bytes: 1755
num_examples: 5
download_size: 166184960
dataset_size: 160723370
- config_name: moral_scenarios
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 374026
num_examples: 895
- name: validation
num_bytes: 42338
num_examples: 100
- name: dev
num_bytes: 2058
num_examples: 5
download_size: 166184960
dataset_size: 161019799
- config_name: nutrition
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 92410
num_examples: 306
- name: validation
num_bytes: 8436
num_examples: 33
- name: dev
num_bytes: 2085
num_examples: 5
download_size: 166184960
dataset_size: 160704308
- config_name: philosophy
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 80073
num_examples: 311
- name: validation
num_bytes: 9184
num_examples: 34
- name: dev
num_bytes: 988
num_examples: 5
download_size: 166184960
dataset_size: 160691622
- config_name: prehistory
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 89594
num_examples: 324
- name: validation
num_bytes: 10285
num_examples: 35
- name: dev
num_bytes: 1878
num_examples: 5
download_size: 166184960
dataset_size: 160703134
- config_name: professional_accounting
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 124550
num_examples: 282
- name: validation
num_bytes: 14372
num_examples: 31
- name: dev
num_bytes: 2148
num_examples: 5
download_size: 166184960
dataset_size: 160742447
- config_name: professional_law
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 1891762
num_examples: 1534
- name: validation
num_bytes: 203519
num_examples: 170
- name: dev
num_bytes: 6610
num_examples: 5
download_size: 166184960
dataset_size: 162703268
- config_name: professional_medicine
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 217561
num_examples: 272
- name: validation
num_bytes: 23847
num_examples: 31
- name: dev
num_bytes: 3807
num_examples: 5
download_size: 166184960
dataset_size: 160846592
- config_name: professional_psychology
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 225899
num_examples: 612
- name: validation
num_bytes: 29101
num_examples: 69
- name: dev
num_bytes: 2267
num_examples: 5
download_size: 166184960
dataset_size: 160858644
- config_name: public_relations
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 28760
num_examples: 110
- name: validation
num_bytes: 4566
num_examples: 12
- name: dev
num_bytes: 1496
num_examples: 5
download_size: 166184960
dataset_size: 160636199
- config_name: security_studies
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 204844
num_examples: 245
- name: validation
num_bytes: 22637
num_examples: 27
- name: dev
num_bytes: 5335
num_examples: 5
download_size: 166184960
dataset_size: 160834193
- config_name: sociology
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 66243
num_examples: 201
- name: validation
num_bytes: 7184
num_examples: 22
- name: dev
num_bytes: 1613
num_examples: 5
download_size: 166184960
dataset_size: 160676417
- config_name: us_foreign_policy
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 28443
num_examples: 100
- name: validation
num_bytes: 3264
num_examples: 11
- name: dev
num_bytes: 1611
num_examples: 5
download_size: 166184960
dataset_size: 160634695
- config_name: virology
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 38759
num_examples: 166
- name: validation
num_bytes: 5463
num_examples: 18
- name: dev
num_bytes: 1096
num_examples: 5
download_size: 166184960
dataset_size: 160646695
- config_name: world_religions
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 25274
num_examples: 171
- name: validation
num_bytes: 2765
num_examples: 19
- name: dev
num_bytes: 670
num_examples: 5
download_size: 166184960
dataset_size: 160630086
---
# Dataset Card for MMLU
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository**: https://github.com/hendrycks/test
- **Paper**: https://arxiv.org/abs/2009.03300
### Dataset Summary
[Measuring Massive Multitask Language Understanding](https://arxiv.org/pdf/2009.03300) by [Dan Hendrycks](https://people.eecs.berkeley.edu/~hendrycks/), [Collin Burns](http://collinpburns.com), [Steven Basart](https://stevenbas.art), Andy Zou, Mantas Mazeika, [Dawn Song](https://people.eecs.berkeley.edu/~dawnsong/), and [Jacob Steinhardt](https://www.stat.berkeley.edu/~jsteinhardt/) (ICLR 2021).
This is a massive multitask test consisting of multiple-choice questions from various branches of knowledge. The test spans subjects in the humanities, social sciences, hard sciences, and other areas that are important for some people to learn. This covers 57 tasks including elementary mathematics, US history, computer science, law, and more. To attain high accuracy on this test, models must possess extensive world knowledge and problem solving ability.
A complete list of tasks: ['abstract_algebra', 'anatomy', 'astronomy', 'business_ethics', 'clinical_knowledge', 'college_biology', 'college_chemistry', 'college_computer_science', 'college_mathematics', 'college_medicine', 'college_physics', 'computer_security', 'conceptual_physics', 'econometrics', 'electrical_engineering', 'elementary_mathematics', 'formal_logic', 'global_facts', 'high_school_biology', 'high_school_chemistry', 'high_school_computer_science', 'high_school_european_history', 'high_school_geography', 'high_school_government_and_politics', 'high_school_macroeconomics', 'high_school_mathematics', 'high_school_microeconomics', 'high_school_physics', 'high_school_psychology', 'high_school_statistics', 'high_school_us_history', 'high_school_world_history', 'human_aging', 'human_sexuality', 'international_law', 'jurisprudence', 'logical_fallacies', 'machine_learning', 'management', 'marketing', 'medical_genetics', 'miscellaneous', 'moral_disputes', 'moral_scenarios', 'nutrition', 'philosophy', 'prehistory', 'professional_accounting', 'professional_law', 'professional_medicine', 'professional_psychology', 'public_relations', 'security_studies', 'sociology', 'us_foreign_policy', 'virology', 'world_religions']
### Supported Tasks and Leaderboards
| Model | Authors | Humanities | Social Science | STEM | Other | Average |
|------------------------------------|----------|:-------:|:-------:|:-------:|:-------:|:-------:|
| [UnifiedQA](https://arxiv.org/abs/2005.00700) | Khashabi et al., 2020 | 45.6 | 56.6 | 40.2 | 54.6 | 48.9
| [GPT-3](https://arxiv.org/abs/2005.14165) (few-shot) | Brown et al., 2020 | 40.8 | 50.4 | 36.7 | 48.8 | 43.9
| [GPT-2](https://arxiv.org/abs/2005.14165) | Radford et al., 2019 | 32.8 | 33.3 | 30.2 | 33.1 | 32.4
| Random Baseline | N/A | 25.0 | 25.0 | 25.0 | 25.0 | 25.0
### Languages
English
## Dataset Structure
### Data Instances
An example from anatomy subtask looks as follows:
```
{
"question": "What is the embryological origin of the hyoid bone?",
"choices": ["The first pharyngeal arch", "The first and second pharyngeal arches", "The second pharyngeal arch", "The second and third pharyngeal arches"],
"answer": "D"
}
```
### Data Fields
- `question`: a string feature
- `choices`: a list of 4 string features
- `answer`: a ClassLabel feature
### Data Splits
- `auxiliary_train`: auxiliary multiple-choice training questions from ARC, MC_TEST, OBQA, RACE, etc.
- `dev`: 5 examples per subtask, meant for few-shot setting
- `test`: there are at least 100 examples per subtask
| | auxiliary_train | dev | val | test |
| ----- | :------: | :-----: | :-----: | :-----: |
| TOTAL | 99842 | 285 | 1531 | 14042
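A minimal loading sketch is shown below; the Hub dataset id (`cais/mmlu`) and the subtask configuration name are assumptions, so substitute the id under which this card is hosted and any of the 57 task names listed above.
```python
from datasets import load_dataset

# Hedged sketch: "cais/mmlu" and the "anatomy" configuration are assumptions;
# use this card's actual Hub id and any of the 57 subtask names listed above.
ds = load_dataset("cais/mmlu", "anatomy")

# `answer` is a ClassLabel, so each example stores an integer index (0-3);
# int2str() maps it back to the letter choice shown in the instance example.
example = ds["test"][0]
letter = ds["test"].features["answer"].int2str(example["answer"])
print(example["question"], example["choices"], letter)
```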
## Dataset Creation
### Curation Rationale
Transformer models have driven this recent progress by pretraining on massive text corpora, including all of Wikipedia, thousands of books, and numerous websites. These models consequently see extensive information about specialized topics, most of which is not assessed by existing NLP benchmarks. To bridge the gap between the wide-ranging knowledge that models see during pretraining and the existing measures of success, we introduce a new benchmark for assessing models across a diverse set of subjects that humans learn.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[MIT License](https://github.com/hendrycks/test/blob/master/LICENSE)
### Citation Information
If you find this useful in your research, please consider citing the test and also the [ETHICS](https://arxiv.org/abs/2008.02275) dataset it draws from:
```
@article{hendryckstest2021,
title={Measuring Massive Multitask Language Understanding},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)},
year={2021}
}
@article{hendrycks2021ethics,
title={Aligning AI With Shared Human Values},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andrew Critch and Jerry Li and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)},
year={2021}
}
```
### Contributions
Thanks to [@andyzoujm](https://github.com/andyzoujm) for adding this dataset.
|
albertvillanova/medmnist-v2 | albertvillanova | 2023-05-30T05:40:52Z | 1,240 | 10 | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"task_ids:multi-label-image-classification",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"arxiv:2110.14795",
"region:us",
"medical"
] | [
"image-classification"
] | 2023-05-29T09:00:40Z | 1 | ---
language: en
license: cc-by-4.0
multilinguality:
- monolingual
pretty_name: MedMNIST v2
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
- multi-label-image-classification
paperswithcode_id: medmnist-v2
tags:
- medical
---
# Dataset Card for MedMNIST v2
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://medmnist.com/
- **Repository:** https://github.com/MedMNIST/MedMNIST
- **Paper:** [MedMNIST v2 -- A large-scale lightweight benchmark for 2D and 3D biomedical image classification](https://arxiv.org/abs/2110.14795)
- **Leaderboard:**
- **Point of Contact:** [Bingbing Ni](mailto:[email protected])
### Dataset Summary
We introduce MedMNIST v2, a large-scale MNIST-like collection of standardized biomedical images, including 12 datasets for 2D and 6 datasets for 3D. All images are pre-processed into 28 x 28 (2D) or 28 x 28 x 28 (3D) with the corresponding classification labels, so that no background knowledge is required for users. Covering primary data modalities in biomedical images, MedMNIST v2 is designed to perform classification on lightweight 2D and 3D images with various data scales (from 100 to 100,000) and diverse tasks (binary/multi-class, ordinal regression and multi-label). The resulting dataset, consisting of 708,069 2D images and 9,998 3D images in total, could support numerous research / educational purposes in biomedical image analysis, computer vision and machine learning. We benchmark several baseline methods on MedMNIST v2, including 2D / 3D neural networks and open-source / commercial AutoML tools.
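A minimal loading sketch under stated assumptions: the configuration name (`pathmnist`) and the field names (`image`, `label`) are not specified in this card, so check the dataset viewer for the exact values.
```python
from datasets import load_dataset

# Hedged sketch: the configuration name and field names below are assumptions.
ds = load_dataset("albertvillanova/medmnist-v2", "pathmnist", split="train")
sample = ds[0]
print(sample["image"].size)  # pre-processed 28 x 28 2D image (PIL)
print(sample["label"])
```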
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English (`en`).
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is licensed under [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) (CC BY 4.0).
Each subset keeps the same license as that of the source dataset. Please also cite the corresponding paper of source data if you use any subset of MedMNIST.
### Citation Information
If you find this project useful, please cite both v1 and v2 papers:
```
@article{medmnistv2,
title={MedMNIST v2-A large-scale lightweight benchmark for 2D and 3D biomedical image classification},
author={Yang, Jiancheng and Shi, Rui and Wei, Donglai and Liu, Zequan and Zhao, Lin and Ke, Bilian and Pfister, Hanspeter and Ni, Bingbing},
journal={Scientific Data},
volume={10},
number={1},
pages={41},
year={2023},
publisher={Nature Publishing Group UK London}
}
@inproceedings{medmnistv1,
title={MedMNIST Classification Decathlon: A Lightweight AutoML Benchmark for Medical Image Analysis},
author={Yang, Jiancheng and Shi, Rui and Ni, Bingbing},
booktitle={IEEE 18th International Symposium on Biomedical Imaging (ISBI)},
pages={191--195},
year={2021}
}
```
Please also cite the corresponding paper(s) of source data if you use any subset of MedMNIST as per the description on the [project website](https://medmnist.com/).
### Contributions
Thanks to [@albertvillanova](https://huggingface.co/albertvillanova) for adding this dataset.
|
bigscience/evaluation-results | bigscience | 2023-05-28T00:13:53Z | 29,439 | 10 | [
"task_categories:other",
"size_categories:100M<n<1B",
"region:us"
] | [
"other"
] | 2022-08-01T18:35:58Z | null | ---
pretty_name: evaluation-results
size_categories:
- 100M<n<1B
task_categories:
- other
---
# BigScience BLOOM Evaluation Results
This repository contains evaluation results & original predictions of BLOOM & friends.
## Usage
You can load numeric results via:
```python
from datasets import load_dataset
ds = load_dataset("bigscience/evaluation-results", "bloom")
```
If it takes too long, it may be faster to clone the repository and load the data from disk:
```python
!git clone https://huggingface.co/datasets/bigscience/evaluation-results
ds = load_dataset("evaluation-results", "bloom")
```
For example generations (.jsonl files), you need to manually browse the repository.
## Structure
For the `bigsciencelmevalharness`, `lmevalharness` & `codeeval` evaluation frameworks, the structure is:
`model_name > evaluation_framework > checkpoint_type > dataset_name > data`
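As a sketch, the repository layout described above can be browsed programmatically with `huggingface_hub` before downloading anything:
```python
from huggingface_hub import list_repo_files

# List files in the dataset repository and keep only example-generation files (.jsonl).
files = list_repo_files("bigscience/evaluation-results", repo_type="dataset")
jsonl_files = [f for f in files if f.endswith(".jsonl")]
print(len(jsonl_files), jsonl_files[:5])
```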
## Evaluation Procedure
- `bigsciencelmevalharness` files were created using the below:
- https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/291
- https://github.com/bigscience-workshop/lm-evaluation-harness
- `lmevalharness` files were created using the below:
- https://github.com/bigscience-workshop/Megatron-DeepSpeed
- https://github.com/EleutherAI/lm-evaluation-harness
- `codeeval` files were created using the HumanEval code dataset with the below:
- https://github.com/loubnabnl/bloom-code-evaluation
|
AlekseyKorshuk/roleplay-characters | AlekseyKorshuk | 2023-05-27T06:22:09Z | 143 | 21 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-05-27T06:20:12Z | 1 | ---
dataset_info:
features:
- name: char_name
dtype: string
- name: char_persona
dtype: string
- name: world_scenario
dtype: string
- name: char_greeting
dtype: string
- name: example_dialogue
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: personality
dtype: string
- name: scenario
dtype: string
- name: first_mes
dtype: string
- name: mes_example
dtype: string
- name: metadata
struct:
- name: created
dtype: int64
- name: modified
dtype: int64
- name: source
dtype: 'null'
- name: tool
struct:
- name: name
dtype: string
- name: url
dtype: string
- name: version
dtype: string
- name: version
dtype: int64
- name: image
dtype: image
splits:
- name: train
num_bytes: 474656700.0
num_examples: 784
download_size: 0
dataset_size: 474656700.0
---
# Dataset Card for "roleplay-characters"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hlillemark/c4_t5_pretrain | hlillemark | 2023-05-22T16:33:38Z | 26,493 | 0 | [
"size_categories:100M<n<1B",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-05-19T09:17:45Z | null | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: labels
sequence: int64
splits:
- name: validation
num_bytes: 53400000
num_examples: 10000
- name: train
num_bytes: 961505597520
num_examples: 180057228
download_size: 2939856140
dataset_size: 961558997520
---
# Dataset Card for "c4_t5_pretrain"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
amitpuri/bollywood-celebs | amitpuri | 2023-05-17T17:19:53Z | 27 | 1 | [
"task_categories:image-classification",
"language:en",
"license:mit",
"modality:image",
"region:us"
] | [
"image-classification"
] | 2023-05-03T07:55:38Z | 1 | ---
task_categories:
- image-classification
license: mit
language:
- en
pretty_name: ' bollywood-celebs'
---
# bollywood-celebs
## Dataset Description
This dataset has been automatically processed by AutoTrain for project bollywood-celebs.
Credits: https://www.kaggle.com/datasets/sushilyadav1998/bollywood-celeb-localized-face-dataset
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<64x64 RGB PIL image>",
"target": 15
},
{
"image": "<64x64 RGB PIL image>",
"target": 82
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Aamir_Khan', 'Abhay_Deol', 'Abhishek_Bachchan', 'Aftab_Shivdasani', 'Aishwarya_Rai', 'Ajay_Devgn', 'Akshay_Kumar', 'Akshaye_Khanna', 'Alia_Bhatt', 'Ameesha_Patel', 'Amitabh_Bachchan', 'Amrita_Rao', 'Amy_Jackson', 'Anil_Kapoor', 'Anushka_Sharma', 'Anushka_Shetty', 'Arjun_Kapoor', 'Arjun_Rampal', 'Arshad_Warsi', 'Asin', 'Ayushmann_Khurrana', 'Bhumi_Pednekar', 'Bipasha_Basu', 'Bobby_Deol', 'Deepika_Padukone', 'Disha_Patani', 'Emraan_Hashmi', 'Esha_Gupta', 'Farhan_Akhtar', 'Govinda', 'Hrithik_Roshan', 'Huma_Qureshi', 'Ileana_DCruz', 'Irrfan_Khan', 'Jacqueline_Fernandez', 'John_Abraham', 'Juhi_Chawla', 'Kajal_Aggarwal', 'Kajol', 'Kangana_Ranaut', 'Kareena_Kapoor', 'Karisma_Kapoor', 'Kartik_Aaryan', 'Katrina_Kaif', 'Kiara_Advani', 'Kriti_Kharbanda', 'Kriti_Sanon', 'Kunal_Khemu', 'Lara_Dutta', 'Madhuri_Dixit', 'Manoj_Bajpayee', 'Mrunal_Thakur', 'Nana_Patekar', 'Nargis_Fakhri', 'Naseeruddin_Shah', 'Nushrat_Bharucha', 'Paresh_Rawal', 'Parineeti_Chopra', 'Pooja_Hegde', 'Prabhas', 'Prachi_Desai', 'Preity_Zinta', 'Priyanka_Chopra', 'R_Madhavan', 'Rajkummar_Rao', 'Ranbir_Kapoor', 'Randeep_Hooda', 'Rani_Mukerji', 'Ranveer_Singh', 'Richa_Chadda', 'Riteish_Deshmukh', 'Saif_Ali_Khan', 'Salman_Khan', 'Sanjay_Dutt', 'Sara_Ali_Khan', 'Shah_Rukh_Khan', 'Shahid_Kapoor', 'Shilpa_Shetty', 'Shraddha_Kapoor', 'Shreyas_Talpade', 'Shruti_Haasan', 'Sidharth_Malhotra', 'Sonakshi_Sinha', 'Sonam_Kapoor', 'Suniel_Shetty', 'Sunny_Deol', 'Sushant_Singh_Rajput', 'Taapsee_Pannu', 'Tabu', 'Tamannaah_Bhatia', 'Tiger_Shroff', 'Tusshar_Kapoor', 'Uday_Chopra', 'Vaani_Kapoor', 'Varun_Dhawan', 'Vicky_Kaushal', 'Vidya_Balan', 'Vivek_Oberoi', 'Yami_Gautam', 'Zareen_Khan'], id=None)"
}
```
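A minimal loading sketch; the `image` and `target` fields follow the schema above, and the split names are listed in the next section.
```python
from datasets import load_dataset

# Hedged sketch: assumes the default configuration exposes a "train" split.
ds = load_dataset("amitpuri/bollywood-celebs", split="train")
row = ds[0]
print(row["image"].size)                             # 64 x 64 RGB PIL image
print(ds.features["target"].int2str(row["target"]))  # e.g. "Aamir_Khan"
```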
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 6863 |
| valid | 1764 | |
gofixyourself/EasyPortrait | gofixyourself | 2023-05-12T12:41:47Z | 167 | 7 | [
"task_categories:image-segmentation",
"task_ids:semantic-segmentation",
"annotations_creators:crowdsourced",
"source_datasets:original",
"license:cc-by-sa-4.0",
"size_categories:10K<n<100K",
"modality:image",
"arxiv:2304.13509",
"region:us",
"portrait-segmentation",
"face-parsing",
"face-beautification"
] | [
"image-segmentation"
] | 2023-05-05T10:58:42Z | 1 | ---
license: cc-by-sa-4.0
task_categories:
- image-segmentation
task_ids:
- semantic-segmentation
size_categories:
- 10K<n<100K
annotations_creators:
- crowdsourced
source_datasets:
- original
tags:
- portrait-segmentation
- face-parsing
- face-beautification
pretty_name: EasyPortrait
paperswithcode_id: easyportrait
---
# EasyPortrait - Face Parsing and Portrait Segmentation Dataset

We introduce **EasyPortrait**, a large-scale image dataset for portrait segmentation and face parsing. The proposed dataset can be used for several tasks, such as background removal in conferencing applications, teeth whitening, face skin enhancement, and red-eye removal or eye colorization.
The EasyPortrait dataset is about **26 GB** in size and contains **20,000** RGB images (~17.5K FullHD images) with high-quality annotated masks. The dataset is divided into training, validation, and test sets by subject `user_id`: the training set includes 14,000 images, the validation set 2,000 images, and the test set 4,000 images.
Training images come from 5,947 unique users, while validation images come from 860 users and test images from 1,570. On average, each EasyPortrait image is annotated with 254 polygon points, which reflects the high quality of the annotations. Segmentation masks were created from the polygons of each annotation.
For more information see our paper [EasyPortrait – Face Parsing and Portrait Segmentation Dataset](https://arxiv.org/abs/2304.13509).
## The model results trained on the EasyPortrait dataset
Example results of a model trained on the EasyPortrait dataset and tested on out-of-domain data:


Example results of a model trained on the EasyPortrait dataset and tested on in-domain data:


## Structure
```
.
├── images.zip
│ ├── train/ # Train set: 14k
│ ├── val/ # Validation set: 2k
│ ├── test/ # Test set: 4k
├── annotations.zip
│ ├── meta.zip # Meta-information (width, height, brightness, imhash, user_id)
│ ├── train/
│ ├── val/
│ ├── test/
...
```
## Annotations
Annotations are provided as 2D arrays stored as *.png images, with the following classes:
| Index | Class |
|------:|:-----------|
| 0 | BACKGROUND |
| 1 | PERSON |
| 2 | SKIN |
| 3 | LEFT BROW |
| 4 | RIGHT_BROW |
| 5 | LEFT_EYE |
| 6 | RIGHT_EYE |
| 7 | LIPS |
| 8 | TEETH |
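A minimal sketch for working with the masks once `annotations.zip` is extracted; the file path below is a placeholder, and the class indices follow the table above.
```python
import numpy as np
from PIL import Image

# Hedged sketch: the mask path is a placeholder for an extracted annotation file.
mask = np.array(Image.open("annotations/train/example_mask.png"))

person = mask > 0   # everything except BACKGROUND (index 0)
skin = mask == 2    # SKIN pixels, e.g. for face skin enhancement
teeth = mask == 8   # TEETH pixels, e.g. for teeth whitening
print(person.mean(), skin.mean(), teeth.mean())
```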
We also provide additional meta-information for the dataset in the `annotations/meta.zip` file:
| | attachment_id | user_id | data_hash | width | height | brightness | train | test | valid |
|---:|:--------------|:--------|:----------|------:|-------:|-----------:|:------|:------|:------|
| 0 | de81cc1c-... | 1b... | e8f... | 1440 | 1920 | 136 | True | False | False |
| 1 | 3c0cec5a-... | 64... | df5... | 1440 | 1920 | 148 | False | False | True |
| 2 | d17ca986-... | cf... | a69... | 1920 | 1080 | 140 | False | True | False |
where:
- `attachment_id` - image file name without extension
- `user_id` - unique anonymized user ID
- `data_hash` - image hash by using Perceptual hashing
- `width` - image width
- `height` - image height
- `brightness` - image brightness
- `train`, `test`, `valid` are the binary columns for train / test / val subsets respectively
## Authors and Credits
- [Alexander Kapitanov](https://www.linkedin.com/in/hukenovs)
- [Karina Kvanchiani](https://www.linkedin.com/in/kvanchiani)
- [Sofia Kirillova](https://www.linkedin.com/in/gofixyourself/)
## Links
- [arXiv](https://arxiv.org/abs/2304.13509)
- [Paperswithcode](https://paperswithcode.com/dataset/easyportrait)
- [Kaggle](https://www.kaggle.com/datasets/kapitanov/easyportrait)
- [Habr](https://habr.com/ru/companies/sberdevices/articles/731794/)
- [Gitlab](https://gitlab.aicloud.sbercloud.ru/rndcv/easyportrait)
## Citation
You can cite the paper using the following BibTeX entry:
@article{EasyPortrait,
title={EasyPortrait - Face Parsing and Portrait Segmentation Dataset},
author={Kapitanov, Alexander and Kvanchiani, Karina and Kirillova, Sofia},
journal={arXiv preprint arXiv:2304.13509},
year={2023}
}
## License
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a><br />This work is licensed under a variant of <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>.
Please see the specific [license](https://github.com/hukenovs/easyportrait/blob/master/license/en_us.pdf). |
jainr3/diffusiondb-pixelart | jainr3 | 2023-05-11T18:59:45Z | 562 | 43 | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"task_ids:image-captioning",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:modified",
"language:en",
"license:cc0-1.0",
"size_categories:1M<n<10M",
"modality:image",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2210.14896",
"region:us",
"stable diffusion",
"prompt engineering",
"prompts"
] | [
"text-to-image",
"image-to-text"
] | 2023-05-11T17:28:21Z | 1 | ---
layout: default
title: Home
nav_order: 1
has_children: false
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license:
- cc0-1.0
multilinguality:
- multilingual
pretty_name: DiffusionDB-Pixelart
size_categories:
- n>1T
source_datasets:
- modified
tags:
- stable diffusion
- prompt engineering
- prompts
task_categories:
- text-to-image
- image-to-text
task_ids:
- image-captioning
---
# DiffusionDB-Pixelart
## Table of Contents
- [DiffusionDB](#diffusiondb)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Subset](#subset)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Metadata](#dataset-metadata)
- [Metadata Schema](#metadata-schema)
- [Data Splits](#data-splits)
- [Loading Data Subsets](#loading-data-subsets)
- [Method 1: Using Hugging Face Datasets Loader](#method-1-using-hugging-face-datasets-loader)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [DiffusionDB homepage](https://poloclub.github.io/diffusiondb)
- **Repository:** [DiffusionDB repository](https://github.com/poloclub/diffusiondb)
- **Distribution:** [DiffusionDB Hugging Face Dataset](https://huggingface.co/datasets/poloclub/diffusiondb)
- **Paper:** [DiffusionDB: A Large-scale Prompt Gallery Dataset for Text-to-Image Generative Models](https://arxiv.org/abs/2210.14896)
### Dataset Summary
**This is a subset of the DiffusionDB 2M dataset which has been turned into pixel-style art.**
DiffusionDB is the first large-scale text-to-image prompt dataset. It contains **14 million** images generated by Stable Diffusion using prompts and hyperparameters specified by real users.
DiffusionDB is publicly available at [🤗 Hugging Face Dataset](https://huggingface.co/datasets/poloclub/diffusiondb).
### Supported Tasks and Leaderboards
The unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models.
### Languages
The text in the dataset is mostly English. It also contains other languages such as Spanish, Chinese, and Russian.
### Subset
DiffusionDB provides two subsets (DiffusionDB 2M and DiffusionDB Large) to support different needs. The pixel-art version of the data was derived from DiffusionDB 2M and contains only 2,000 examples.
|Subset|Num of Images|Num of Unique Prompts|Size|Image Directory|Metadata Table|
|:--|--:|--:|--:|--:|--:|
|DiffusionDB-pixelart|2k|~1.5k|~1.6GB|`images/`|`metadata.parquet`|
Images in DiffusionDB-pixelart are stored in `png` format.
## Dataset Structure
We use a modularized file structure to distribute DiffusionDB. The 2k images in DiffusionDB-pixelart are split into folders, where each folder contains 1,000 images and a JSON file that links these 1,000 images to their prompts and hyperparameters.
```bash
# DiffusionDB 2k
./
├── images
│ ├── part-000001
│ │ ├── 3bfcd9cf-26ea-4303-bbe1-b095853f5360.png
│ │ ├── 5f47c66c-51d4-4f2c-a872-a68518f44adb.png
│ │ ├── 66b428b9-55dc-4907-b116-55aaa887de30.png
│ │ ├── [...]
│ │ └── part-000001.json
│ ├── part-000002
│ ├── part-000003
│ ├── [...]
│ └── part-002000
└── metadata.parquet
```
These sub-folders have names `part-0xxxxx`, and each image has a unique name generated by [UUID Version 4](https://en.wikipedia.org/wiki/Universally_unique_identifier). The JSON file in a sub-folder has the same name as the sub-folder. Each image is a `PNG` file (DiffusionDB-pixelart). The JSON file contains key-value pairs mapping image filenames to their prompts and hyperparameters.
### Data Instances
For example, below is the image of `ec9b5e2c-028e-48ac-8857-a52814fd2a06.png` and its key-value pair in `part-000001.json`.
<img width="300" src="https://datasets-server.huggingface.co/assets/jainr3/diffusiondb-pixelart/--/2k_all/train/0/image/image.png">
```json
{
"ec9b5e2c-028e-48ac-8857-a52814fd2a06.png": {
"p": "doom eternal, game concept art, veins and worms, muscular, crustacean exoskeleton, chiroptera head, chiroptera ears, mecha, ferocious, fierce, hyperrealism, fine details, artstation, cgsociety, zbrush, no background ",
"se": 3312523387,
"c": 7.0,
"st": 50,
"sa": "k_euler"
},
}
```
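A minimal sketch for reading such a per-part JSON file locally, assuming the `images/` directory has been downloaded; the path below is a placeholder.
```python
import json
from pathlib import Path

# Hedged sketch: point this at any downloaded part folder.
part_dir = Path("images/part-000001")
with open(part_dir / "part-000001.json") as f:
    prompts = json.load(f)

name, meta = next(iter(prompts.items()))
print(name)       # image file name, e.g. "ec9b5e2c-028e-48ac-8857-a52814fd2a06.png"
print(meta["p"])  # the text prompt used to generate that image
```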
### Data Fields
- key: Unique image name
- `p`: Text
### Dataset Metadata
To help you easily access prompts and other attributes of images without downloading all the Zip files, we include a metadata table `metadata.parquet` for DiffusionDB-pixelart.
Each row of the table represents an image. We store the table in the Parquet format because Parquet is column-based: you can efficiently query individual columns (e.g., prompts) without reading the entire table.
Below are three random rows from `metadata.parquet`.
| image_name | prompt | part_id | seed | step | cfg | sampler | width | height | user_name | timestamp | image_nsfw | prompt_nsfw |
|:-----------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------:|-----------:|-------:|------:|----------:|--------:|---------:|:-----------------------------------------------------------------|:--------------------------|-------------:|--------------:|
| 0c46f719-1679-4c64-9ba9-f181e0eae811.png | a small liquid sculpture, corvette, viscous, reflective, digital art | 1050 | 2026845913 | 50 | 7 | 8 | 512 | 512 | c2f288a2ba9df65c38386ffaaf7749106fed29311835b63d578405db9dbcafdb | 2022-08-11 09:05:00+00:00 | 0.0845108 | 0.00383462 |
| a00bdeaa-14eb-4f6c-a303-97732177eae9.png | human sculpture of lanky tall alien on a romantic date at italian restaurant with smiling woman, nice restaurant, photography, bokeh | 905 | 1183522603 | 50 | 10 | 8 | 512 | 768 | df778e253e6d32168eb22279a9776b3cde107cc82da05517dd6d114724918651 | 2022-08-19 17:55:00+00:00 | 0.692934 | 0.109437 |
| 6e5024ce-65ed-47f3-b296-edb2813e3c5b.png | portrait of barbaric spanish conquistador, symmetrical, by yoichi hatakenaka, studio ghibli and dan mumford | 286 | 1713292358 | 50 | 7 | 8 | 512 | 640 | 1c2e93cfb1430adbd956be9c690705fe295cbee7d9ac12de1953ce5e76d89906 | 2022-08-12 03:26:00+00:00 | 0.0773138 | 0.0249675 |
#### Metadata Schema
`metadata.parquet` schema:
|Column|Type|Description|
|:---|:---|:---|
|`image_name`|`string`|Image UUID filename.|
|`text`|`string`|The text prompt used to generate this image.|
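A minimal sketch for querying the metadata table without touching any images; since the schema above and the example rows use slightly different column names, the sketch prints the actual columns first.
```python
import pandas as pd

# Hedged sketch: reads only the local metadata.parquet file.
meta = pd.read_parquet("metadata.parquet")
print(meta.columns.tolist())  # confirm the actual column names
print(meta.head())
```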
> **Warning**
> Although the Stable Diffusion model has an NSFW filter that automatically blurs user-generated NSFW images, this NSFW filter is not perfect—DiffusionDB still contains some NSFW images. Therefore, we compute and provide the NSFW scores for images and prompts using the state-of-the-art models. The distribution of these scores is shown below. Please decide an appropriate NSFW score threshold to filter out NSFW images before using DiffusionDB in your projects.
<img src="https://i.imgur.com/1RiGAXL.png" width="100%">
### Data Splits
For DiffusionDB-pixelart, we split 2k images into folders where each folder contains 1,000 images and a JSON file.
### Loading Data Subsets
DiffusionDB is large! However, with our modularized file structure, you can easily load a desirable number of images and their prompts and hyperparameters. In the [`example-loading.ipynb`](https://github.com/poloclub/diffusiondb/blob/main/notebooks/example-loading.ipynb) notebook, we demonstrate three methods to load a subset of DiffusionDB. Below is a short summary.
#### Method 1: Using Hugging Face Datasets Loader
You can use the Hugging Face [`Datasets`](https://huggingface.co/docs/datasets/quickstart) library to easily load prompts and images from DiffusionDB. We pre-defined 16 DiffusionDB subsets (configurations) based on the number of instances. You can see all subsets in the [Dataset Preview](https://huggingface.co/datasets/poloclub/diffusiondb/viewer/all/train).
```python
import numpy as np
from datasets import load_dataset
# Load the dataset with the `2k_random_1k` subset
dataset = load_dataset('jainr3/diffusiondb-pixelart', '2k_random_1k')
```
## Dataset Creation
### Curation Rationale
Recent diffusion models have gained immense popularity by enabling high-quality and controllable image generation based on text prompts written in natural language. Since the release of these models, people from different domains have quickly applied them to create award-winning artworks, synthetic radiology images, and even hyper-realistic videos.
However, generating images with desired details is difficult, as it requires users to write proper prompts specifying the exact expected results. Developing such prompts requires trial and error, and can often feel random and unprincipled. Simon Willison analogizes writing prompts to wizards learning “magical spells”: users do not understand why some prompts work, but they will add these prompts to their “spell book.” For example, to generate highly-detailed images, it has become a common practice to add special keywords such as “trending on artstation” and “unreal engine” in the prompt.
Prompt engineering has become a field of study in the context of text-to-text generation, where researchers systematically investigate how to construct prompts to effectively solve different down-stream tasks. As large text-to-image models are relatively new, there is a pressing need to understand how these models react to prompts, how to write effective prompts, and how to design tools to help users generate images.
To help researchers tackle these critical challenges, we create DiffusionDB, the first large-scale prompt dataset with 14 million real prompt-image pairs.
### Source Data
#### Initial Data Collection and Normalization
We construct DiffusionDB by scraping user-generated images on the official Stable Diffusion Discord server. We choose Stable Diffusion because it is currently the only open-source large text-to-image generative model, and all generated images have a CC0 1.0 Universal Public Domain Dedication license that waives all copyright and allows uses for any purpose. We choose the official [Stable Diffusion Discord server](https://discord.gg/stablediffusion) because it is public, and it has strict rules against generating and sharing illegal, hateful, or NSFW (not suitable for work, such as sexual and violent content) images. The server also disallows users to write or share prompts with personal information.
#### Who are the source language producers?
The language producers are users of the official [Stable Diffusion Discord server](https://discord.gg/stablediffusion).
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
The authors removed the discord usernames from the dataset.
We decide to anonymize the dataset because some prompts might include sensitive information: explicitly linking them to their creators can cause harm to creators.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop better understanding of large text-to-image generative models.
The unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models.
Note that we collect images and their prompts from the Stable Diffusion Discord server. The Discord server has rules against users generating or sharing harmful or NSFW (not suitable for work, such as sexual and violent content) images. The Stable Diffusion model used in the server also has an NSFW filter that blurs the generated images if it detects NSFW content. However, it is still possible that some users generated harmful images that were not detected by the NSFW filter or removed by the server moderators. Therefore, DiffusionDB can potentially contain these images. To mitigate the potential harm, we provide a [Google Form](https://forms.gle/GbYaSpRNYqxCafMZ9) on the [DiffusionDB website](https://poloclub.github.io/diffusiondb/) where users can report harmful or inappropriate images and prompts. We will closely monitor this form and remove reported images and prompts from DiffusionDB.
### Discussion of Biases
The 14 million images in DiffusionDB have diverse styles and categories. However, Discord can be a biased data source. Our images come from channels where early users could use a bot to use Stable Diffusion before release. As these users had started using Stable Diffusion before the model was public, we hypothesize that they are AI art enthusiasts and are likely to have experience with other text-to-image generative models. Therefore, the prompting style in DiffusionDB might not represent novice users. Similarly, the prompts in DiffusionDB might not generalize to domains that require specific knowledge, such as medical images.
### Other Known Limitations
**Generalizability.** Previous research has shown a prompt that works well on one generative model might not give the optimal result when used in other models.
Therefore, different models can need users to write different prompts. For example, many Stable Diffusion prompts use commas to separate keywords, while this pattern is less seen in prompts for DALL-E 2 or Midjourney. Thus, we caution researchers that some research findings from DiffusionDB might not be generalizable to other text-to-image generative models.
## Additional Information
### Dataset Curators
DiffusionDB is created by [Jay Wang](https://zijie.wang), [Evan Montoya](https://www.linkedin.com/in/evan-montoya-b252391b4/), [David Munechika](https://www.linkedin.com/in/dmunechika/), [Alex Yang](https://alexanderyang.me), [Ben Hoover](https://www.bhoov.com), [Polo Chau](https://faculty.cc.gatech.edu/~dchau/).
### Licensing Information
The DiffusionDB dataset is available under the [CC0 1.0 License](https://creativecommons.org/publicdomain/zero/1.0/).
The Python code in this repository is available under the [MIT License](https://github.com/poloclub/diffusiondb/blob/main/LICENSE).
### Citation Information
```bibtex
@article{wangDiffusionDBLargescalePrompt2022,
title = {{{DiffusionDB}}: {{A}} Large-Scale Prompt Gallery Dataset for Text-to-Image Generative Models},
author = {Wang, Zijie J. and Montoya, Evan and Munechika, David and Yang, Haoyang and Hoover, Benjamin and Chau, Duen Horng},
year = {2022},
journal = {arXiv:2210.14896 [cs]},
url = {https://arxiv.org/abs/2210.14896}
}
```
### Contributions
If you have any questions, feel free to [open an issue](https://github.com/poloclub/diffusiondb/issues/new) or contact the original author [Jay Wang](https://zijie.wang). |
shibing624/alpaca-zh | shibing624 | 2023-05-10T06:09:06Z | 538 | 122 | [
"task_categories:text-generation",
"language:zh",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2304.03277",
"region:us",
"gpt",
"alpaca",
"fine-tune",
"instruct-tune",
"instruction"
] | [
"text-generation"
] | 2023-03-25T11:37:25Z | null | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 32150579
num_examples: 48818
download_size: 35100559
dataset_size: 32150579
license: cc-by-4.0
language:
- zh
pretty_name: Instruction Tuning with GPT-4
size_categories:
- 10K<n<100K
task_categories:
- text-generation
tags:
- gpt
- alpaca
- fine-tune
- instruct-tune
- instruction
---
# Dataset Description
- **Project Page:** https://instruction-tuning-with-gpt-4.github.io
- **Repo:** https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM
- **Paper:** https://arxiv.org/abs/2304.03277
# Dataset Card for "alpaca-zh"
This dataset contains about 50,000 Chinese self-instruct examples generated with GPT-4, following the Alpaca approach.
Dataset from https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM
It is the Chinese subset, taken from https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM/blob/main/data/alpaca_gpt4_data_zh.json
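A minimal loading sketch using the fields declared in the YAML header:
```python
from datasets import load_dataset

# Inspect one instruction / input / output triple.
ds = load_dataset("shibing624/alpaca-zh", split="train")
row = ds[0]
print(row["instruction"])
print(row["input"])
print(row["output"])
```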
# Usage and License Notices
The data is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes.
To train a model with the alpaca-zh dataset, see: https://github.com/shibing624/textgen
# English Dataset
[Found here](https://huggingface.co/datasets/c-s-ale/alpaca-gpt4-data)
# Citation
```
@article{peng2023gpt4llm,
title={Instruction Tuning with GPT-4},
author={Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, Jianfeng Gao},
journal={arXiv preprint arXiv:2304.03277},
year={2023}
}
``` |
imvladikon/hebrew_speech_kan | imvladikon | 2023-05-05T09:12:15Z | 224 | 9 | [
"task_categories:automatic-speech-recognition",
"language:he",
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"automatic-speech-recognition"
] | 2022-03-02T23:29:22Z | 1 | ---
task_categories:
- automatic-speech-recognition
language:
- he
size_categories:
- 1K<n<10K
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1569850175.0
num_examples: 8000
- name: validation
num_bytes: 394275049.0
num_examples: 2000
download_size: 1989406585
dataset_size: 1964125224.0
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Hebrew Dataset for ASR
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
```json
{'audio': {'path': '/root/.cache/huggingface/datasets/downloads/extracted/8ce7402f6482c6053251d7f3000eec88668c994beb48b7ca7352e77ef810a0b6/train/e429593fede945c185897e378a5839f4198.wav',
'array': array([-0.00265503, -0.0018158 , -0.00149536, ..., -0.00135803,
-0.00231934, -0.00190735]),
'sampling_rate': 16000},
'sentence': 'היא מבינה אותי יותר מכל אחד אחר'}
```
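A minimal sketch of loading a sample and inspecting the decoded audio (field names as in the instance above):

```python
from datasets import load_dataset

ds = load_dataset("imvladikon/hebrew_speech_kan", split="validation")

sample = ds[0]
waveform = sample["audio"]["array"]               # decoded float waveform
sampling_rate = sample["audio"]["sampling_rate"]  # 16000
print(sample["sentence"], waveform.shape, sampling_rate)
```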
### Data Fields
[More Information Needed]
### Data Splits
| | train | validation |
| ---- | ----- | ---------- |
| number of samples | 8000 | 2000 |
| hours | 6.92 | 1.73 |
## Dataset Creation
### Curation Rationale
Data scraped from YouTube (the כאן channel), with outliers removed (filtered by length and by the ratio between the audio length and the sentence length)
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{imvladikon2022hebrew_speech_kan,
author = {Gurevich, Vladimir},
title = {Hebrew Speech Recognition Dataset: Kan},
year = {2022},
  howpublished = {\url{https://huggingface.co/datasets/imvladikon/hebrew_speech_kan}},
}
```
### Contributions
[More Information Needed] |
mehdie/sefaria | mehdie | 2023-05-01T08:39:56Z | 903 | 3 | [
"language:he",
"language:en",
"license:cc-by-4.0",
"region:us",
"History",
"Rabbinic"
] | [] | 2023-03-31T12:08:29Z | 1 | ---
license: cc-by-4.0
language:
- he
- en
tags:
- History
- Rabbinic
pretty_name: Sefaria HF Dataset
---
This dataset is a Hugging Face interface to the [Sefaria database export](https://github.com/Sefaria/Sefaria-Export).
Sefaria is a large collection of early Jewish texts, mostly in ancient Hebrew, with some in
Aramaic and some translated into English.
|
fujiki/wiki40b_ja | fujiki | 2023-04-28T23:35:57Z | 672 | 4 | [
"language:ja",
"license:cc-by-sa-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-04-28T23:14:50Z | 1 | ---
license: cc-by-sa-4.0
language:
- ja
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1954209746
num_examples: 745392
- name: validation
num_bytes: 107186201
num_examples: 41576
- name: test
num_bytes: 107509760
num_examples: 41268
download_size: 420085060
dataset_size: 2168905707
---
This dataset is a reformatted version of the Japanese portion of the [wiki40b](https://aclanthology.org/2020.lrec-1.297/) dataset.
If you use this dataset, please cite the original paper:
```
@inproceedings{guo-etal-2020-wiki,
title = "{W}iki-40{B}: Multilingual Language Model Dataset",
author = "Guo, Mandy and
Dai, Zihang and
Vrande{\v{c}}i{\'c}, Denny and
Al-Rfou, Rami",
booktitle = "Proceedings of the Twelfth Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.297",
pages = "2440--2452",
abstract = "We propose a new multilingual language model benchmark that is composed of 40+ languages spanning several scripts and linguistic families. With around 40 billion characters, we hope this new resource will accelerate the research of multilingual modeling. We train monolingual causal language models using a state-of-the-art model (Transformer-XL) establishing baselines for many languages. We also introduce the task of multilingual causal language modeling where we train our model on the combined text of 40+ languages from Wikipedia with different vocabulary sizes and evaluate on the languages individually. We released the cleaned-up text of 40+ Wikipedia language editions, the corresponding trained monolingual language models, and several multilingual language models with different fixed vocabulary sizes.",
language = "English",
ISBN = "979-10-95546-34-4",
}
```
|
jkot/parliament_hearings_processed | jkot | 2023-04-25T08:53:38Z | 20,640 | 1 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-04-21T10:06:00Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 51234859011.0
num_examples: 191455
- name: test
num_bytes: 762989296.0
num_examples: 2726
download_size: 51507735963
dataset_size: 51997848307.0
---
# Parliament hearings ASR dataset, preprocessed into truecased form.
## Original dataset: https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3126 |
latentcat/animesfw | latentcat | 2023-04-24T14:10:44Z | 12,931 | 23 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-04-19T15:24:32Z | 3 | ---
dataset_info:
features:
- name: image
dtype: image
- name: tags
dtype: string
splits:
- name: train
num_bytes: 968422627084.875
num_examples: 3969879
download_size: 4471804726
dataset_size: 968422627084.875
---
# Dataset Card for "animesfw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nomic-ai/gpt4all_prompt_generations | nomic-ai | 2023-04-13T21:42:15Z | 175 | 129 | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation"
] | 2023-03-27T23:08:01Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 782175193
num_examples: 437604
download_size: 397878357
dataset_size: 782175193
license: apache-2.0
task_categories:
- text-generation
language:
- en
size_categories:
- 100K<n<1M
---
# Dataset Card for GPT4All Prompt Generations
## Dataset Description
Dataset used to train [GPT4All](https://huggingface.co/nomic-ai/gpt4all-lora)
- **Homepage:**
- **Repository:** [gpt4all](https://github.com/nomic-ai/gpt4all)
- **Paper:** [Technical Report](https://s3.amazonaws.com/static.nomic.ai/gpt4all/2023_GPT4All_Technical_Report.pdf)
- **Atlas Map:** [Map of Cleaned Data](https://atlas.nomic.ai/map/gpt4all_data_clean)
|
philschmid/sharegpt-raw | philschmid | 2023-04-04T08:52:59Z | 91 | 87 | [
"license:other",
"modality:text",
"region:us"
] | [] | 2023-04-04T08:52:59Z | null | ---
license: other
duplicated_from: jeffwan/sharegpt_vicuna
---
## Preparation
```
pip3 install -r requirements.txt
```
## Data Cleaning
1. Merge the two raw JSON files and beautify the merged file
```
python merge.py sharegpt_90k_raw_dataset/sg_90k_part1.json sharegpt_90k_raw_dataset/sg_90k_part2.json sharegpt_20230401_html_unformatted.json
python pretty_json.py --in sharegpt_20230401_html_unformatted.json --out sharegpt_20230401_html.json
```
2. (Optional) Verify the json file
```
if jq empty sharegpt_20230401_html.json 2>/dev/null; then
echo "JSON is valid"
else
echo "JSON is invalid"
fi
jq length sharegpt_90k_raw_dataset/sg_90k_part1.json
jq length sharegpt_90k_raw_dataset/sg_90k_part2.json
jq length sharegpt_20230401_html.json
```
3. Clean the data: remove HTML tags, etc.
```
python3 clean_sharegpt.py --in sharegpt_20230401_html.json --out sharegpt_20230401_clean.json
....
100%|███████████████████████████████████████████████████████████████████| 90665/90665 [06:32<00:00, 230.98it/s]
total: 90665, skip: 13745, new: 76920
```
4. Filter dataset by language
```
python3 optional_clean.py --in sharegpt_20230401_clean.json --out sharegpt_20230401_clean_lang_zh.json --lang zh
....
return 6240 out of 76920, start dump ...
python3 optional_clean.py --in sharegpt_20230401_clean.json --out sharegpt_20230401_clean_lang_en.json --lang en
...
return 55413 out of 76920, start dump ...
```
> Note: the code itself doesn't support a language list; I didn't change the code to add that. You can change the code to support more languages. Instead, I just filtered the two languages I need and merged `sharegpt_20230401_clean_lang_zh.json` and `sharegpt_20230401_clean_lang_en.json` into `sharegpt_20230401_clean_lang.json`.
5. Split the long conversation
```
python3 split_long_conversation.py --in sharegpt_20230401_clean_lang.json --out sharegpt_20230401_clean_lang_split.json --model-name /home/ubuntu/llama-13b-hf/
...
total: 61653, new: 126032
```
OK, now we have the cleaned dataset `sharegpt_20230401_clean_lang_split.json`, which should be used for fine-tuning.
|
BelleGroup/train_1M_CN | BelleGroup | 2023-04-03T08:23:17Z | 336 | 149 | [
"task_categories:text2text-generation",
"language:zh",
"license:gpl-3.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text2text-generation"
] | 2023-03-31T08:53:50Z | null | ---
license: gpl-3.0
task_categories:
- text2text-generation
language:
- zh
size_categories:
- 100K<n<1M
---
## Contents
Contains roughly 1 million Chinese instruction examples generated by the [BELLE](https://github.com/LianjiaTech/BELLE) project.
## Example
```
{
"instruction": "给定一个文字输入,将其中的所有数字加1。\n“明天的会议在9点开始,记得准时到达。”\n",
"input": "",
"output": "“明天的会议在10点开始,记得准时到达。”"
}
```
### Fields:
```
instruction: the instruction
input: the input (empty for every example in this dataset)
output: the output
```
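A minimal sketch of iterating over the data; streaming mode is used here only to avoid downloading the full ~1M examples up front, and a regular `load_dataset` call works just as well:

```python
from datasets import load_dataset

# Stream the examples instead of materializing the full dataset locally
ds = load_dataset("BelleGroup/train_1M_CN", split="train", streaming=True)

for i, example in enumerate(ds):
    print(example["instruction"], "->", example["output"])
    if i >= 2:
        break
```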
## Usage restrictions
This dataset and any derivatives created from it may be used for research purposes only; commercial use and any other use that could harm society are not permitted.
This dataset does not represent the position, interests, or views of any party and is unrelated to claims of any kind by any group. The project assumes no liability for any damage or dispute arising from the use of this dataset.
|
BelleGroup/train_0.5M_CN | BelleGroup | 2023-04-03T08:11:22Z | 477 | 108 | [
"task_categories:text2text-generation",
"language:zh",
"license:gpl-3.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text2text-generation"
] | 2023-03-31T10:17:49Z | null | ---
license: gpl-3.0
task_categories:
- text2text-generation
language:
- zh
size_categories:
- 100K<n<1M
---
## Contents
Contains roughly 500,000 Chinese instruction examples generated by the [BELLE](https://github.com/LianjiaTech/BELLE) project.
## Example
```
{
"instruction": "给定一个文字输入,将其中的所有数字加1。\n“明天的会议在9点开始,记得准时到达。”\n",
"input": "",
"output": "“明天的会议在10点开始,记得准时到达。”"
}
```
### Fields:
```
instruction: the instruction
input: the input (empty for every example in this dataset)
output: the output
```
## Usage restrictions
This dataset and any derivatives created from it may be used for research purposes only; commercial use and any other use that could harm society are not permitted.
This dataset does not represent the position, interests, or views of any party and is unrelated to claims of any kind by any group. The project assumes no liability for any damage or dispute arising from the use of this dataset.
|
intfloat/query2doc_msmarco | intfloat | 2023-03-30T02:44:59Z | 259 | 16 | [
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2303.07678",
"region:us"
] | [] | 2023-03-10T10:28:59Z | 1 | ---
license: cc-by-4.0
language:
- en
size_categories:
- 100K<n<1M
---
### Dataset Summary
This dataset contains GPT-3.5 (`text-davinci-003`) generations from MS-MARCO queries.
[Query2doc: Query Expansion with Large Language Models](https://arxiv.org/pdf/2303.07678.pdf) Liang Wang, Nan Yang and Furu Wei
### Data Instances
An example looks as follows.
```
{
"query_id": "1030303",
"query": "who is aziz hashim",
"pseudo_doc": "Aziz Hashim is a renowned entrepreneur, business leader, and one of the most successful restaurant franchise operators in the US. He is the founder of NRD Capital, a private equity firm focused on investments in multi-unit restaurant franchised businesses. Hashim has built a formidable track record of success in the franchise industry, with brands such as Outback Steakhouse and Jamba Juice. His accomplishments and philanthropic initiatives have earned him numerous awards, including the prestigious Ernst and Young Entrepreneur of the Year award."
}
```
### Data Fields
- `query_id`: a `string` feature.
- `query`: a `string` feature.
- `pseudo_doc`: a `string` feature.
### Data Splits
| train | dev | test | trec_dl2019 | trec_dl2020 |
|--------|------:|------:|------:|------:|
| 502939 | 6980 | 6837 | 43 | 54 |
### How to use this dataset
```python
from datasets import load_dataset
dataset = load_dataset('intfloat/query2doc_msmarco')
print(dataset['trec_dl2019'][0])
```
### Reproducing our results
We provide a python script [repro_bm25.py](https://huggingface.co/datasets/intfloat/query2doc_msmarco/blob/main/repro_bm25.py) to reproduce our results with BM25 retrieval.
First install some python dependency packages:
```
pip install pyserini==0.15.0 pytrec_eval datasets tqdm
```
Then download and run the python code:
```
python repro_bm25.py
```
This script utilizes the pre-built Lucene index from [Pyserini](https://github.com/castorini/pyserini/blob/pyserini-0.15.0/docs/prebuilt-indexes.md)
and might yield slightly different results compared to the paper.
### Citation Information
```
@article{wang2023query2doc,
title={Query2doc: Query Expansion with Large Language Models},
author={Wang, Liang and Yang, Nan and Wei, Furu},
journal={arXiv preprint arXiv:2303.07678},
year={2023}
}
```
|
gigant/tib_slides_wip | gigant | 2023-03-26T16:22:49Z | 20,389 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-03-26T00:20:40Z | null | ---
dataset_info:
features:
- name: Image
dtype: image
- name: file_name
dtype: string
splits:
- name: train
num_bytes: 161850916866.84
num_examples: 595458
download_size: 29396407498
dataset_size: 161850916866.84
---
# Dataset Card for "tib_slides_wip"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HuggingFaceGECLM/REDDIT_comments | HuggingFaceGECLM | 2023-03-17T07:52:51Z | 40,739 | 12 | [
"task_categories:text-generation",
"task_ids:dialogue-modeling",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"language:en",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2001.08435",
"region:us",
"reddit",
"social-media"
] | [
"text-generation"
] | 2023-03-15T14:14:58Z | null | ---
dataset_info:
features:
- name: archived
dtype: string
- name: author
dtype: string
- name: author_fullname
dtype: string
- name: body
dtype: string
- name: comment_type
dtype: string
- name: controversiality
dtype: string
- name: created_utc
dtype: string
- name: edited
dtype: string
- name: gilded
dtype: string
- name: id
dtype: string
- name: link_id
dtype: string
- name: locked
dtype: string
- name: name
dtype: string
- name: parent_id
dtype: string
- name: permalink
dtype: string
- name: retrieved_on
dtype: string
- name: score
dtype: string
- name: subreddit_id
dtype: string
- name: subreddit_name_prefixed
dtype: string
- name: subreddit_type
dtype: string
- name: total_awards_received
dtype: string
splits:
- name: programming
num_bytes: 3466623746
num_examples: 7503347
- name: tifu
num_bytes: 4761338653
num_examples: 12738669
- name: explainlikeimfive
num_bytes: 8451732573
num_examples: 16392814
- name: WritingPrompts
num_bytes: 4651591771
num_examples: 4436210
- name: changemyview
num_bytes: 8603031915
num_examples: 11600073
- name: LifeProTips
num_bytes: 5272994396
num_examples: 12829459
- name: todayilearned
num_bytes: 22655655241
num_examples: 60199778
- name: science
num_bytes: 7069809765
num_examples: 18112884
- name: askscience
num_bytes: 3144754665
num_examples: 6286702
- name: ifyoulikeblank
num_bytes: 547200329
num_examples: 1332211
- name: Foodforthought
num_bytes: 308377128
num_examples: 567900
- name: IWantToLearn
num_bytes: 408331672
num_examples: 745543
- name: bestof
num_bytes: 2003718831
num_examples: 4347522
- name: IAmA
num_bytes: 9380094090
num_examples: 25778822
- name: socialskills
num_bytes: 1000014402
num_examples: 1842733
- name: relationship_advice
num_bytes: 22298879735
num_examples: 38937398
- name: philosophy
num_bytes: 1494947876
num_examples: 2391695
- name: YouShouldKnow
num_bytes: 1165617658
num_examples: 2639265
- name: history
num_bytes: 1457852402
num_examples: 2962043
- name: books
num_bytes: 4562689426
num_examples: 10187495
- name: Showerthoughts
num_bytes: 13259109532
num_examples: 34123213
- name: personalfinance
num_bytes: 9484869588
num_examples: 18361314
- name: buildapc
num_bytes: 9801044390
num_examples: 21761801
- name: EatCheapAndHealthy
num_bytes: 853462012
num_examples: 1821897
- name: boardgames
num_bytes: 3131627378
num_examples: 6328926
- name: malefashionadvice
num_bytes: 2928017882
num_examples: 7712258
- name: femalefashionadvice
num_bytes: 1619784736
num_examples: 3262969
- name: scifi
num_bytes: 888152056
num_examples: 2193741
- name: Fantasy
num_bytes: 2285934538
num_examples: 4566639
- name: Games
num_bytes: 10396813188
num_examples: 23373965
- name: bodyweightfitness
num_bytes: 794549854
num_examples: 1613634
- name: SkincareAddiction
num_bytes: 3421122597
num_examples: 5660550
- name: podcasts
num_bytes: 464773126
num_examples: 943266
- name: suggestmeabook
num_bytes: 1842944304
num_examples: 3492937
- name: AskHistorians
num_bytes: 2244587909
num_examples: 2714353
- name: gaming
num_bytes: 28374513722
num_examples: 85729253
- name: DIY
num_bytes: 2113533684
num_examples: 4489265
- name: sports
num_bytes: 2230129132
num_examples: 6470079
- name: space
num_bytes: 3081499208
num_examples: 7896182
- name: gadgets
num_bytes: 1683252868
num_examples: 4104833
- name: Documentaries
num_bytes: 1852644771
num_examples: 4051474
- name: GetMotivated
num_bytes: 1211761267
num_examples: 3221980
- name: UpliftingNews
num_bytes: 2003149025
num_examples: 4741948
- name: technology
num_bytes: 10826871436
num_examples: 25404699
- name: Fitness
num_bytes: 6191132755
num_examples: 14319856
- name: travel
num_bytes: 1740556350
num_examples: 3806755
- name: lifehacks
num_bytes: 626791812
num_examples: 1799437
- name: Damnthatsinteresting
num_bytes: 6376694618
num_examples: 15643554
- name: gardening
num_bytes: 1825313940
num_examples: 4568468
- name: mildlyinteresting
num_bytes: 9079894206
num_examples: 26436769
download_size: 109177016105
dataset_size: 255339788158
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: Reddit comments
size_categories:
- 10B<n<100B
source_datasets: []
tags:
- reddit
- social-media
task_categories:
- text-generation
task_ids:
- dialogue-modeling
- language-modeling
---
# Dataset Card for "REDDIT_comments"
## Dataset Description
- **Homepage:**
- **Paper: https://arxiv.org/abs/2001.08435**
### Dataset Summary
Comments of 50 high-quality subreddits, extracted from the REDDIT PushShift data dumps (from 2006 to Jan 2023).
### Supported Tasks
These comments can be used for text generation and language modeling, as well as dialogue modeling.
## Dataset Structure
### Data Splits
Each split corresponds to a specific subreddit in the following list: "tifu", "explainlikeimfive", "WritingPrompts", "changemyview", "LifeProTips", "todayilearned", "science", "askscience", "ifyoulikeblank", "Foodforthought", "IWantToLearn", "bestof", "IAmA", "socialskills", "relationship_advice", "philosophy", "YouShouldKnow", "history", "books", "Showerthoughts", "personalfinance", "buildapc", "EatCheapAndHealthy", "boardgames", "malefashionadvice", "femalefashionadvice", "scifi", "Fantasy", "Games", "bodyweightfitness", "SkincareAddiction", "podcasts", "suggestmeabook", "AskHistorians", "gaming", "DIY", "mildlyinteresting", "sports", "space", "gadgets", "Documentaries", "GetMotivated", "UpliftingNews", "technology", "Fitness", "travel", "lifehacks", "Damnthatsinteresting", "gardening", "programming"
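A minimal sketch of loading one subreddit split and dropping the username columns, in line with the anonymization note in the Considerations section below (split and column names are those listed in this card):

```python
from datasets import load_dataset

# Each subreddit is exposed as its own split
ds = load_dataset("HuggingFaceGECLM/REDDIT_comments", split="askscience")

# Drop author-identifying columns before any further processing
ds = ds.remove_columns(["author", "author_fullname"])
print(ds[0]["body"])
```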
## Dataset Creation
### Curation Rationale
All the information fields have been cast to string, as their format changes over time from one dump to the next. A reduced number of keys has been kept: "archived", "author", "author_fullname", "body", "comment_type", "controversiality", "created_utc", "edited", "gilded", "id", "link_id", "locked", "name", "parent_id", "permalink", "retrieved_on", "score", "subreddit", "subreddit_id", "subreddit_name_prefixed", "subreddit_type", "total_awards_received".
### Source Data
The [Reddit PushShift data dumps](https://files.pushshift.io/reddit/) are part of a data collection effort which crawls Reddit at regular intervals, to extract and keep all its data.
#### Initial Data Collection and Normalization
See the paper.
#### Who are the source language producers?
Redditors are mostly young (65% below 30), male (70%), and American (50% of the site).
### Personal and Sensitive Information
The data contains Redditor's usernames associated to their content.
## Considerations for Using the Data
This dataset should be anonymized before any processing.
Though the selected subreddits are considered to be of higher quality, they can still reflect the kinds of biased and toxic expression found elsewhere on the internet.
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset. |
Deysi/spanish-chinese | Deysi | 2023-03-11T18:08:09Z | 175 | 12 | [
"task_categories:translation",
"language:es",
"language:zh",
"license:apache-2.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"language",
"translation",
"traducción",
"idiomas",
"chino",
"chinese",
"español",
"spanish",
"Universidad de La Rioja"
] | [
"translation"
] | 2023-03-11T16:22:23Z | 1 | ---
dataset_info:
features:
- name: spanish
dtype: string
- name: chinese
dtype: string
splits:
- name: train
num_bytes: 3048111118.5537825
num_examples: 9092567
- name: test
num_bytes: 762027863.4462174
num_examples: 2273142
download_size: 2473454462
dataset_size: 3810138982
license: apache-2.0
task_categories:
- translation
language:
- es
- zh
tags:
- language
- translation
- traducción
- idiomas
- chino
- chinese
- español
- spanish
- Universidad de La Rioja
pretty_name: Spanish and Chinese aligned sentences
size_categories:
- 10M<n<100M
---
# Dataset Card for "spanish-chinese"
All sentences were extracted from the United Nations Parallel Corpus v1.0.
The parallel corpus consists of manually translated United Nations documents for the six
official UN languages, Arabic, Chinese, English, French, Russian, and Spanish.
The corpus is freely available for download at https://conferences.unite.un.org/UNCorpus
under the terms of use outlined in the attached DISCLAIMER.
The original individual documents are available at the United Nations Official Document
System (ODS) at http://ods.un.org.
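A minimal sketch of loading the aligned pairs (the `spanish` and `chinese` fields listed in the metadata above):

```python
from datasets import load_dataset

pairs = load_dataset("Deysi/spanish-chinese", split="test")

example = pairs[0]
print(example["spanish"])
print(example["chinese"])
```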
Reference:
Ziemski, M., Junczys-Dowmunt, M., and Pouliquen, B., (2016), The United Nations Parallel
Corpus, Language Resources and Evaluation (LREC’16), Portorož, Slovenia, May 2016. |
yizhongw/self_instruct | yizhongw | 2023-03-07T10:07:36Z | 1,179 | 193 | [
"license:apache-2.0",
"size_categories:100K<n<1M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2212.10560",
"arxiv:2204.07705",
"region:us"
] | [] | 2023-03-02T14:29:46Z | null | ---
license: apache-2.0
dataset_info:
- config_name: self_instruct
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 20527462
num_examples: 82612
download_size: 24113858
dataset_size: 20527462
- config_name: human_eval
features:
- name: id
dtype: string
- name: motivation_app
dtype: string
- name: instruction
dtype: string
- name: instances
sequence:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 151244
num_examples: 252
download_size: 170193
dataset_size: 151244
- config_name: super_natural_instructions
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 40352923
num_examples: 50000
- name: test
num_bytes: 9713953
num_examples: 11810
download_size: 52975509
dataset_size: 50066876
- config_name: prompt_source
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 57368889
num_examples: 52657
download_size: 60126945
dataset_size: 57368889
- config_name: p3
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 57368889
num_examples: 52657
download_size: 60126945
dataset_size: 57368889
---
# Dataset Card for Self Instruct
## Table of Contents
- [Dataset Card for Self Instruct](#dataset-card-for-self-instruct)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [self\_instruct](#self_instruct)
- [super\_natural\_instructions](#super_natural_instructions)
- [p3](#p3)
- [human\_eval](#human_eval)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [self\_instruct](#self_instruct-1)
- [super\_natural\_instructions](#super_natural_instructions-1)
- [p3](#p3-1)
- [human\_eval](#human_eval-1)
- [Data Fields](#data-fields)
- [self\_instruct](#self_instruct-2)
- [super\_natural\_instructions](#super_natural_instructions-2)
- [p3](#p3-2)
- [human\_eval](#human_eval-2)
- [Data Splits](#data-splits)
- [self\_instruct](#self_instruct-3)
- [super\_natural\_instructions](#super_natural_instructions-3)
- [p3](#p3-3)
- [human\_eval](#human_eval-3)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/yizhongw/self-instruct
- **Paper:** https://arxiv.org/abs/2212.10560
- **Leaderboard:**
- **Point of Contact:** Yizhong Wang
### Dataset Summary
Self-Instruct is a framework that helps language models improve their ability to follow natural language instructions. It does this by using the model's own generations to create a large collection of instructional data. With Self-Instruct, it is possible to improve the instruction-following capabilities of language models without relying on extensive manual annotation.
A part of this framework, the Self-Instruct authors released a dataset that contains 52k instructions, paired with 82K instance inputs and outputs. This instruction data can be used to conduct instruction-tuning for language models and make the language model follow instruction better.
The authors also released a new set of 252 expert-written tasks and their instructions motivated by user-oriented applications (rather than well-studied NLP tasks). This data is used in the human evaluation section of [the Self Instruct paper](https://arxiv.org/abs/2212.10560).
To enable comparison on public datasets, Self-Instruct also contains 50k examples from the P3 and Super Natural Instructions datasets.
### Supported Tasks and Leaderboards
The datasets in Self-Instruct are designed for _instruction training_ of pretrained language models. The following subsets are provided as part of Self-Instruct.
#### self_instruct
82k prompts and model completions generated via OpenAI's `davinci` engine.
#### super_natural_instructions
50k expert written instructions and demonstrations sampled from the [Super Natural Instructions dataset](https://arxiv.org/abs/2204.07705)
#### p3
50k crowd-sourced instructions and demonstrations sampled from the [Public Pool of Prompts (P3) dataset](https://huggingface.co/datasets/bigscience/P3)
#### human_eval
252 expert-written tasks and their instructions motivated by user-oriented applications (rather than well-studied NLP tasks). This data is used in the human evaluation section of [the self-instruct paper](https://arxiv.org/abs/2212.10560). Please refer to [the human evaluation information](https://github.com/yizhongw/self-instruct/blob/main/human_eval/README.md) for more details.
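A minimal sketch of loading individual configurations by name (config names as above; on recent versions of `datasets`, script-based datasets may additionally require `trust_remote_code=True`):

```python
from datasets import load_dataset

# Prompt/completion pairs generated by the self-instruct pipeline
self_instruct = load_dataset("yizhongw/self_instruct", "self_instruct", split="train")
print(self_instruct[0]["prompt"])

# Expert-written user-oriented tasks with nested instances
human_eval = load_dataset("yizhongw/self_instruct", "human_eval", split="train")
print(human_eval[0]["instruction"], human_eval[0]["instances"]["input"])
```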
### Languages
The data in Self Instruct are in English (BCP-47 en).
## Dataset Structure
### Data Instances
#### self_instruct
An example of "train" looks as follows:
```json
{
"prompt": "Explain the origin of life on earth. Output:",
"completion": " Life on Earth is believed to have begun about 3.5 billion years ago, when the planet was still in its early stages of formation. The earliest evidence for life on Earth are fossils of microorganisms (bacteria and archaea) that lived approximately 3.5 billion years ago. These organisms were capable of converting carbon dioxide into organic compounds using sunlight as an energy source. This process is known as photosynthesis.",
}
```
#### super_natural_instructions
An example of "train" looks as follows:
```json
{
"prompt": "Write a correct answer for the question. You should reason about prototypical situations and provide the correct answer. Avoid answers that provide incomplete justification for the question.\n\nInput: name something people often buy on credit.\n\nOutput:",
"completion": " appliances",
}
```
#### p3
An example of "train" looks as follows:
```json
{
"prompt": "Determine if the following two sentences paraphrase each other or not.\nSent 1: By adding sufficient oxygen to compensate for the metabolic usage , rebreathing the carbon dioxide and removing the gas , most of the volume is conserved .\nSent 2: By adding sufficient oxygen to compensate for the metabolic consumption , removing the carbon dioxide and reinhaling the gas , most of the volume is conserved .\n",
"completion": "No",
}
```
#### human_eval
An example of "train" looks as follows:
```json
{
"id": "user_oriented_task_136",
"motivation_app": "Goodreads",
"instruction": "Choose the best books from the given genre.",
"instances": {
"input": ["Crime & Mystery"],
"output": [
"1- The Girl with the Dragon Tattoo\n2- And Then There Were None\n3- Angels & Demons\n4- Rebecca\n5- In Cold Blood\n6- The Godfather\n7- The Lovely Bones\n8- Gone Girl\n9- The Name of the Rose\n10- Shutter Island"
],
},
}
```
### Data Fields
The data fields for each configuration are as follows.
#### self_instruct
* `prompt`: The instruction provided to the model or human labeler.
* `completion`: A completion provided by the model or human labeler.
#### super_natural_instructions
* `prompt`: The instruction provided to the model or human labeler.
* `completion`: A completion provided by the model or human labeler.
#### p3
* `prompt`: The instruction provided to the model or human labeler.
* `completion`: A completion provided by the model or human labeler.
#### human_eval
* `id`: The ID associated with the labelling task
* `motivation_app`: The application associated with the task
* `instruction`: The instruction written by the human labeler.
* `instances.input`: The input that forms part of the complete instruction
* `instances.output`: The human written demonstration
### Data Splits
#### self_instruct
| | train |
|---------------|------:|
| self_instruct | 82612 |
#### super_natural_instructions
| | train | test |
|----------------------------|------:|------:|
| super_natural_instructions | 50000 | 11810 |
#### p3
| | train |
|----|------:|
| p3 | 52657 |
#### human_eval
| | train |
|------------|------:|
| human_eval | 252 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
The `self_instruct` data is generated by a language model (GPT-3) and inevitably contains some errors or biases. The authors analyzed the data quality of 200 random instructions in the paper and found that 46% of the data points may have problems. Users are encouraged to use this data with caution and to propose new methods to filter or improve the imperfections.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{selfinstruct,
title={Self-Instruct: Aligning Language Model with Self Generated Instructions},
author={Wang, Yizhong and Kordi, Yeganeh and Mishra, Swaroop and Liu, Alisa and Smith, Noah A. and Khashabi, Daniel and Hajishirzi, Hannaneh},
journal={arXiv preprint arXiv:2212.10560},
year={2022}
}
``` |
Shirali/ISSAI_KSC_335RS_v_1_1 | Shirali | 2023-03-07T03:18:44Z | 141 | 3 | [
"task_categories:automatic-speech-recognition",
"language:kk",
"size_categories:100K<n<1M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"automatic-speech-recognition"
] | 2023-02-25T06:43:34Z | 1 | ---
dataset_info:
features:
- name: uttID
dtype: string
- name: deviceID
dtype: int64
- name: text
dtype: string
- name: audio
dtype: audio
splits:
- name: dev
num_bytes: 391608860.227
num_examples: 3283
- name: test
num_bytes: 372725363.792
num_examples: 3334
- name: train
num_bytes: 19832618976.144
num_examples: 147236
download_size: 19079278086
dataset_size: 20596953200.163002
task_categories:
- automatic-speech-recognition
language:
- kk
---
# Dataset Card for "ISSAI_KSC_335RS_v_1_1"
## Kazakh Speech Corpus (KSC)

- Identifier: SLR102
- Summary: A crowdsourced open-source Kazakh speech corpus developed by ISSAI (330 hours)
- Category: Speech
- License: Attribution 4.0 International (CC BY 4.0)
- Download: ISSAI_KSC_335RS_v1.1_flac.tar.gz [19G] (speech, transcripts, and metadata)

### About this resource

A crowdsourced open-source speech corpus for the Kazakh language. The KSC contains around 332 hours of transcribed audio comprising over 153,000 utterances spoken by participants from different regions and age groups, as well as both genders. It was carefully inspected by native Kazakh speakers to ensure high quality. The dataset is primarily intended to be used for training automatic speech recognition systems.

To cite the dataset, please use the following BibTeX entry:
@inproceedings{khassanov-etal-2021-crowdsourced,
title = "A Crowdsourced Open-Source {K}azakh Speech Corpus and Initial Speech Recognition Baseline",
author={Yerbolat Khassanov and Saida Mussakhojayeva and Almas Mirzakhmetov and Alen Adiyev and Mukhamet Nurpeiissov and Huseyin Atakan Varol},
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-main.58",
doi = "10.18653/v1/2021.eacl-main.58",
pages = "697--706"
}
|
SirNeural/flan_v2 | SirNeural | 2023-02-24T19:05:00Z | 4,699 | 193 | [
"license:apache-2.0",
"size_categories:100M<n<1B",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"arxiv:2301.13688",
"region:us",
"flan",
"flan 2022",
"flan v2"
] | [] | 2023-02-13T23:02:33Z | null | ---
license: apache-2.0
tags:
- flan
- flan 2022
- flan v2
pretty_name: Flan v2
---
# Dataset Card for Flan V2
## Dataset Description
- **Homepage:** https://ai.googleblog.com/2023/02/the-flan-collection-advancing-open.html
- **Repository:** https://github.com/google-research/FLAN/tree/main/flan/v2
- **Paper:** https://arxiv.org/abs/2301.13688
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a processed version of the Flan V2 dataset.
I'm not affiliated with the creators; I'm just releasing the files in an easier-to-access format after processing.
The authors of the Flan Collection recommend experimenting with different mixing ratios of tasks to get optimal results downstream.
## Setup Instructions
Here are the steps I followed to get everything working:
### Build AESLC and WinoGrande datasets manually
The repos for these datasets were updated recently and checksums need to be recomputed in TFDS
- `tfds build --dataset aeslc --register_checksums`
- `tfds build --dataset winogrande --register_checksums`
### Fix dataset versions
I've opened a PR [here](https://github.com/google-research/FLAN/pull/20) to get these updated in the upstream FLAN repo; until that gets merged in, run these commands locally to fix any dataset version errors.
- `sed -i 's/glue\/cola:1.0.0/glue\/cola:2.0.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/gem\/common_gen:1.0.0/gem\/common_gen:1.1.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/gem\/dart:1.0.0/gem\/dart:1.1.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/gem\/e2e_nlg:1.0.0/gem\/e2e_nlg:1.1.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/gem\/web_nlg_en:1.0.0/gem\/web_nlg_en:1.1.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/gem\/common_gen:1.0.0/gem\/common_gen:1.1.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/paws_wiki:1.0.0/paws_wiki:1.1.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/glue\/mrpc:1.0.0/glue\/mrpc:2.0.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/glue\/qqp:1.0.0/glue\/qqp:2.0.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/glue\/sst2:1.0.0/glue\/sst2:2.0.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/glue\/mnli:1.0.0/glue\/mnli:2.0.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/glue\/qnli:1.0.0/glue\/qnli:2.0.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/glue\/wnli:1.0.0/glue\/wnli:2.0.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/glue\/stsb:1.0.0/glue\/stsb:2.0.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/hellaswag:0.0.1/hellaswag:1.1.0/g' flan/v2/task_configs_v1.py`
- `sed -i 's/xsum:1.0.0/huggingface:xsum/g' flan/v2/task_configs_v1.py`
### Download and install manual steps
Save these to `~/tensorflow_datasets/downloads/manual`.
- [CzEng (deduped ignoring sections)](https://ufal.mff.cuni.cz/czeng/czeng16pre)
- [Newsroom (extract)](https://lil.nlp.cornell.edu/newsroom/download/index.html)
- [Yandex 1M Corpus](https://translate.yandex.ru/corpus?lang=en)
- [Story Cloze (extract and rename to cloze_test_test__spring2016.csv and cloze_test_val__spring2016.csv)](https://cs.rochester.edu/nlp/)
### Finally, export tasks
```python
import tensorflow as tf
tf.config.set_visible_devices([], 'GPU')
from flan.v2 import constants
from flan.v2 import constants_t0
from flan.v2 import mixtures_utils
from flan.v2 import mixtures
from flan.v2 import tasks
import json
import t5
import seqio
import itertools
from multiprocessing import Pool
seqio.add_global_cache_dirs(constants.CACHE_DIRS)
seqio.set_global_cache_dirs(constants.CACHE_DIRS)
vocab = t5.data.get_default_vocabulary()
def prepare_task(split, shots, opt, task):
dataset = seqio.get_mixture_or_task(f'palmflan_{task}_{shots}_{opt}').get_dataset(
split=split,
num_epochs=1,
sequence_length={'inputs':4096,'targets':4096}
)
print("starting", task, shots, opt, split)
with open(f'./data/{task}_{shots}_{opt}_{split}.jsonl', 'w') as f:
for ex in dataset.as_numpy_iterator():
f.write(
json.dumps({
"inputs": vocab.decode(ex["inputs"]),
"targets": vocab.decode(ex["targets"]),
"task": task,
}))
f.write("\n")
print("done with", task, shots, opt, split)
# prepare_task("train", "zs", "noopt", "dialog") # use this to export a single task
tasks = itertools.product(["train"], ["zs", "fs"], ["opt", "noopt"], ["dialog", "t0", "niv2", "flan", "cot"])
with Pool(5) as p:
p.starmap(prepare_task, [(task[0], task[1], task[2], task[3]) for task in tasks])
```
## Dataset Structure
### Data Instances
Flan 2021 (flan), P3 (t0), Super-Natural Instructions (niv2), Chain-of-thought (cot), and Dialog (dialog)
### Data Fields
Instruction data comes in a few formats:
- Few Shot (fs)
- Zero Shot (zs)
- Options Provided in context (i.e. multiple choice pick one) (opt)
- No Options Provided (noopt)
Each combination of the above tasks and formats is saved as a JSONL file with the following schema: `{"inputs": ..., "targets": ..., "task": ...}`
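Once exported (or downloaded from this repo and decompressed), each JSONL file can be read back with the generic `json` builder; the file name below follows the `{task}_{shots}_{opt}_{split}.jsonl` pattern used by the export script above and is only an example:

```python
from datasets import load_dataset

# Example: zero-shot chain-of-thought data without options, as exported by the script above
ds = load_dataset("json", data_files="data/cot_zs_noopt_train.jsonl", split="train")

print(ds[0]["inputs"][:200])
print(ds[0]["targets"])
```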
### Data Splits
Everything is saved as a train split
Note: FLAN-fs-opt-train is too big to be uploaded even when gzipped, so it's split into 45 GB chunks. To combine and recover, run `cat flan_fs_opt_train_*.gz | gunzip -c > flan_fs_opt_train.jsonl`
|
lsb/pile | lsb | 2023-02-18T10:00:39Z | 39,072 | 1 | [
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-02-17T03:26:26Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: meta
struct:
- name: pile_set_name
dtype: string
splits:
- name: train
num_bytes: 1311748175503
num_examples: 210607728
- name: validation
num_bytes: 1348824258
num_examples: 214670
- name: test
num_bytes: 1317125199
num_examples: 214584
download_size: 539336008819
dataset_size: 1314414124960
---
# Dataset Card for "pile"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ai-forever/school_notebooks_RU | ai-forever | 2023-02-09T18:27:24Z | 167 | 16 | [
"task_categories:image-segmentation",
"task_categories:object-detection",
"source_datasets:original",
"language:ru",
"license:mit",
"region:us",
"optical-character-recognition",
"text-detection",
"ocr"
] | [
"image-segmentation",
"object-detection"
] | 2022-09-08T10:06:32Z | 1 | ---
language:
- ru
license:
- mit
source_datasets:
- original
task_categories:
- image-segmentation
- object-detection
task_ids: []
tags:
- optical-character-recognition
- text-detection
- ocr
---
# School Notebooks Dataset
The images of school notebooks with handwritten notes in Russian.
The dataset annotation contains end-to-end markup for training detection and OCR models, as well as an end-to-end model for reading text from pages.
## Annotation format
The annotation is in COCO format. The `annotation.json` should have the following dictionaries:
- `annotation["categories"]` - a list of dicts with a categories info (categotiy names and indexes).
- `annotation["images"]` - a list of dictionaries with a description of images, each dictionary must contain fields:
- `file_name` - name of the image file.
- `id` for image id.
- `annotation["annotations"]` - a list of dictioraties with a murkup information. Each dictionary stores a description for one polygon from the dataset, and must contain the following fields:
- `image_id` - the index of the image on which the polygon is located.
- `category_id` - the polygon’s category index.
- `attributes` - dict with some additional annotation information. In the `translation` subdict you can find text translation for the line.
- `segmentation` - the coordinates of the polygon, a list of numbers - which are coordinate pairs x and y. |
tj-solergibert/Europarl-ST | tj-solergibert | 2023-02-09T10:22:06Z | 114 | 4 | [
"task_categories:translation",
"task_categories:text-to-speech",
"language:es",
"language:de",
"language:en",
"language:fr",
"language:nl",
"language:pl",
"language:pt",
"language:ro",
"language:it",
"license:cc-by-nc-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"translation",
"text-to-speech"
] | 2023-02-08T22:47:18Z | 1 | ---
dataset_info:
features:
- name: original_speech
dtype: string
- name: original_language
dtype: string
- name: audio_path
dtype: string
- name: segment_start
dtype: float32
- name: segment_end
dtype: float32
- name: transcriptions
struct:
- name: de
dtype: string
- name: en
dtype: string
- name: es
dtype: string
- name: fr
dtype: string
- name: it
dtype: string
- name: nl
dtype: string
- name: pl
dtype: string
- name: pt
dtype: string
- name: ro
dtype: string
splits:
- name: train
num_bytes: 147857910
num_examples: 116138
- name: valid
num_bytes: 21318985
num_examples: 17538
- name: test
num_bytes: 22580968
num_examples: 18901
download_size: 109205144
dataset_size: 191757863
task_categories:
- translation
- text-to-speech
language:
- es
- de
- en
- fr
- nl
- pl
- pt
- ro
- it
size_categories:
- 100K<n<1M
license: cc-by-nc-4.0
---
# Dataset Card for "Europarl-ST"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.mllp.upv.es/europarl-st/
- **Paper:** https://ieeexplore.ieee.org/document/9054626
- **Point of Contact:** https://www.mllp.upv.es/
### Dataset Summary
Europarl-ST is a Multilingual Speech Translation Corpus that contains paired audio-text samples for Speech Translation, constructed using the debates carried out in the European Parliament between 2008 and 2012.
### Languages
Spanish, German, English, French, Dutch, Polish, Portuguese, Romanian, Italian
## Dataset Structure
### Data Fields
- **original_speech:** The original speech that is heard in the recording.
- **original_language:** The language of the audio
- **audio_path:** Path to the audio file
- **segment_start:** Second in which the speech begins
- **segment_end:** Second in which the speech ends
- **transcriptions:** Dictionary containing transcriptions into different languages
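Because the audio is referenced by `audio_path` rather than embedded, a typical pattern is to cut the annotated segment out of the source recording using `segment_start` and `segment_end`. A minimal sketch, assuming the referenced audio files are available locally and readable with the `soundfile` library:

```python
import soundfile as sf
from datasets import load_dataset

ds = load_dataset("tj-solergibert/Europarl-ST", split="valid")
sample = ds[0]

# Read only the annotated segment from the source recording
info = sf.info(sample["audio_path"])
start = int(sample["segment_start"] * info.samplerate)
stop = int(sample["segment_end"] * info.samplerate)
segment, sr = sf.read(sample["audio_path"], start=start, stop=stop)

print(sample["original_language"], sample["transcriptions"]["en"], segment.shape, sr)
```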
### Data Splits
- **train split:** 116138 samples
- **valid split:** 17538 samples
- **test split:** 18901 samples
Train set (hours):
| src/tgt | en | fr | de | it | es | pt | pl | ro | nl |
|---------|----|----|----|----|----|----|----|----|----|
| en | - | 81 | 83 | 80 | 81 | 81 | 79 | 72 | 80 |
| fr | 32 | - | 21 | 20 | 21 | 22 | 20 | 18 | 22 |
| de | 30 | 18 | - | 17 | 18 | 18 | 17 | 17 | 18 |
| it | 37 | 21 | 21 | - | 21 | 21 | 21 | 19 | 20 |
| es | 22 | 14 | 14 | 14 | - | 14 | 13 | 12 | 13 |
| pt | 15 | 10 | 10 | 10 | 10 | - | 9 | 9 | 9 |
| pl | 28 | 18 | 18 | 17 | 18 | 18 | - | 16 | 18 |
| ro | 24 | 12 | 12 | 12 | 12 | 12 | 12 | - | 12 |
| nl | 7 | 5 | 5 | 4 | 5 | 4 | 4 | 4 | - |
Valid/Test sets are all between 3 and 6 hours.
## Additional Information
### Licensing Information
* The work carried out for constructing the Europarl-ST corpus is released under a Creative Commons Attribution-NonCommercial 4.0 International license (CC BY-NC 4.0)
* All rights of the data belong to the European Union and respective copyright holders.
### Citation Information
If you use the corpus in your research please cite the following reference:
@INPROCEEDINGS{jairsan2020a,
author={J. {Iranzo-Sánchez} and J. A. {Silvestre-Cerdà} and J. {Jorge} and N. {Roselló} and A. {Giménez} and A. {Sanchis} and J. {Civera} and A. {Juan}},
booktitle={ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
title={Europarl-ST: A Multilingual Corpus for Speech Translation of Parliamentary Debates},
year={2020},
pages={8229-8233},} |
range3/cc100-ja | range3 | 2023-02-04T05:43:32Z | 296 | 20 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"language:ja",
"license:unknown",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation",
"fill-mask"
] | 2023-02-04T05:10:34Z | 1 | ---
license: unknown
task_categories:
- text-generation
- fill-mask
language:
- ja
---
# range3/cc100-ja
This dataset consists of parquet files from the cc100 dataset with only the Japanese language extracted and sharded. |
lukaemon/bbh | lukaemon | 2023-02-02T01:14:46Z | 23,072 | 61 | [
"size_categories:1K<n<10K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2210.09261",
"region:us"
] | [] | 2023-02-01T07:46:51Z | null | ---
dataset_info:
- config_name: boolean_expressions
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 11790
num_examples: 250
download_size: 17172
dataset_size: 11790
- config_name: causal_judgement
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 198021
num_examples: 187
download_size: 202943
dataset_size: 198021
- config_name: date_understanding
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 54666
num_examples: 250
download_size: 61760
dataset_size: 54666
- config_name: disambiguation_qa
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 78620
num_examples: 250
download_size: 85255
dataset_size: 78620
- config_name: dyck_languages
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 38432
num_examples: 250
download_size: 43814
dataset_size: 38432
- config_name: formal_fallacies
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 138224
num_examples: 250
download_size: 145562
dataset_size: 138224
- config_name: geometric_shapes
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 68560
num_examples: 250
download_size: 77242
dataset_size: 68560
- config_name: hyperbaton
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 38574
num_examples: 250
download_size: 44706
dataset_size: 38574
- config_name: logical_deduction_five_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 148595
num_examples: 250
download_size: 155477
dataset_size: 148595
- config_name: logical_deduction_seven_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 191022
num_examples: 250
download_size: 198404
dataset_size: 191022
- config_name: logical_deduction_three_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 105831
num_examples: 250
download_size: 112213
dataset_size: 105831
- config_name: movie_recommendation
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 50985
num_examples: 250
download_size: 57684
dataset_size: 50985
- config_name: multistep_arithmetic_two
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 12943
num_examples: 250
download_size: 18325
dataset_size: 12943
- config_name: navigate
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 49031
num_examples: 250
download_size: 55163
dataset_size: 49031
- config_name: object_counting
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 30508
num_examples: 250
download_size: 35890
dataset_size: 30508
- config_name: penguins_in_a_table
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 70062
num_examples: 146
download_size: 74516
dataset_size: 70062
- config_name: reasoning_about_colored_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 89579
num_examples: 250
download_size: 98694
dataset_size: 89579
- config_name: ruin_names
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 46537
num_examples: 250
download_size: 53178
dataset_size: 46537
- config_name: salient_translation_error_detection
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 277110
num_examples: 250
download_size: 286443
dataset_size: 277110
- config_name: snarks
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 38223
num_examples: 178
download_size: 42646
dataset_size: 38223
- config_name: sports_understanding
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 22723
num_examples: 250
download_size: 28617
dataset_size: 22723
- config_name: temporal_sequences
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 139546
num_examples: 250
download_size: 148176
dataset_size: 139546
- config_name: tracking_shuffled_objects_five_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 162590
num_examples: 250
download_size: 169722
dataset_size: 162590
- config_name: tracking_shuffled_objects_seven_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 207274
num_examples: 250
download_size: 214906
dataset_size: 207274
- config_name: tracking_shuffled_objects_three_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 122104
num_examples: 250
download_size: 128736
dataset_size: 122104
- config_name: web_of_lies
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 47582
num_examples: 250
download_size: 52964
dataset_size: 47582
- config_name: word_sorting
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 60918
num_examples: 250
download_size: 66300
dataset_size: 60918
---
# BIG-bench Hard dataset
homepage: https://github.com/suzgunmirac/BIG-Bench-Hard
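A minimal sketch of loading one of the task configs listed in the metadata above (each config exposes `input`/`target` fields in a single `test` split; on recent versions of `datasets`, script-based datasets may additionally require `trust_remote_code=True`):

```python
from datasets import load_dataset

bbh_dates = load_dataset("lukaemon/bbh", "date_understanding", split="test")

print(bbh_dates[0]["input"])
print(bbh_dates[0]["target"])
```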
```
@article{suzgun2022challenging,
title={Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them},
author={Suzgun, Mirac and Scales, Nathan and Sch{\"a}rli, Nathanael and Gehrmann, Sebastian and Tay, Yi and Chung, Hyung Won and Chowdhery, Aakanksha and Le, Quoc V and Chi, Ed H and Zhou, Denny and and Wei, Jason},
journal={arXiv preprint arXiv:2210.09261},
year={2022}
}
``` |
GBaker/MedQA-USMLE-4-options | GBaker | 2023-01-24T19:18:09Z | 2,121 | 57 | [
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-01-24T19:08:56Z | 2 | ---
license: cc-by-4.0
language:
- en
---
Original dataset introduced by Jin et al. in [What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams](https://paperswithcode.com/paper/what-disease-does-this-patient-have-a-large)
#### Citation information
@article{jin2020disease,
title={What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams},
author={Jin, Di and Pan, Eileen and Oufattole, Nassim and Weng, Wei-Hung and Fang, Hanyi and Szolovits, Peter},
journal={arXiv preprint arXiv:2009.13081},
year={2020}
}
|
pile-of-law/pile-of-law | pile-of-law | 2023-01-08T03:10:35Z | 2,776 | 233 | [
"task_categories:fill-mask",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:10M<n<100M",
"arxiv:2207.00220",
"region:us"
] | [
"fill-mask"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
pretty_name: pile-of-law
size_categories:
- 10M<n<100M
source_datasets: []
task_categories:
- fill-mask
task_ids:
- masked-language-modeling
viewer: false
---
# Dataset Card for Pile of Law
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/pile-of-law/pile-of-law
- **Repository:** https://huggingface.co/datasets/pile-of-law/pile-of-law
- **Paper:** https://arxiv.org/abs/2207.00220
### Dataset Summary
We curate a large corpus of legal and administrative data. The utility of this data is twofold: (1) to aggregate legal and administrative data sources that demonstrate different norms and legal standards for data filtering; (2) to collect a dataset that can be used in the future for pretraining legal-domain language models, a key direction in access-to-justice initiatives.
### Supported Tasks and Leaderboards
See paper for details.
### Languages
Mainly English, but some other languages may appear in some portions of the data.
## Dataset Structure
### Data Instances
**courtListener_docket_entry_documents** : Docket entries in U.S. federal courts, including filed briefs from CourtListener RECAP archive.
**courtListener_opinions** : U.S. court opinions from CourtListener (synchronized as of 12/31/2022).
**atticus_contracts**: Unannotated contracts from the Atticus Project.
**federal_register**: The U.S. federal register where agencies file draft rulemaking.
**bva_opinions**: Bureau of Veterans Appeals opinions.
**us_bills**: Draft Bills from the United States Congress.
**cc_casebooks**: Educational Casebooks released under open CC licenses.
**tos**: Unannotated Terms of Service contracts.
**euro_parl**: European parliamentary debates.
**nlrb_decisions**: Decisions from the U.S. National Labor Relations Board.
**scotus_oral_arguments**: U.S. Supreme Court Oral Arguments
**cfr**: U.S. Code of Federal Regulations
**state_codes**: U.S. State Codes
**scotus_filings**: Briefs and filings with the U.S. Supreme Court.
**exam_outlines**: Exam outlines available openly on the web.
**edgar**: Contracts filed with the SEC and made available on the SEC's Edgar tool.
**cfpb_creditcard_contracts**: Credit Card Contracts compiled by the U.S. Consumer Finance Protection Bureau.
**constitutions** : The World's constitutions.
**congressional_hearings** : U.S. Congressional hearing transcripts and statements.
**oig**: U.S. Office of Inspector general reports.
**olc_memos**: U.S. Office of Legal Counsel memos.
**uscode**: The United States Code (laws).
**founding_docs**: Letters from U.S. founders.
**ftc_advisory_opinions**: Advisory opinions by the Federal Trade Commission.
**echr** : European Court of Human Rights opinions.
**eurlex**: European Laws.
**tax_rulings**: Rulings from U.S. Tax court.
**un_debates**: U.N. General Debates
**fre**: U.S. Federal Rules of Evidence
**frcp** : U.S. Federal Rules of Civil Procedure
**canadian_decisions**: Canadian Court Opinions from ON and BC.
**eoir**: U.S. Executive Office for Immigration Review Immigration and Nationality Precedential Decisions
**dol_ecab**: Department of Labor Employees' Compensation Appeals Board decisions after 2006
**r_legaladvice** : Filtered data from the r/legaladvice and r/legaladviceofftopic subreddits, in the following format:
Title: [Post Title]
Question: [Post Content]
Topic: [Post Flair]
Answer \#[N]: [Top Answers]...
**acus_reports** : Reports from the Administrative Conference of the United States from 2010-2022.
**ed_policy_guidance** : Policy guidance documents from the U.S. Department of Education (2001-2022).
**uspto_office_actions** : Office Actions from the U.S. Patent and Trademark Office from 2019-2022.
**icj-pcij** : International Court of Justice and Permanent Court of International Justice opinions.
**hhs_alj_opinions** : Opinions from the U.S. Department of Health and Human Services Administrative Law Judges from 1985-2019.
**sec_administrative_proceedings**: Significant pleadings, orders and decisions for administrative proceedings from the U.S. Securities and Exchange Commission from 2005-2022.
**fmshrc_bluebooks**: Bluebooks from the U.S. Federal Mine Safety and Health Review Commission from 1979 (March) - 2022 (August).
**resource_contracts**: Resource Contracts collected by ResourceContracts.org
**medicaid_policy_guidance**: Policy guidance documents from the U.S. Department of Health and Human Services (1994-2022).
**irs_legal_advice_memos**: Legal Advice Memos and Chief Counsel Notices from the U.S. Internal Revenue Service.
**doj_guidance**: Guidance documents from the U.S. Department of Justice (2020-2022).
**1/23 update**: Data updated in 2023 included: syncing courtListener opinions, adding ACUS reports, USPTO office actions, Ed Policy Guidance, HHS ALJ opinions, SEC administrative proceedings, FMSHRC Bluebooks, Resource Contracts, and ICJ/PCIJ legal opinions. We also fixed OLC opinions which had some formatting inconsistencies and merged exam outlines into one file, adding some additional exam outlines.
On-disk sizes might vary due to caching and compression, but should be approximately as follows as of 1/7/2023.
```bash
% xz --list data/*.xz
Strms Blocks Compressed Uncompressed Ratio Check Filename
183 181 9,631.2 KiB 35.0 MiB 0.268 CRC64 data/train.acus_reports.jsonl.xz
1 1 1,024.1 MiB 6,804.7 MiB 0.150 CRC64 data/train.atticus_contracts.0.jsonl.xz
1 1 1,024.1 MiB 6,781.1 MiB 0.151 CRC64 data/train.atticus_contracts.1.jsonl.xz
1 1 1,024.1 MiB 6,790.1 MiB 0.151 CRC64 data/train.atticus_contracts.2.jsonl.xz
1 1 1,024.1 MiB 6,759.2 MiB 0.152 CRC64 data/train.atticus_contracts.3.jsonl.xz
1 1 139.9 MiB 925.0 MiB 0.151 CRC64 data/train.atticus_contracts.4.jsonl.xz
1 1 1,564.6 MiB 12.5 GiB 0.123 CRC64 data/train.bva.jsonl.xz
1 1 29.8 MiB 154.3 MiB 0.193 CRC64 data/train.canadian_decisions.jsonl.xz
1 1 18.5 MiB 82.6 MiB 0.224 CRC64 data/train.cc_casebooks.jsonl.xz
1 1 3,427.3 KiB 67.2 MiB 0.050 CRC64 data/train.cfpb_cc.jsonl.xz
1 1 72.7 MiB 582.6 MiB 0.125 CRC64 data/train.cfr.jsonl.xz
1 1 1,056.1 MiB 4,941.9 MiB 0.214 CRC64 data/train.congressional_hearings.jsonl.xz
1 1 3,272.4 KiB 21.3 MiB 0.150 CRC64 data/train.constitutions.jsonl.xz
1 1 1,024.1 MiB 13.0 GiB 0.077 CRC64 data/train.courtlistenerdocketentries.0.jsonl.xz
1 1 1,024.3 MiB 13.3 GiB 0.075 CRC64 data/train.courtlistenerdocketentries.1.jsonl.xz
1 1 1,024.1 MiB 12.4 GiB 0.080 CRC64 data/train.courtlistenerdocketentries.2.jsonl.xz
1 1 635.2 MiB 8,671.6 MiB 0.073 CRC64 data/train.courtlistenerdocketentries.3.jsonl.xz
1 1 953.7 MiB 4,575.7 MiB 0.208 CRC64 data/train.courtlisteneropinions.0.jsonl.xz
1 1 953.7 MiB 4,356.2 MiB 0.219 CRC64 data/train.courtlisteneropinions.1.jsonl.xz
1 1 953.7 MiB 4,315.6 MiB 0.221 CRC64 data/train.courtlisteneropinions.10.jsonl.xz
1 1 953.7 MiB 4,650.3 MiB 0.205 CRC64 data/train.courtlisteneropinions.11.jsonl.xz
1 1 953.7 MiB 4,836.3 MiB 0.197 CRC64 data/train.courtlisteneropinions.12.jsonl.xz
1 1 953.7 MiB 4,644.9 MiB 0.205 CRC64 data/train.courtlisteneropinions.13.jsonl.xz
1 1 953.7 MiB 4,657.5 MiB 0.205 CRC64 data/train.courtlisteneropinions.14.jsonl.xz
1 1 539.2 MiB 2,621.8 MiB 0.206 CRC64 data/train.courtlisteneropinions.15.jsonl.xz
1 1 953.7 MiB 4,335.3 MiB 0.220 CRC64 data/train.courtlisteneropinions.2.jsonl.xz
1 1 953.7 MiB 4,352.0 MiB 0.219 CRC64 data/train.courtlisteneropinions.3.jsonl.xz
1 1 953.7 MiB 4,575.9 MiB 0.208 CRC64 data/train.courtlisteneropinions.4.jsonl.xz
1 1 953.7 MiB 4,382.6 MiB 0.218 CRC64 data/train.courtlisteneropinions.5.jsonl.xz
1 1 953.7 MiB 4,352.3 MiB 0.219 CRC64 data/train.courtlisteneropinions.6.jsonl.xz
1 1 953.7 MiB 4,462.4 MiB 0.214 CRC64 data/train.courtlisteneropinions.7.jsonl.xz
1 1 953.7 MiB 4,604.0 MiB 0.207 CRC64 data/train.courtlisteneropinions.8.jsonl.xz
1 1 953.7 MiB 4,612.0 MiB 0.207 CRC64 data/train.courtlisteneropinions.9.jsonl.xz
335 335 6,047.4 KiB 24.1 MiB 0.245 CRC64 data/train.doj_guidance.jsonl.xz
1 1 41.1 MiB 305.6 MiB 0.135 CRC64 data/train.dol_ecab.jsonl.xz
1 1 19.1 MiB 100.5 MiB 0.190 CRC64 data/train.echr.jsonl.xz
508 507 1,502.0 KiB 4,716.7 KiB 0.318 CRC64 data/train.ed_policy_guidance.jsonl.xz
1 1 1,372.0 MiB 9,032.6 MiB 0.152 CRC64 data/train.edgar.jsonl.xz
1 1 3,896.6 KiB 18.6 MiB 0.205 CRC64 data/train.eoir.jsonl.xz
1 1 140.3 MiB 1,154.7 MiB 0.121 CRC64 data/train.eurlex.jsonl.xz
1 1 51.4 MiB 239.4 MiB 0.215 CRC64 data/train.euro_parl.jsonl.xz
1 1 355.3 KiB 1,512.5 KiB 0.235 CRC64 data/train.examoutlines.jsonl.xz
1 1 20.7 MiB 131.7 MiB 0.157 CRC64 data/train.federal_register.jsonl.xz
396 396 43.9 MiB 175.7 MiB 0.250 CRC64 data/train.fmshrc.jsonl.xz
1 1 73.4 MiB 341.7 MiB 0.215 CRC64 data/train.founding_docs.jsonl.xz
1 1 324.2 KiB 1,459.4 KiB 0.222 CRC64 data/train.frcp.jsonl.xz
1 1 116.1 KiB 484.9 KiB 0.239 CRC64 data/train.fre.jsonl.xz
1 1 297.3 KiB 1,245.0 KiB 0.239 CRC64 data/train.ftc_advisory_opinions.jsonl.xz
2,084 2,083 13.4 MiB 42.2 MiB 0.318 CRC64 data/train.hhs_alj.jsonl.xz
1 1 29.5 MiB 157.4 MiB 0.188 CRC64 data/train.ijc.jsonl.xz
442 442 7,904.4 KiB 35.8 MiB 0.216 CRC64 data/train.irs_legal_advice_memos.jsonl.xz
658 658 3,403.1 KiB 10.6 MiB 0.314 CRC64 data/train.medicaid_policy_guidance.jsonl.xz
1 1 170.7 MiB 788.9 MiB 0.216 CRC64 data/train.nlrb_decisions.jsonl.xz
1 1 218.4 MiB 1,580.3 MiB 0.138 CRC64 data/train.oig.jsonl.xz
1 1 5,857.4 KiB 31.5 MiB 0.182 CRC64 data/train.olc_memos.jsonl.xz
1 1 58.6 MiB 234.5 MiB 0.250 CRC64 data/train.r_legaldvice.jsonl.xz
1,639 1,639 43.7 MiB 188.1 MiB 0.232 CRC64 data/train.resource_contracts.jsonl.xz
1 1 242.6 MiB 1,241.6 MiB 0.195 CRC64 data/train.scotus_docket_entries.jsonl.xz
1 1 68.5 MiB 323.2 MiB 0.212 CRC64 data/train.scotus_oral.jsonl.xz
10,805 10,805 40.7 MiB 118.4 MiB 0.344 CRC64 data/train.sec.jsonl.xz
1 1 705.0 MiB 5,019.9 MiB 0.140 CRC64 data/train.state_code.jsonl.xz
1 1 75.2 MiB 540.8 MiB 0.139 CRC64 data/train.taxrulings.jsonl.xz
1 1 273.6 KiB 1,318.5 KiB 0.207 CRC64 data/train.tos.jsonl.xz
1 1 22.6 MiB 108.1 MiB 0.209 CRC64 data/train.undebates.jsonl.xz
1 1 167.6 MiB 1,119.6 MiB 0.150 CRC64 data/train.us_bills.jsonl.xz
1 1 25.3 MiB 196.1 MiB 0.129 CRC64 data/train.uscode.jsonl.xz
1 1 1,713.2 MiB 33.7 GiB 0.050 CRC64 data/train.uspto_oab.jsonl.xz
54 54 2,960.9 KiB 11.0 MiB 0.264 CRC64 data/validation.acus_reports.jsonl.xz
1 1 1,024.1 MiB 6,797.1 MiB 0.151 CRC64 data/validation.atticus_contracts.0.jsonl.xz
1 1 374.6 MiB 2,471.7 MiB 0.152 CRC64 data/validation.atticus_contracts.1.jsonl.xz
1 1 523.0 MiB 4,258.9 MiB 0.123 CRC64 data/validation.bva.jsonl.xz
1 1 9.8 MiB 50.5 MiB 0.195 CRC64 data/validation.canadian_decisions.jsonl.xz
1 1 4,281.5 KiB 19.1 MiB 0.219 CRC64 data/validation.cc_casebooks.jsonl.xz
1 1 1,532.6 KiB 19.6 MiB 0.077 CRC64 data/validation.cfpb_cc.jsonl.xz
1 1 23.3 MiB 190.4 MiB 0.122 CRC64 data/validation.cfr.jsonl.xz
1 1 347.4 MiB 1,620.7 MiB 0.214 CRC64 data/validation.congressional_hearings.jsonl.xz
1 1 1,102.4 KiB 6,733.0 KiB 0.164 CRC64 data/validation.constitutions.jsonl.xz
1 1 1,024.1 MiB 10.7 GiB 0.094 CRC64 data/validation.courtlistenerdocketentries.0.jsonl.xz
1 1 473.7 MiB 5,225.2 MiB 0.091 CRC64 data/validation.courtlistenerdocketentries.1.jsonl.xz
1 1 953.7 MiB 4,391.3 MiB 0.217 CRC64 data/validation.courtlisteneropinions.0.jsonl.xz
1 1 953.7 MiB 4,406.9 MiB 0.216 CRC64 data/validation.courtlisteneropinions.1.jsonl.xz
1 1 953.8 MiB 4,436.7 MiB 0.215 CRC64 data/validation.courtlisteneropinions.2.jsonl.xz
1 1 953.7 MiB 4,476.9 MiB 0.213 CRC64 data/validation.courtlisteneropinions.3.jsonl.xz
1 1 953.7 MiB 4,618.0 MiB 0.207 CRC64 data/validation.courtlisteneropinions.4.jsonl.xz
1 1 238.5 MiB 1,147.4 MiB 0.208 CRC64 data/validation.courtlisteneropinions.5.jsonl.xz
100 100 1,778.7 KiB 7,371.5 KiB 0.241 CRC64 data/validation.doj_guidance.jsonl.xz
1 1 13.8 MiB 101.5 MiB 0.136 CRC64 data/validation.dol_ecab.jsonl.xz
1 1 4,132.1 KiB 20.8 MiB 0.194 CRC64 data/validation.echr.jsonl.xz
174 173 490.5 KiB 1,564.9 KiB 0.313 CRC64 data/validation.ed_policy_guidance.jsonl.xz
1 1 453.6 MiB 2,978.9 MiB 0.152 CRC64 data/validation.edgar.jsonl.xz
1 1 1,340.0 KiB 6,294.8 KiB 0.213 CRC64 data/validation.eoir.jsonl.xz
1 1 49.1 MiB 393.7 MiB 0.125 CRC64 data/validation.eurlex.jsonl.xz
1 1 17.0 MiB 79.0 MiB 0.215 CRC64 data/validation.euro_parl.jsonl.xz
1 1 103.7 KiB 547.9 KiB 0.189 CRC64 data/validation.examoutlines.jsonl.xz
1 1 7,419.0 KiB 45.7 MiB 0.158 CRC64 data/validation.federal_register.jsonl.xz
120 120 13.5 MiB 53.9 MiB 0.250 CRC64 data/validation.fmshrc.jsonl.xz
1 1 25.3 MiB 113.2 MiB 0.224 CRC64 data/validation.founding_docs.jsonl.xz
1 1 63.5 KiB 248.8 KiB 0.255 CRC64 data/validation.frcp.jsonl.xz
1 1 58.4 KiB 226.7 KiB 0.257 CRC64 data/validation.fre.jsonl.xz
1 1 117.4 KiB 419.1 KiB 0.280 CRC64 data/validation.ftc_advisory_opinions.jsonl.xz
722 721 4,900.2 KiB 15.1 MiB 0.318 CRC64 data/validation.hhs_alj.jsonl.xz
1 1 10.0 MiB 52.3 MiB 0.191 CRC64 data/validation.ijc.jsonl.xz
161 161 3,791.0 KiB 17.7 MiB 0.209 CRC64 data/validation.irs_legal_advice_memos.jsonl.xz
214 214 1,101.1 KiB 3,411.1 KiB 0.323 CRC64 data/validation.medicaid_policy_guidance.jsonl.xz
1 1 55.8 MiB 257.8 MiB 0.217 CRC64 data/validation.nlrb_decisions.jsonl.xz
1 1 80.0 MiB 603.7 MiB 0.132 CRC64 data/validation.oig.jsonl.xz
1 1 1,826.2 KiB 9,874.6 KiB 0.185 CRC64 data/validation.olc_memos.jsonl.xz
1 1 19.7 MiB 78.7 MiB 0.251 CRC64 data/validation.r_legaldvice.jsonl.xz
584 584 15.3 MiB 63.5 MiB 0.241 CRC64 data/validation.resource_contracts.jsonl.xz
1 1 86.4 MiB 422.5 MiB 0.204 CRC64 data/validation.scotus_docket_entries.jsonl.xz
1 1 23.1 MiB 109.0 MiB 0.212 CRC64 data/validation.scotus_oral.jsonl.xz
3,559 3,559 13.0 MiB 37.7 MiB 0.344 CRC64 data/validation.sec.jsonl.xz
1 1 371.8 MiB 2,678.4 MiB 0.139 CRC64 data/validation.state_code.jsonl.xz
1 1 24.8 MiB 177.4 MiB 0.140 CRC64 data/validation.taxrulings.jsonl.xz
1 1 92.7 KiB 381.6 KiB 0.243 CRC64 data/validation.tos.jsonl.xz
1 1 7,705.6 KiB 35.5 MiB 0.212 CRC64 data/validation.undebates.jsonl.xz
1 1 53.8 MiB 356.3 MiB 0.151 CRC64 data/validation.us_bills.jsonl.xz
1 1 15.2 MiB 117.5 MiB 0.129 CRC64 data/validation.uscode.jsonl.xz
1 1 885.5 MiB 11.2 GiB 0.077 CRC64 data/validation.uspto_oab.jsonl.xz
-------------------------------------------------------------------------------
22,839 22,833 41.0 GiB 291.5 GiB 0.141 CRC64 119 files
```
### Data Fields
- text: the document text
- created_timestamp: If the original source provided a timestamp for when the document was created, we provide this as well. Note, these may be inaccurate. For example, CourtListener case opinions provide the timestamp of when the opinion was uploaded to CourtListener, not when it was published. We welcome pull requests to correct this field if such inaccuracies are discovered.
- downloaded_timestamp: When the document was scraped.
- url: the source url
### Data Splits
There is a train/validation split for each subset of the data. 75%/25%. Note, we do not use the validation set for any downstream tasks nor do we filter out any data from downstream tasks. Please filter as needed before training models or feel free to use a different dataset split.
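As a hedged illustration of the fields and splits above, the sketch below loads a single small subset with 🤗 Datasets. The config name `r_legaladvice` is an assumption based on the source names listed earlier and may need to be adjusted to the loader's actual configuration names.
```python
from datasets import load_dataset

# Assumed config name; see the source list above for other subsets.
ds = load_dataset("pile-of-law/pile-of-law", "r_legaladvice")

sample = ds["train"][0]
print(sample["text"][:200])            # document text
print(sample["created_timestamp"])     # may be inaccurate (see the note above)
print(sample["downloaded_timestamp"])  # when the document was scraped
print(sample["url"])                   # source url

# Roughly 75%/25% per subset, as described above.
print(ds["train"].num_rows, ds["validation"].num_rows)
```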
## Dataset Creation
### Curation Rationale
We curate a large corpus of legal and administrative data. The utility of this data is twofold: (1) to aggregate legal and administrative data sources that demonstrate different norms and legal standards for data filtering; (2) to collect a dataset that can be used in the future for pretraining legal-domain language models, a key direction in access-to-justice initiatives. As such, data sources are curated to inform: (1) legal analysis, knowledge, or understanding; (2) argument formation; (3) privacy filtering standards. Sources like codes and laws tend to inform (1). Transcripts and court filings tend to inform (2). Opinions tend to inform (1) and (3).
### Source Data
#### Initial Data Collection and Normalization
We do not normalize the data, but we provide dataset creation code and relevant urls in https://github.com/Breakend/PileOfLaw
#### Who are the source language producers?
Varied (see sources above).
### Personal and Sensitive Information
This dataset may contain personal and sensitive information. However, this has been previously filtered by the relevant government and federal agencies that weigh the harms of revealing this information against the benefits of transparency. If you encounter something particularly harmful, please file a takedown request with the upstream source and notify us in the communities tab. We will then remove the content. We cannot enable more restrictive licensing because upstream sources may restrict using a more restrictive license. However, we ask that all users of this data respect the upstream licenses and restrictions. Per the standards of CourtListener, we do not allow indexing of this data by search engines and we ask that others refrain from doing so as well. Please do not turn on anything that allows the data to be easily indexed.
## Considerations for Using the Data
### Social Impact of Dataset
We hope that this dataset will provide more mechanisms for doing data work. As we describe in the paper, the internal variation allows contextual privacy rules to be learned. If robust mechanisms for this are developed, they can be applied more broadly. This dataset can also potentially be used for legal language model pretraining. As discussed in "On the Opportunities and Risks of Foundation Models", legal language models can help improve access to justice in various ways. But they can also be used in potentially harmful ways. While such models are not ready for most production environments and are the subject of significant research, we ask that model creators using this data, particularly when creating generative models, consider the impacts of their model and make a good faith effort to weigh the benefits against the harms of their method. Our license and many of the sub-licenses also restrict commercial usage.
### Discussion of Biases
The data reflects the biases of governments and courts. As we discuss in our work, these can be significant, though more recent text will likely be less overtly toxic. Please see the above statement and embark on any model uses responsibly.
### Other Known Limitations
We mainly focus on U.S. and English-speaking legal sources, though we include some European and Canadian resources.
## Additional Information
### Licensing Information
CreativeCommons Attribution-NonCommercial-ShareAlike 4.0 International. But individual sources may have other licenses. See paper for details. Some upstream data sources request that indexing be disabled. As such please **do not re-host any data in a way that can be indexed by search engines.**
### No Representations
We do not make any representation that the legal information provided here is accurate. It is meant for research purposes only. For the authoritative and updated source of information please refer directly to the governing body which provides the latest laws, rules, and regulations relevant to you.
### DMCA Takedown Requests
Pile of Law follows the notice and takedown procedures in the Digital Millennium Copyright Act (DMCA), 17 U.S.C. Section 512.
If you believe content on Pile of Law violates your copyright, please immediately notify its operators by sending a message with the information described below. Please use the subject "Copyright" in your message. If Pile of Law's operators act in response to an infringement notice, they will make a good-faith attempt to contact the person who contributed the content using the most recent email address that person provided to Pile of Law.
Under the DMCA, you may be held liable for damages based on material misrepresentations in your infringement notice. You must also make a good-faith evaluation of whether the use of your content is a fair use, because fair uses are not infringing. See 17 U.S.C. Section 107 and Lenz v. Universal Music Corp., No. 13-16106 (9th Cir. Sep. 14, 2015). If you are not sure if the content you want to report infringes your copyright, you should first contact a lawyer.
The DMCA requires that all infringement notices must include all of the following:
+ A signature of the copyright owner or a person authorized to act on the copyright owner's behalf
+ An identification of the copyright claimed to have been infringed
+ A description of the nature and location of the material that you claim to infringe your copyright, in sufficient detail to allow Pile of Law to find and positively identify that material
+ Your name, address, telephone number, and email address
+ A statement that you believe in good faith that the use of the material that you claim to infringe your copyright is not authorized by law, or by the copyright owner or such owner's agent
+ A statement, under penalty of perjury, that all of the information contained in your infringement notice is accurate
+ A statement, under penalty of perjury, that you are either the copyright owner or a person authorized to act on their behalf.
Pile of Law will respond to all DMCA-compliant infringement notices, including, as required or appropriate, by removing the offending material or disabling all links to it.
All received infringement notices may be posted in full to the Lumen database (previously known as the Chilling Effects Clearinghouse).
All takedown requests with the above information should be posted to the Communities tab.
This removal notice has been modified from the [CourtListener DMCA takedown notice](https://www.courtlistener.com/terms/).
### Citation Information
For a citation to this work:
```
@misc{hendersonkrass2022pileoflaw,
url = {https://arxiv.org/abs/2207.00220},
author = {Henderson*, Peter and Krass*, Mark S. and Zheng, Lucia and Guha, Neel and Manning, Christopher D. and Jurafsky, Dan and Ho, Daniel E.},
title = {Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset},
publisher = {arXiv},
year = {2022}
}
```
Since this dataset also includes several other data sources with citations, please refer to our paper and cite the additional relevant work in addition to our own work. |
irds/clueweb09 | irds | 2023-01-05T02:54:31Z | 16 | 1 | [
"task_categories:text-retrieval",
"region:us"
] | [
"text-retrieval"
] | 2023-01-05T02:54:25Z | 1 | ---
pretty_name: '`clueweb09`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `clueweb09`
The `clueweb09` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=1,040,859,705
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/clueweb09', 'docs')
for record in docs:
record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
|
neulab/tldr | neulab | 2022-12-22T19:47:11Z | 59 | 12 | [
"task_categories:text2text-generation",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:code",
"license:mit",
"size_categories:100K<n<1M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2207.05987",
"region:us",
"code-generation",
"doc retrieval",
"retrieval augmented generation"
] | [
"text2text-generation"
] | 2022-12-22T17:58:43Z | 1 | ---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- mit
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
pretty_name: DocPrompting-CoNaLa
tags:
- code-generation
- doc retrieval
- retrieval augmented generation
---
## Dataset Description
- **Repository:** https://github.com/shuyanzhou/docprompting
- **Paper:** [DocPrompting: Generating Code by Retrieving the Docs](https://arxiv.org/pdf/2207.05987.pdf)
### Dataset Summary
This is the natural language to bash generation dataset we harvested from the English subset of [`tldr`](https://github.com/tldr-pages/tldr)
We split the dataset by bash commands. Every command in the dev and test set is held out from the training set.
### Supported Tasks and Leaderboards
This dataset is used to evaluate code generations.
### Languages
English - Bash
## Dataset Structure
```python
dataset = load_dataset("neulab/tldr")
DatasetDict({
train: Dataset({
features: ['question_id', 'nl', 'cmd', 'oracle_man', 'cmd_name', 'tldr_cmd_name', 'manual_exist', 'matching_info'],
num_rows: 6414
})
test: Dataset({
features: ['question_id', 'nl', 'cmd', 'oracle_man', 'cmd_name', 'tldr_cmd_name', 'manual_exist', 'matching_info'],
num_rows: 928
})
validation: Dataset({
features: ['question_id', 'nl', 'cmd', 'oracle_man', 'cmd_name', 'tldr_cmd_name', 'manual_exist', 'matching_info'],
num_rows: 1845
})
})
code_docs = load_dataset("neulab/docprompting-conala", "docs")
DatasetDict({
train: Dataset({
features: ['doc_id', 'doc_content'],
num_rows: 439064
})
})
```
### Data Fields
train/dev/test:
- nl: The natural language intent
- cmd: The reference code snippet
- question_id: the unique id of a question
- oracle_man: The `doc_id` of the functions used in the reference code snippet. The corresponding contents are in `doc` split
- cmd_name: the bash command of this code snippet
- tldr_cmd_name: the bash command used in the tldr GitHub repo. The `cmd_name` and `tldr_cmd_name` can differ due to naming differences
- manual_exist: whether the manual exists on https://manned.org
- matching_info: each code snippet has multiple tokens; this field gives the detailed reference-doc match for each token.
docs:
- doc_id: the id of a doc
- doc_content: the content of the doc
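As a rough, hedged sketch of how `oracle_man` connects an example to its documentation, the snippet below builds a `doc_id -> doc_content` lookup and resolves the oracle docs for one test example. The `"docs"` config name under `neulab/tldr` and the treatment of `oracle_man` as a list of doc ids are both assumptions; adjust them to wherever and however the bash manual docs are actually exposed.
```python
from datasets import load_dataset

tldr = load_dataset("neulab/tldr", split="test")
# Assumption: a "docs" config with doc_id / doc_content fields, as shown above.
docs = load_dataset("neulab/tldr", "docs", split="train")

doc_lookup = {d["doc_id"]: d["doc_content"] for d in docs}

example = tldr[0]
# Assumption: oracle_man is a list of doc ids.
oracle_docs = [doc_lookup[i] for i in example["oracle_man"] if i in doc_lookup]
print(example["nl"])    # natural language intent
print(example["cmd"])   # reference bash snippet
print(len(oracle_docs), "oracle docs resolved")
```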
## Dataset Creation
The dataset was curated from [`tldr`](https://github.com/tldr-pages/tldr).
The project aims to provide frequent usage of bash commands with natural language intents.
For more details, please check the repo.
### Citation Information
```
@article{zhou2022doccoder,
title={DocCoder: Generating Code by Retrieving and Reading Docs},
author={Zhou, Shuyan and Alon, Uri and Xu, Frank F and Jiang, Zhengbao and Neubig, Graham},
journal={arXiv preprint arXiv:2207.05987},
year={2022}
}
``` |
tarekeldeeb/ArabicCorpus2B | tarekeldeeb | 2022-12-14T11:17:34Z | 6 | 1 | [
"license:other",
"region:us"
] | [] | 2022-12-14T10:03:09Z | 1 | ---
license: other
---
```
BUILDING VOCABULARY
Processed 1754541204 tokens.
Counted 5329509 unique words.
Truncating vocabulary at min count 5.
Using vocabulary of size 1539115.
```
---
# Build the Arabic Corpus
#### Download Resources
The Arabic corpus {1.9B words} consists of the following resources:
- ShamelaLibrary348.7z [link](https://www.quran.tv/ketab/ShamelaLibrary348.7z) {1.15B}
- UN arabic corpus [mirror1](http://lotus.kuee.kyoto-u.ac.jp/~raj/rajwindroot/corpora_downloads/UN_CORPUS/UNv1.0.6way.ar.txt) [mirror2](http://corpus.leeds.ac.uk/bogdan/resources/UN-corpus/6way/UNv1.0.6way.ar.txt) {0.37B}
- AraCorpus.tar.gz [link](http://aracorpus.e3rab.com/argistestsrv.nmsu.edu/AraCorpus.tar.gz) {0.14B}
- Arabic Wikipedia Latest Articles Dump [link](https://dumps.wikimedia.org/arwiki/latest/arwiki-latest-pages-articles.xml.bz2) {0.11B}
- Tashkeela-arabic-diacritized-text-utf8-0.3.zip [link](https://netix.dl.sourceforge.net/project/tashkeela/) {0.07B}
- Arabic Tweets [link](https://github.com/bakrianoo/Datasets) {0.03B}
- watan-2004.7z [link](https://netix.dl.sourceforge.net/project/arabiccorpus/watan-2004corpus/watan-2004.7z) {0.01B}
#### Build Script: https://github.com/tarekeldeeb/GloVe-Arabic/tree/master/arabic_corpus
---
# Download the dataset
Mirror : https://archive.org/details/arabic_corpus
---
license: Waqf v2 (https://github.com/ojuba-org/waqf/tree/master/2.0) |
ziyou-li/cantonese_daily | ziyou-li | 2022-12-08T22:36:23Z | 417 | 1 | [
"license:cc-by-nc-nd-4.0",
"size_categories:1K<n<10K",
"format:audiofolder",
"modality:audio",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2022-12-08T21:12:33Z | 1 | ---
license: cc-by-nc-nd-4.0
---
|
MLCommons/ml_spoken_words | MLCommons | 2022-12-06T11:11:02Z | 1,211 | 28 | [
"task_categories:audio-classification",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"language:ar",
"language:as",
"language:br",
"language:ca",
"language:cnh",
"language:cs",
"language:cv",
"language:cy",
"language:de",
"language:dv",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fr",
"language:fy",
"language:ga",
"language:gn",
"language:ha",
"language:ia",
"language:id",
"language:it",
"language:ka",
"language:ky",
"language:lt",
"language:lv",
"language:mn",
"language:mt",
"language:nl",
"language:or",
"language:pl",
"language:pt",
"language:rm",
"language:ro",
"language:ru",
"language:rw",
"language:sah",
"language:sk",
"language:sl",
"language:sv",
"language:ta",
"language:tr",
"language:tt",
"language:uk",
"language:vi",
"language:zh",
"license:cc-by-4.0",
"size_categories:10M<n<100M",
"region:us",
"other-keyword-spotting"
] | [
"audio-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- machine-generated
language_creators:
- other
language:
- ar
- as
- br
- ca
- cnh
- cs
- cv
- cy
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fr
- fy
- ga
- gn
- ha
- ia
- id
- it
- ka
- ky
- lt
- lv
- mn
- mt
- nl
- or
- pl
- pt
- rm
- ro
- ru
- rw
- sah
- sk
- sl
- sv
- ta
- tr
- tt
- uk
- vi
- zh
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 10M<n<100M
source_datasets:
- extended|common_voice
task_categories:
- audio-classification
task_ids: []
pretty_name: Multilingual Spoken Words
language_bcp47:
- fy-NL
- ga-IE
- rm-sursilv
- rm-vallader
- sv-SE
- zh-CN
tags:
- other-keyword-spotting
---
# Dataset Card for Multilingual Spoken Words
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://mlcommons.org/en/multilingual-spoken-words/
- **Repository:** https://github.com/harvard-edge/multilingual_kws
- **Paper:** https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/fe131d7f5a6b38b23cc967316c13dae2-Paper-round2.pdf
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Multilingual Spoken Words Corpus is a large and growing audio dataset of spoken
words in 50 languages collectively spoken by over 5 billion people, for academic
research and commercial applications in keyword spotting and spoken term search,
licensed under CC-BY 4.0. The dataset contains more than 340,000 keywords,
totaling 23.4 million 1-second spoken examples (over 6,000 hours). The dataset
has many use cases, ranging from voice-enabled consumer devices to call center
automation. This dataset is generated by applying forced alignment on crowd-sourced sentence-level
audio to produce per-word timing estimates for extraction.
All alignments are included in the dataset.
Data is provided in two formats: `wav` (16KHz) and `opus` (48KHz). Default configurations look like
`"{lang}_{format}"`, so to load, for example, Tatar in wav format do:
```python
ds = load_dataset("MLCommons/ml_spoken_words", "tt_wav")
```
To download multiple languages in a single dataset pass list of languages to `languages` argument:
```python
ds = load_dataset("MLCommons/ml_spoken_words", languages=["ar", "tt", "br"])
```
To download a specific format pass it to the `format` argument (default format is `wav`):
```python
ds = load_dataset("MLCommons/ml_spoken_words", languages=["ar", "tt", "br"], format="opus")
```
Note that each time you provide different sets of languages,
examples are generated from scratch even if you already provided one or several of them before
because custom configurations are created each time (the data is **not** redownloaded though).
### Supported Tasks and Leaderboards
Keyword spotting, Spoken term search
### Languages
The dataset is multilingual. To specify several languages to download pass a list of them to the
`languages` argument:
```python
ds = load_dataset("MLCommons/ml_spoken_words", languages=["ar", "tt", "br"])
```
The dataset contains data for the following languages:
Low-resourced (<10 hours):
* Arabic (0.1G, 7.6h)
* Assamese (0.9M, 0.1h)
* Breton (69M, 5.6h)
* Chuvash (28M, 2.1h)
* Chinese (zh-CN) (42M, 3.1h)
* Dhivehi (0.7M, 0.04h)
* Frisian (0.1G, 9.6h)
* Georgian (20M, 1.4h)
* Guarani (0.7M, 1.3h)
* Greek (84M, 6.7h)
* Hakha Chin (26M, 0.1h)
* Hausa (90M, 1.0h)
* Interlingua (58M, 4.0h)
* Irish (38M, 3.2h)
* Latvian (51M, 4.2h)
* Lithuanian (21M, 0.46h)
* Maltese (88M, 7.3h)
* Oriya (0.7M, 0.1h)
* Romanian (59M, 4.5h)
* Sakha (42M, 3.3h)
* Slovenian (43M, 3.0h)
* Slovak (31M, 1.9h)
* Sursilvan (61M, 4.8h)
* Tamil (8.8M, 0.6h)
* Vallader (14M, 1.2h)
* Vietnamese (1.2M, 0.1h)
Medium-resourced (>10 & <100 hours):
* Czech (0.3G, 24h)
* Dutch (0.8G, 70h)
* Estonian (0.2G, 19h)
* Esperanto (1.3G, 77h)
* Indonesian (0.1G, 11h)
* Kyrgyz (0.1G, 12h)
* Mongolian (0.1G, 12h)
* Portuguese (0.7G, 58h)
* Swedish (0.1G, 12h)
* Tatar (4G, 30h)
* Turkish (1.3G, 29h)
* Ukrainian (0.2G, 18h)
High-resourced (>100 hours):
* Basque (1.7G, 118h)
* Catalan (8.7G, 615h)
* English (26G, 1957h)
* French (9.3G, 754h)
* German (14G, 1083h)
* Italian (2.2G, 155h)
* Kinyarwanda (6.1G, 422h)
* Persian (4.5G, 327h)
* Polish (1.8G, 130h)
* Russian (2.1G, 137h)
* Spanish (4.9G, 349h)
* Welsh (4.5G, 108h)
## Dataset Structure
### Data Instances
```python
{'file': 'абзар_common_voice_tt_17737010.opus',
'is_valid': True,
'language': 0,
'speaker_id': '687025afd5ce033048472754c8d2cb1cf8a617e469866bbdb3746e2bb2194202094a715906f91feb1c546893a5d835347f4869e7def2e360ace6616fb4340e38',
'gender': 0,
'keyword': 'абзар',
'audio': {'path': 'абзар_common_voice_tt_17737010.opus',
'array': array([2.03458695e-34, 2.03458695e-34, 2.03458695e-34, ...,
2.03458695e-34, 2.03458695e-34, 2.03458695e-34]),
'sampling_rate': 48000}}
```
### Data Fields
* file: string, relative audio path inside the archive
* is_valid: if a sample is valid
* language: language of an instance. Makes sense only when providing multiple languages to the
dataset loader (for example, `load_dataset("ml_spoken_words", languages=["ar", "tt"])`)
* speaker_id: unique id of a speaker. Can be "NA" if an instance is invalid
* gender: speaker gender. Can be one of `["MALE", "FEMALE", "OTHER", "NAN"]`
* keyword: word spoken in a current sample
* audio: a dictionary containing the relative path to the audio file,
the decoded audio array, and the sampling rate.
Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically
decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of
a large number of audio files might take a significant amount of time.
Thus, it is important to first query the sample index before the "audio" column,
i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`
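A minimal sketch of that access pattern, using the Tatar `wav` configuration shown earlier (the `train` split name is assumed from the splits described below):
```python
from datasets import load_dataset

ds = load_dataset("MLCommons/ml_spoken_words", "tt_wav", split="train")

# Index the row first, then read the "audio" column, so that only this
# one file is decoded and resampled.
sample = ds[0]
waveform = sample["audio"]["array"]
rate = sample["audio"]["sampling_rate"]
print(sample["keyword"], len(waveform), rate)
```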
### Data Splits
The data for each language is split into train / validation / test parts.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The data comes from the Common Voice dataset.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online.
You agree to not attempt to determine the identity of speakers.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is licensed under [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/) and can be used for academic
research and commercial applications in keyword spotting and spoken term search.
### Citation Information
```
@inproceedings{mazumder2021multilingual,
title={Multilingual Spoken Words Corpus},
author={Mazumder, Mark and Chitlangia, Sharad and Banbury, Colby and Kang, Yiping and Ciro, Juan Manuel and Achorn, Keith and Galvez, Daniel and Sabini, Mark and Mattson, Peter and Kanter, David and others},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021}
}
```
### Contributions
Thanks to [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.
|
Jzuluaga/uwb_atcc | Jzuluaga | 2022-12-05T11:15:20Z | 237 | 6 | [
"task_categories:automatic-speech-recognition",
"multilinguality:monolingual",
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2203.16822",
"arxiv:2211.04054",
"region:us",
"audio",
"automatic-speech-recognition",
"en-atc",
"en",
"noisy-speech-recognition",
"speech-recognition"
] | [
"automatic-speech-recognition"
] | 2022-11-28T07:12:02Z | 1 | ---
dataset_info:
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: segment_start_time
dtype: float32
- name: segment_end_time
dtype: float32
- name: duration
dtype: float32
splits:
- name: test
num_bytes: 140620332.25
num_examples: 2822
- name: train
num_bytes: 608597323.625
num_examples: 11291
download_size: 711464914
dataset_size: 749217655.875
tags:
- audio
- automatic-speech-recognition
- en-atc
- en
- noisy-speech-recognition
- speech-recognition
task_categories:
- automatic-speech-recognition
language:
- en
multilinguality:
- monolingual
license:
- cc-by-nc-sa-4.0
---
# Dataset Card for UWB-ATCC corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages and Other Details](#languages-and-other-details)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [UWB-ATCC corpus homepage](https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0001-CCA1-0)
- **Repository:** [GitHub repository (used in research)](https://github.com/idiap/w2v2-air-traffic)
- **Paper:** [Air traffic control communication (ATCC) speech corpora and their use for ASR and TTS development](https://link.springer.com/article/10.1007/s10579-019-09449-5)
- **Paper of this research:** [How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications](https://arxiv.org/abs/2203.16822)
### Dataset Summary
The UWB-ATCC Corpus is provided by the University of West Bohemia, Department of Cybernetics. The corpus contains recordings of communication between air traffic controllers and pilots. The speech is manually transcribed and labeled with information about the speaker (pilot/controller, not the full identity of the person). The corpus is currently small (20 hours) but we plan to search for additional data next year. The audio data format is: 8kHz, 16bit PCM, mono.
Importantly, the speaker role can be obtained from the `id` field (see the sketch below). For instance:
- `_PI`: segment with only pilot speech
- `_AT`: segment with only ATCO speech
- `PIAT`: segment with both, ATCO and pilot speech
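A minimal sketch of recovering the speaker role from the `id` field; the helper below is illustrative, not part of the dataset, and assumes the role tag appears at the end of the id:
```python
from datasets import load_dataset

ds = load_dataset("Jzuluaga/uwb_atcc", split="test")

def speaker_role(example_id: str) -> str:
    # Illustrative helper mapping the id tag to the roles listed above.
    if example_id.endswith("PIAT"):
        return "pilot+atco"
    if example_id.endswith("_PI"):
        return "pilot"
    if example_id.endswith("_AT"):
        return "atco"
    return "unknown"

sample = ds[0]
print(sample["id"], "->", speaker_role(sample["id"]))
```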
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`. Already adapted/fine-tuned models are available here --> [XLS-R-300m](https://huggingface.co/Jzuluaga/wav2vec2-large-960h-lv60-self-en-atc-atcosim).
### Languages and other details
The text and the recordings are in English. The authors took advantage of the fact that one of their industrial partners develops complex IT solutions for several ATC authorities and airports and, as such, has access to the ATC communication recordings collected in the Czech airspace. This partner was able to secure the following data:
- Ground control—communication before takeoff and after landing—19.2 h of data.
- Tower control—communication during takeoff, landing and landing standby—22.5 h.
- Approach control—communication during landing approach—25.5 h.
- Area control—communication during overflights and cruises—71.3 h.
(Not all data is released. Check their website [here](https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0001-CCA1-0))
## Dataset Structure
### Data Fields
- `id (string)`: a string of recording identifier for each example, corresponding to its.
- `audio (audio)`: audio data for the given ID
- `text (string)`: transcript of the file already normalized. Follow these repositories for more details [w2v2-air-traffic](https://github.com/idiap/w2v2-air-traffic) and [bert-text-diarization-atc](https://github.com/idiap/bert-text-diarization-atc)
- `segment_start_time (float32)`: segment start time (normally 0)
- `segment_end_time (float32)`: segment end time
- `duration (float32)`: duration of the recording, computed as segment_end_time - segment_start_time
## Additional Information
### Licensing Information
The licensing status of the dataset hinges on the legal status of the [UWB-ATCC corpus](https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0001-CCA1-0) creators.
They used [Creative Commons - Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) licensing.
### Citation Information
Contributors who prepared, processed, normalized and uploaded the dataset in HuggingFace:
```
@article{zuluaga2022how,
title={How Does Pre-trained Wav2Vec2. 0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Prasad, Amrutha and Nigmatulina, Iuliia and Sarfjoo, Saeed and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
@article{zuluaga2022bertraffic,
title={BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Sarfjoo, Seyyed Saeed and Prasad, Amrutha and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
@article{zuluaga2022atco2,
title={ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Vesel{\`y}, Karel and Sz{\"o}ke, Igor and Motlicek, Petr and others},
journal={arXiv preprint arXiv:2211.04054},
year={2022}
}
```
Authors of the dataset:
```
@article{vsmidl2019air,
title={Air traffic control communication (ATCC) speech corpora and their use for ASR and TTS development},
author={{\v{S}}m{\'\i}dl, Lubo{\v{s}} and {\v{S}}vec, Jan and Tihelka, Daniel and Matou{\v{s}}ek, Jind{\v{r}}ich and Romportl, Jan and Ircing, Pavel},
journal={Language Resources and Evaluation},
volume={53},
number={3},
pages={449--464},
year={2019},
publisher={Springer}
}
```
|
kmfoda/booksum | kmfoda | 2022-11-30T12:03:43Z | 1,407 | 59 | [
"license:bsd-3-clause",
"size_categories:10K<n<100K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2105.08209",
"region:us"
] | [] | 2022-03-02T23:29:22Z | 1 | ---
license:
- bsd-3-clause
train-eval-index:
- config: kmfoda--booksum
task: summarization
task_id: summarization
splits:
eval_split: test
col_mapping:
chapter: text
summary_text: target
---
# BOOKSUM: A Collection of Datasets for Long-form Narrative Summarization
Authors: [Wojciech Kryściński](https://twitter.com/iam_wkr), [Nazneen Rajani](https://twitter.com/nazneenrajani), [Divyansh Agarwal](https://twitter.com/jigsaw2212), [Caiming Xiong](https://twitter.com/caimingxiong), [Dragomir Radev](http://www.cs.yale.edu/homes/radev/)
## Introduction
The majority of available text summarization datasets include short-form source documents that lack long-range causal and temporal dependencies, and often contain strong layout and stylistic biases.
While relevant, such datasets will offer limited challenges for future generations of text summarization systems.
We address these issues by introducing BookSum, a collection of datasets for long-form narrative summarization.
Our dataset covers source documents from the literature domain, such as novels, plays and stories, and includes highly abstractive, human written summaries on three levels of granularity of increasing difficulty: paragraph-, chapter-, and book-level.
The domain and structure of our dataset poses a unique set of challenges for summarization systems, which include: processing very long documents, non-trivial causal and temporal dependencies, and rich discourse structures.
To facilitate future work, we trained and evaluated multiple extractive and abstractive summarization models as baselines for our dataset.
## Links
- [paper](https://arxiv.org/abs/2105.08209) by SalesForce Research
- [GitHub repo](https://github.com/salesforce/booksum)
<p align="center"><img src="misc/book_sumv4.png"></p>
## Table of Contents
1. [Citation](#citation)
2. [Legal Note](#legal-note)
3. [License](#license)
## Citation
```
@article{kryscinski2021booksum,
title={BookSum: A Collection of Datasets for Long-form Narrative Summarization},
author={Wojciech Kry{\'s}ci{\'n}ski and Nazneen Rajani and Divyansh Agarwal and Caiming Xiong and Dragomir Radev},
year={2021},
eprint={2105.08209},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Legal Note
By downloading or using the resources, including any code or scripts, shared in this code
repository, you hereby agree to the following terms, and your use of the resources is conditioned
on and subject to these terms.
1. You may only use the scripts shared in this code repository for research purposes. You
may not use or allow others to use the scripts for any other purposes and other uses are
expressly prohibited.
2. You will comply with all terms and conditions, and are responsible for obtaining all
rights, related to the services you access and the data you collect.
3. We do not make any representations or warranties whatsoever regarding the sources from
which data is collected. Furthermore, we are not liable for any damage, loss or expense of
any kind arising from or relating to your use of the resources shared in this code
repository or the data collected, regardless of whether such liability is based in tort,
contract or otherwise.
## License
The code is released under the **BSD-3 License** (see `LICENSE.txt` for details). |
Atomi/sem_eval_2013_task_7 | Atomi | 2022-11-17T01:43:44Z | 61 | 1 | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"asag",
"short-answer",
"grading",
"semantic-similarity"
] | [
"text-classification"
] | 2022-11-10T10:58:26Z | 1 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- cc
multilinguality:
- monolingual
pretty_name: semeval-task-7-2013
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- asag
- short-answer
- grading
- semantic-similarity
task_categories:
- text-classification
task_ids:
- natural-language-inference
dataset_info:
features:
- name: split
dtype: string
- name: classification_type
dtype: string
- name: corpus
dtype: string
- name: test_set
dtype: string
- name: question_qtype
dtype: string
- name: question_id
dtype: string
- name: question_module
dtype: string
- name: question_stype
dtype: string
- name: question
dtype: string
- name: reference_answer_quality
dtype: string
- name: reference_answer_id
dtype: string
- name: reference_answer_file_id
dtype: string
- name: reference_answer
dtype: string
- name: student_answer_count
dtype: float64
- name: student_answer_match
dtype: string
- name: student_answer_id
dtype: string
- name: student_answer_label
dtype: string
- name: student_answer
dtype: string
- name: label_5way
dtype: string
splits:
- name: test
num_bytes: 11688998
num_examples: 23656
- name: train
num_bytes: 23544814
num_examples: 47866
download_size: 1488533
dataset_size: 35233812
---
# Dataset Card for SemEval 2013 Task 7 Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This dataset contains responses to questions from two distinct corpora, _BEETLE_ and _SCIENTSBANK_. The _BEETLE_ corpus consists of 56 questions in an electricity and circuits domain, requiring answers of 1-2 sentences, and contains approximately 3000 answers. The _SCIENTSBANK_ corpus consists of 197 questions in 15 different science domains and contains approximately 10000 answers. _BEETLE_ contains up to 6 reference answers of differing quality for each question, while _SCIENTSBANK_ contains only one.
The dataset was originally published as part of an open source competition. It was [introduced by Dzikovska in this paper](https://aclanthology.org/S13-2045.pdf), however it was difficult to find the official version of the data in 2022. It was eventually [found on Kaggle at this link](https://www.kaggle.com/datasets/smiles28/semeval-2013-2-and-3-way) and it is these XML files that are used here.
The XML is essentially preprocessed to combine all separate files into one single dataframe, containing all metadata.
The Kaggle dataset only contains the 2 and 3 way labels for each data point. [An additional Github repository](https://github.com/ashudeep/Student-Response-Analysis) was found which contains the original 5-way labels for the _BEETLE_ subset, and can be joined to the data (explained below).
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
The data is tabular containing 19 columns, which is each piece of information that was contained in the original XML files expanded into dataframe format. The _BEETLE_ corpus contains 56 unique questions and approximately 3000 answers, while _SCIENTSBANK_ contains 197 unique questions and approximately 10,000 answers.
Each question in the _BEETLE_ dataset can contain between 1 and 6 Reference Answers. These answers are of differing quality, and can be either 'MINIMAL', 'GOOD' or 'BEST'. In cases where multiple reference answers are provided, each student answer is joined to each reference answer. ie. for a given question with reference answers `A`, `B` and `C`, and student answers `1`, `2`, `3`, `4`, all responses for this question would be formatted as follows in the dataframe:
| reference_answer | student_answer |
| ---------------- | -------------- |
| A | 1 |
| A | 2 |
| A | 3 |
| A | 4 |
| B | 1 |
| B | 2 |
| B | 3 |
| B | 4 |
| C | 1 |
| C | 2 |
| C | 3 |
| C | 4 |
So, each student answer is joined to each reference answer. This results in _BEETLE_ contributing more rows to the final dataset than _SCIENTSBANK_, because _SCIENTSBANK_ contains only one reference answer per question.
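A toy reproduction of this join in pandas (illustrative values only, not the real schema):
```python
import pandas as pd

refs = pd.DataFrame({"question_id": ["Q1"] * 3, "reference_answer": list("ABC")})
studs = pd.DataFrame({"question_id": ["Q1"] * 4, "student_answer": list("1234")})

# Every student answer is paired with every reference answer for the question.
joined = refs.merge(studs, on="question_id")  # 3 x 4 = 12 rows
print(joined)
```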
### Data Instances
The data is in csv format. A single example from the data looks like the following:
| split | classification_type | corpus | test_set | question_qtype | question_id | question_module | question_stype | question | reference_answer_quality | reference_answer_id | reference_answer_file_id | reference_answer | student_answer_count | student_answer_match | student_answer_id | student_answer_label | student_answer | label_5way |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
|test | 2way | beetle | unseen-answers | Q_EXPLAIN_SPECIFIC | HYBRID_BURNED_OUT_EXPLAIN_Q1 | SwitchesBulbsParallel | PREDICT | Explain your reasoning. | BEST | answer371 | HYBRID_BURNED_OUT_EXPLAIN_Q1_ANS1 | If bulb A burns out, B and C are no longer in a closed path with the battery | 1 | | SwitchesBulbsParallel-HYBRID_BURNED_OUT_EXPLAIN_Q1.sbj15-l2.qa123 | incorrect | because the paths will still be closed | non_domain |
### Data Fields
- 'split': string, the set that the response belongs to, either 'training' or 'test'
- 'classification_type': string, whether the classification was '2way' or '3way'
- 'corpus': string, the corpus the question belongs to, either 'beetle' or 'scientsbank'
- 'test_set': string, the part of the test set it belongs to (if it belongs to one), either 'test-unseen-answers', 'test-unseen-domains' (scientsbank only), 'test-unseen-questions'
- 'question_qtype',: string (beetle only), the type of question
- 'question_id': string, the question id
- 'question_module': string, the question module
- 'question_stype': string (beetle only), unknown meaning
- 'question': string, the question text
- 'reference_answer_quality': string (beetle only), the type of reference answer. Can be 'MINIMAL', 'GOOD' or 'BEST'
- 'reference_answer_id': string, the reference answer id
- 'reference_answer_file_id': string, the reference answer file id
- 'reference_answer': string, the reference answer text
- 'student_answer_count': string, unknown meaning
- 'student_answer_match': string, unknown meaning
- 'student_answer_id': string, the student answer id
- 'student_answer_label': string, the label given to the answer. In 2-way, it is 'CORRECT' or 'INCORRECT'. In 3-way, it is 'CORRECT', 'INCORRECT' or 'CONTRADICTORY'
- 'student_answer': string, the student answer text
- 'label_5way': string (beetle only), contains the original 5-way classification of the student answer. Can be 'CORRECT', 'PARTIALLY_CORRECT_INCOMPLETE', 'CONTRADICTORY', 'IRRELEVANT', 'NON_DOMAIN'
### Data Splits
The data was pre-split at the time of acquisition.
The test set is comprised of unseen answers, unseen questions, and for _SCIENTSBANK_, unseen domains (since there are multiple domains).
## Dataset Creation
### Curation Rationale
The dataset is to be used for fine-tuning and benchmarking automarking models. It is one of the canonical datasets in the ASAG (Automated Short Answer Grading) literature, so it enables us to compare our results to existing work.
### Source Data
The data was sourced [from this Kaggle link](https://www.kaggle.com/datasets/smiles28/semeval-2013-2-and-3-way). It is unknown whether this is the original state of the data or whether it has been preprocessed before this stage, because we were unable to access the original.
[The dataset creation information is located here via Dzikovska](https://aclanthology.org/S13-2045.pdf)
The 5-way labels were accessed from [this public Github repository](https://github.com/ashudeep/Student-Response-Analysis). The required data is contained at:
- Training: https://raw.githubusercontent.com/ashudeep/Student-Response-Analysis/master/semevalFormatProcessing-5way/trainingGold.txt
- Test (Unseen Answer): https://raw.githubusercontent.com/ashudeep/Student-Response-Analysis/master/semevalFormatProcessing-5way/testGold-UA.txt
- Test (Unseen Question): https://raw.githubusercontent.com/ashudeep/Student-Response-Analysis/master/semevalFormatProcessing-5way/testGold-UQ.txt
These labels are joined to the Kaggle data using the answer id. At this stage, we only have the 5-way classifications for the _BEETLE_ subset - for _SCIENTSBANK_ we unfortunately only have the less granular 2 and 3 way classifications.
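A minimal sketch of that join, assuming the gold files contain whitespace-separated `answer_id label` pairs (an assumption about the file layout, not guaranteed) and that the combined Kaggle data is already in a pandas dataframe with the `student_answer_id` column described above:

```python
import pandas as pd

def attach_5way_labels(df: pd.DataFrame, gold_path: str) -> pd.DataFrame:
    """Left-join the original 5-way BEETLE labels onto the combined dataframe.

    Assumes each line of the gold file is "<answer_id> <label>"; adjust the
    parsing if the actual layout differs.
    """
    gold = pd.read_csv(
        gold_path,
        sep=r"\s+",
        names=["student_answer_id", "label_5way"],
    )
    return df.merge(gold, on="student_answer_id", how="left")
```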
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
Annotations have already been retrieved.
#### Who are the annotators?
Annotations have already been retrieved. Annotators came from the _BEETLE_ and _SCIENTSBANK_ corpora.
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
The _BEETLE_ corpus contains multiple reference answers of differing quality for each question, while _SCIENTSBANK_ contains only one. This means when joining each student answer to each reference answer, there are more _BEETLE_ rows generated (because every student answer is duplicated for each reference answer). This can be remedied by filtering to only include 'BEST' _BEETLE_ reference answers, although if multiple BEST answers are provided for a question (which does happen), _BEETLE_ may still be overrepresented.
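A sketch of that filter, assuming the combined data is held in a pandas dataframe with the column names and values documented under Data Fields:

```python
import pandas as pd

def keep_best_beetle_only(df: pd.DataFrame) -> pd.DataFrame:
    """Drop BEETLE rows whose reference answer is not marked 'BEST';
    SCIENTSBANK rows (one reference answer per question) pass through unchanged."""
    is_beetle = df["corpus"] == "beetle"
    return df[~is_beetle | (df["reference_answer_quality"] == "BEST")]
```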
### Other Known Limitations
## Additional Information
[This repository](https://github.com/ashudeep/Student-Response-Analysis) appears to provide preprocessing scripts for this dataset. It also may contain the original 5-way labels, which could be helpful for us if we want to draw our own classification boundaries.
### Dataset Curators
### Licensing Information
### Citation Information
Dzikovska MO, Nielsen R, Brew C, Leacock C, Giampiccolo D, Bentivogli L, Clark P, Dagan I, Dang HT (2013) SemEval-2013 Task 7: The Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge. In: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), Association for Computational Linguistics, Atlanta, Georgia, USA
### Contributions |
LanceaKing/asvspoof2019 | LanceaKing | 2022-11-11T08:41:54Z | 113 | 2 | [
"task_categories:audio-classification",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"source_datasets:extended|vctk",
"language:en",
"license:odc-by",
"size_categories:100K<n<1M",
"arxiv:1911.01601",
"region:us",
"voice-anti-spoofing"
] | [
"audio-classification"
] | 2022-07-20T08:29:40Z | 1 | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
license:
- odc-by
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|vctk
task_categories:
- audio-classification
task_ids: []
pretty_name: asvspoof2019
tags:
- voice-anti-spoofing
---
# Dataset Card for asvspoof2019
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://datashare.ed.ac.uk/handle/10283/3336
- **Repository:** [Needs More Information]
- **Paper:** https://arxiv.org/abs/1911.01601
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This is the database used for the Third Automatic Speaker Verification Spoofing
and Countermeasures Challenge (ASVspoof 2019, http://www.asvspoof.org), organized
in 2019 by Junichi Yamagishi, Massimiliano Todisco, Md Sahidullah, Héctor Delgado,
Xin Wang, Nicholas Evans, Tomi Kinnunen, Kong Aik Lee, Ville Vestman, and
Andreas Nautsch.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
```
{'speaker_id': 'LA_0091',
'audio_file_name': 'LA_T_8529430',
'audio': {'path': 'D:/Users/80304531/.cache/huggingface/datasets/downloads/extracted/8cabb6d5c283b0ed94b2219a8d459fea8e972ce098ef14d8e5a97b181f850502/LA/ASVspoof2019_LA_train/flac/LA_T_8529430.flac',
'array': array([-0.00201416, -0.00234985, -0.0022583 , ..., 0.01309204,
0.01339722, 0.01461792], dtype=float32),
'sampling_rate': 16000},
'system_id': 'A01',
'key': 1}
```
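A minimal sketch of obtaining such an instance with the `datasets` library; the `"LA"` config and `"train"` split names are assumptions inferred from the example above, not verified against this repository's loading script:

```python
from datasets import load_dataset

# Assumed config "LA" (logical access); "PA" would cover the physical-access part.
asvspoof = load_dataset("LanceaKing/asvspoof2019", "LA", split="train")

sample = asvspoof[0]                      # index one example so only this file is decoded
print(sample["speaker_id"], sample["system_id"], sample["key"])
print(sample["audio"]["sampling_rate"])   # 16 kHz audio decoded to a float array
```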
### Data Fields
Logical access (LA):
- `speaker_id`: `LA_****`, a 4-digit speaker ID
- `audio_file_name`: name of the audio file
- `audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- `system_id`: ID of the speech spoofing system (A01 - A19); for bonafide speech this field is left blank ('-')
- `key`: 'bonafide' for genuine speech, or 'spoof' for spoofed speech
Physical access (PA):
- `speaker_id`: `PA_****`, a 4-digit speaker ID
- `audio_file_name`: name of the audio file
- `audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- `environment_id`: a triplet (S, R, D_s), each element of which takes one letter in the set {a, b, c} as a categorical value, defined as
| | a | b | c |
| -------------------------------- | ------ | ------- | -------- |
| S: Room size (square meters) | 2-5 | 5-10 | 10-20 |
| R: T60 (ms) | 50-200 | 200-600 | 600-1000 |
| D_s: Talker-to-ASV distance (cm) | 10-50 | 50-100 | 100-150 |
- `attack_id`: a pair (D_a, Q), each element of which takes one letter in the set {A, B, C} as a categorical value, defined as
| | A | B | C |
| ----------------------------------- | ------- | ------ | ----- |
| D_a: Attacker-to-talker distance (cm) | 10-50 | 50-100 | > 100 |
| Q: Replay device quality | perfect | high | low |
for bonafide speech, `attack_id` is left blank ('-')
- `key`: 'bonafide' for genuine speech, or 'spoof' for spoofed speech
### Data Splits
| | Training set | Development set | Evaluation set |
| -------- | ------------ | --------------- | -------------- |
| Bonafide | 2580 | 2548 | 7355 |
| Spoof | 22800 | 22296 | 63882 |
| Total | 25380 | 24844 | 71237 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
This ASVspoof 2019 dataset is made available under the Open Data Commons Attribution License: http://opendatacommons.org/licenses/by/1.0/
### Citation Information
```
@InProceedings{Todisco2019,
Title = {{ASV}spoof 2019: {F}uture {H}orizons in {S}poofed and {F}ake {A}udio {D}etection},
Author = {Todisco, Massimiliano and
Wang, Xin and
Sahidullah, Md and
               Delgado, Héctor and
Nautsch, Andreas and
Yamagishi, Junichi and
Evans, Nicholas and
Kinnunen, Tomi and
Lee, Kong Aik},
booktitle = {Proc. of Interspeech 2019},
Year = {2019}
}
```
|
wikimedia/wit_base | wikimedia | 2022-11-04T15:09:33Z | 3,618 | 60 | [
"task_categories:image-to-text",
"task_categories:text-retrieval",
"task_ids:image-captioning",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"source_datasets:extended|wikipedia",
"language:af",
"language:an",
"language:ar",
"language:arz",
"language:ast",
"language:az",
"language:azb",
"language:ba",
"language:bar",
"language:be",
"language:bg",
"language:bn",
"language:br",
"language:bs",
"language:ca",
"language:ce",
"language:ceb",
"language:ckb",
"language:cs",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fil",
"language:fr",
"language:fy",
"language:ga",
"language:gl",
"language:hi",
"language:hr",
"language:hsb",
"language:ht",
"language:hu",
"language:hy",
"language:ia",
"language:id",
"language:io",
"language:is",
"language:it",
"language:iw",
"language:ja",
"language:jv",
"language:ka",
"language:kk",
"language:kn",
"language:ko",
"language:la",
"language:lah",
"language:lb",
"language:lmo",
"language:lt",
"language:lv",
"language:mg",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:ms",
"language:my",
"language:nan",
"language:nds",
"language:ne",
"language:nl",
"language:nn",
"language:no",
"language:nv",
"language:oc",
"language:pa",
"language:pl",
"language:pt",
"language:qu",
"language:ro",
"language:ru",
"language:sco",
"language:si",
"language:sk",
"language:sl",
"language:sq",
"language:sr",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:tg",
"language:th",
"language:tr",
"language:tt",
"language:uk",
"language:ur",
"language:uz",
"language:vec",
"language:vi",
"language:vo",
"language:war",
"language:xmf",
"language:yue",
"language:zh",
"license:cc-by-sa-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2103.01913",
"arxiv:1512.03385",
"arxiv:1905.00641",
"region:us",
"text-image-retrieval"
] | [
"image-to-text",
"text-retrieval"
] | 2022-05-02T16:08:58Z | 1 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- af
- an
- ar
- arz
- ast
- az
- azb
- ba
- bar
- be
- bg
- bn
- br
- bs
- ca
- ce
- ceb
- ckb
- cs
- cv
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gl
- hi
- hr
- hsb
- ht
- hu
- hy
- ia
- id
- io
- is
- it
- iw
- ja
- jv
- ka
- kk
- kn
- ko
- la
- lah
- lb
- lmo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- nan
- nds
- ne
- nl
- nn
- 'no'
- nv
- oc
- pa
- pl
- pt
- qu
- ro
- ru
- sco
- si
- sk
- sl
- sq
- sr
- sv
- sw
- ta
- te
- tg
- th
- tr
- tt
- uk
- ur
- uz
- vec
- vi
- vo
- war
- xmf
- yue
- zh
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- 1M<n<10M
source_datasets:
- original
- extended|wikipedia
task_categories:
- image-to-text
- text-retrieval
task_ids:
- image-captioning
paperswithcode_id: wit
pretty_name: Wikipedia-based Image Text
language_bcp47:
- af
- an
- ar
- arz
- ast
- az
- azb
- ba
- bar
- be
- be-tarask
- bg
- bn
- br
- bs
- ca
- ce
- ceb
- ckb
- cs
- cv
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gl
- hi
- hr
- hsb
- ht
- hu
- hy
- ia
- id
- io
- is
- it
- iw
- ja
- jv
- ka
- kk
- kn
- ko
- la
- lah
- lb
- lmo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- nan
- nds
- ne
- nl
- nn
- 'no'
- nv
- oc
- pa
- pl
- pt
- qu
- ro
- ru
- sco
- si
- sk
- sl
- sq
- sr
- sr-Latn
- sv
- sw
- ta
- te
- tg
- th
- tr
- tt
- uk
- ur
- uz
- vec
- vi
- vo
- war
- xmf
- yue
- zh
- zh-TW
tags:
- text-image-retrieval
---
# Dataset Card for WIT
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [WIT homepage](https://github.com/google-research-datasets/wit)
- **Paper:** [WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
](https://arxiv.org/abs/2103.01913)
- **Leaderboard:** [WIT leaderboard](https://paperswithcode.com/sota/text-image-retrieval-on-wit) and [WIT Kaggle competition](https://www.kaggle.com/competitions/wikipedia-image-caption/leaderboard)
- **Point of Contact:** [Miriam Redi](mailto:[email protected])
### Dataset Summary
Wikimedia's version of the Wikipedia-based Image Text (WIT) Dataset, a large multimodal multilingual dataset.
From the [official blog post](https://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/):
> The core training data is taken from the Wikipedia Image-Text (WIT) Dataset, a large curated set of more than 37 million image-text associations extracted from Wikipedia articles in 108 languages that was recently released by Google Research.
>
> The WIT dataset offers extremely valuable data about the pieces of text associated with Wikipedia images. However, due to licensing and data volume issues, the Google dataset only provides the image name and corresponding URL for download and not the raw image files.
>
> Getting easy access to the image files is crucial for participants to successfully develop competitive models. Therefore, today, the Wikimedia Research team is releasing its first large image dataset. It contains more than six million image files from Wikipedia articles in 100+ languages, which correspond to almost [1] all captioned images in the WIT dataset. Image files are provided at a 300-px resolution, a size that is suitable for most of the learning frameworks used to classify and analyze images.
> [1] We are publishing all images having a non-null “reference description” in the WIT dataset. For privacy reasons, we are not publishing images where a person is the primary subject, i.e., where a person’s face covers more than 10% of the image surface. To identify faces and their bounding boxes, we use the RetinaFace detector. In addition, to avoid the inclusion of inappropriate images or images that violate copyright constraints, we have removed all images that are candidate for deletion on Commons from the dataset.
**Note**: Compared to [Google's version](https://huggingface.co/datasets/google/wit), which has contents of one Wikipedia page per data sample, this version groups contents of all Wikipedia pages available in different languages for the image in one single data sample to avoid duplication of image bytes.
### Supported Tasks and Leaderboards
- `image-captioning`: This dataset can be used to train a model for image captioning where the goal is to predict a caption given the image.
- `text-retrieval`: The goal in this task is to build a model that retrieves the text (`caption_title_and_reference_description`) closest to an image. The leaderboard for this task can be found [here](https://paperswithcode.com/sota/text-image-retrieval-on-wit). This task also has a competition on [Kaggle](https://www.kaggle.com/c/wikipedia-image-caption).
In these tasks, any combination of the `caption_reference_description`, `caption_attribution_description` and `caption_alt_text_description` fields can be used as the input text/caption.
### Languages
The dataset contains examples from all Wikipedia languages.
## Dataset Structure
### Data Instances
Each instance is an image, its representation in bytes, a pre-computed embedding, and the set of captions attached to the image in Wikipedia.
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=300x225 at 0x7F88F3876358>,
'image_url': 'https://upload.wikimedia.org/wikipedia/commons/8/8b/Scolopendra_gigantea.jpg',
'embedding': [1.4784087, 2.8710432, 0.0, 0.51603067, ..., 10.266883, 0.51142216, 0.0, 2.3464653],
'metadata_url': 'http://commons.wikimedia.org/wiki/File:Scolopendra_gigantea.jpg',
'original_height': 3000,
'original_width': 4000,
'mime_type': 'image/jpeg',
'caption_attribution_description': 'English: Puerto Rican Giant Centipede, Scolopendra gigantea; Vieques, Puerto Rico Slovenčina: Stonožka obrovská, Scolopendra gigantea; Vieques, Portoriko',
'wit_features': {
'language': ['ro', 'vi', 'sk', ..., 'nl', 'th', 'lv'],
'page_url': ['https://ro.wikipedia.org/wiki/Scolopendra_gigantea', 'https://vi.wikipedia.org/wiki/Scolopendra_gigantea', 'https://sk.wikipedia.org/wiki/Scolopendra_gigantea', ..., 'https://nl.wikipedia.org/wiki/Scolopendra_gigantea', 'https://th.wikipedia.org/wiki/%E0%B8%95%E0%B8%B0%E0%B8%82%E0%B8%B2%E0%B8%9A%E0%B8%A2%E0%B8%B1%E0%B8%81%E0%B8%A9%E0%B9%8C%E0%B8%82%E0%B8%B2%E0%B9%80%E0%B8%AB%E0%B8%A5%E0%B8%B7%E0%B8%AD%E0%B8%87%E0%B9%80%E0%B8%9B%E0%B8%A3%E0%B8%B9', 'https://lv.wikipedia.org/wiki/Skolopendru_dzimta'],
'attribution_passes_lang_id': [True, True, True, ..., True, True, True],
'caption_alt_text_description': [None, None, None, ..., 'Scolopendra gigantea', None, 'Milzu skolopendra (Scolopendra gigantea)'],
'caption_reference_description': [None, None, None, ..., None, None, 'Milzu skolopendra (Scolopendra gigantea)'],
'caption_title_and_reference_description': [None, 'Scolopendra gigantea [SEP] ', None, ..., 'Scolopendra gigantea [SEP] ', None, 'Skolopendru dzimta [SEP] Milzu skolopendra (Scolopendra gigantea)'],
'context_page_description': ['Scolopendra gigantea este un miriapod din clasa Chilopoda, fiind cel mai mare reprezentant al genului Scolopendra. Adultul poate atinge o lungime de 26 cm, uneori depășind 30 cm. Această specie habitează în regiunile de nord și de vest a Americii de Sud, pe insulele Trinidad, insulele Virgine, Jamaica Hispaniola ș.a. Localnicii denumesc scolopendra chilopodul gigant galben și chilopodul gigant amazonian.', 'Scolopendra gigantea là đại diện lớn nhất của chi Scolopendra nói riêng và cả lớp rết nói chung, thường đạt độ dài 26 cm và có thể vượt quá 30 cm. Sinh sống ở khu vực phía bắc và tây của Nam Mỹ và các đảo Trinidad, Puerto Rico, Saint Thomas, U.S. Virgin Islands, Jamaica, và Hispaniola.', 'Scolopendra gigantea, starší slovenský nazov: štípavica veľká, je živočích z rodu Scolopendra, s veľkosťou do 30 cm.', ..., 'Scolopendra gigantea is een tijgerduizendpoot uit Zuid-Amerika. De soort jaagt onder andere op grote geleedpotigen, amfibieën, reptielen en kleine zoogdieren. Het is voor zover bekend de grootste niet uitgestorven duizendpoot ter wereld.', 'ตะขาบยักษ์ขาเหลืองเปรู หรือ ตะขาบยักษ์อเมซอน เป็นตะขาบชนิดที่มีขนาดใหญ่ที่สุดในสกุล Scolopendra โดยปกติเมื่อโตเต็มที่จะยาว 26 เซนติเมตร แต่บางครั้งก็สามารถโตได้ถึง 30 เซนติเมตร ตะขาบชนิดนี้อาศัยอยู่ทางแถบเหนือและตะวันตกของทวีปอเมริกาใต้ และตามเกาะแก่งของประเทศตรินิแดดและจาไมกา เป็นสัตว์กินเนื้อ โดยกินจิ้งจก, กบ, นก, หนู และแม้แต่ค้างคาวเป็นอาหาร และขึ้นชื่อในเรื่องความดุร้าย', 'Skolpendru dzimta pieder pie simtkāju kārtas. Ap 400 dzimtas sugas sastopamas visā pasaulē, īpaši subtropu un tropu apgabalos. Mitinās augsnē, nobirušās lapās, plaisās, spraugās.'],
'context_section_description': [None, 'Scolopendra gigantea (còn được gọi là Rết chân vàng khổng lồ Peru và Rết khổng lồ Amazon) là đại diện lớn nhất của chi Scolopendra nói riêng và cả lớp rết nói chung, thường đạt độ dài 26\xa0cm (10\xa0in) và có thể vượt quá 30\xa0cm (12\xa0in). Sinh sống ở khu vực phía bắc và tây của Nam Mỹ và các đảo Trinidad, Puerto Rico, Saint Thomas, U.S. Virgin Islands, Jamaica, và Hispaniola.', None, ..., 'Scolopendra gigantea is een tijgerduizendpoot uit Zuid-Amerika. De soort jaagt onder andere op grote geleedpotigen, amfibieën, reptielen en kleine zoogdieren. Het is voor zover bekend de grootste niet uitgestorven duizendpoot ter wereld.', None, 'Skolpendru dzimta (Scolopendridae) pieder pie simtkāju kārtas. Ap 400 dzimtas sugas sastopamas visā pasaulē, īpaši subtropu un tropu apgabalos. Mitinās augsnē, nobirušās lapās, plaisās, spraugās.'],
'hierarchical_section_title': ['Scolopendra gigantea', 'Scolopendra gigantea', 'Scolopendra gigantea', ..., 'Scolopendra gigantea', 'ตะขาบยักษ์ขาเหลืองเปรู', 'Skolopendru dzimta'],
'is_main_image': [True, True, True, ..., True, True, True],
'page_title': ['Scolopendra gigantea', 'Scolopendra gigantea', 'Scolopendra gigantea', ..., 'Scolopendra gigantea', 'ตะขาบยักษ์ขาเหลืองเปรู', 'Skolopendru dzimta'],
'section_title': [None, None, None, ..., None, None, None]
}
}
```
**Note**: The dataset is stored in Parquet for better performance. This dataset was generated from the original files using [this script](wit_base/blob/main/scripts/wit.py). Additionally, 120 examples from the original files have incorrectly formatted one or more of the following fields: `original_height`, `original_width`, `mime_type` and `caption_attribution_description`. The fixed versions of these examples that were used in the generation script can be found [here](wit_base/blob/main/scripts/corrected_examples.py).
### Data Fields
- `image`: A `PIL.Image.Image` object containing the image resized to a width of 300-px while preserving its aspect ratio. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `image_url`: URL to wikipedia image
- `embedding`: Precomputed image embedding. Each image is described with a 2048-dimensional signature extracted from the second-to-last layer of a [ResNet-50](https://arxiv.org/abs/1512.03385) neural network trained with [Imagenet](https://www.image-net.org/) data. These embeddings contain rich information about the image content and layout, in a compact form
- `metadata_url`: URL to wikimedia page containing the image and the metadata
- `original_height`: Original image height before resizing
- `original_width`: Original image width before resizing
- `mime_type`: Mime type associated to the image
- `caption_attribution_description`: This is the text found on the Wikimedia page of the image. This text is common to all occurrences of that image across all Wikipedias.
- `wit_features`: Sequence of captions for the image with language, page URL, information about the page, caption text, etc.
- `language`: Language code depicting wikipedia language of the page
- `page_url`: URL to wikipedia page
- `attribution_passes_lang_id`: Whether the `language` field matches the attribution language (written in the prefix of the attribution description)
- `caption_alt_text_description`: This is the “alt” text associated with the image. While not visible in general, it is commonly used for accessibility / screen readers
- `caption_reference_description`: This is the caption that is visible on the wikipedia page directly below the image.
- `caption_title_and_reference_description`: Concatenation of `page_title` and `caption_reference_description`.
- `context_page_description`: Corresponds to the short description of the page. It provides a concise explanation of the scope of the page.
- `context_section_description`: Text within the image's section
- `hierarchical_section_title`: Hierarchical section's title
- `is_main_image`: Flag determining if the image is the first image of the page. Usually displayed on the top-right part of the page when using web browsers.
- `page_changed_recently`: [More Information Needed]
- `page_title`: Wikipedia page's title
- `section_title`: Section's title
<p align='center'>
<img width='75%' src='https://production-media.paperswithcode.com/datasets/Screenshot_2021-03-04_at_14.26.02.png' alt="Half Dome" /> </br>
<b>Figure: WIT annotation example. </b>
</p>
Details on the field content can be found directly in the [paper, figure 5 and table 12.](https://arxiv.org/abs/2103.01913)
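A minimal sketch of reading one example with the `datasets` library; streaming is used here only to avoid downloading the full image set, and the field names are those listed above:

```python
from datasets import load_dataset

# Stream so that examples are fetched lazily instead of downloading everything.
wit = load_dataset("wikimedia/wit_base", split="train", streaming=True)

example = next(iter(wit))
print(example["image_url"])
print(example["original_width"], "x", example["original_height"])
print(len(example["embedding"]))             # 2048-dimensional ResNet-50 signature
print(example["wit_features"]["language"])   # captions grouped per Wikipedia language
```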
### Data Splits
All data is held in a single `train` split, with a total of 6,477,255 examples.
## Dataset Creation
### Curation Rationale
From the [official blog post](https://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/):
> The WIT dataset offers extremely valuable data about the pieces of text associated with Wikipedia images.
> Getting easy access to the image files is crucial for participants to successfully develop competitive models.
> With this large release of visual data, we aim to help the competition participants—as well as researchers and practitioners who are interested in working with Wikipedia images—find and download the large number of image files associated with the challenge, in a compact form.
### Source Data
#### Initial Data Collection and Normalization
From the [paper, section 3.1](https://arxiv.org/abs/2103.01913):
> We started with all Wikipedia content pages (i.e., ignoring other
pages that have discussions, comments and such). These number about ~124M pages across 279 languages.
#### Who are the source language producers?
Text was extracted from Wikipedia.
### Annotations
#### Annotation process
WIT was constructed using an automatic process; however, it was human-validated.
From the [paper, section 3.7](https://arxiv.org/abs/2103.01913):
> To further verify the quality of the WIT dataset we performed a
study using (crowd-sourced) human annotators. As seen in Fig. 3,
we asked raters to answer 3 questions. Given an image and the page
title, raters first evaluate the quality of the attribution description
and reference description in the first two questions (order randomized). The third question understands the contextual quality of these
text descriptions given the page description and caption. Each response is on a 3-point scale: "Yes" if the text perfectly describes
the image, "Maybe" if it is sufficiently explanatory and "No" if it is
irrelevant or the image is inappropriate.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
From the [official blog post](https://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/#FN1):
> For privacy reasons, we are not publishing images where a person is the primary subject, i.e., where a person’s face covers more than 10% of the image surface. To identify faces and their bounding boxes, we use the [RetinaFace](https://arxiv.org/abs/1905.00641) detector. In addition, to avoid the inclusion of inappropriate images or images that violate copyright constraints, we have removed all images that are [candidate for deletion](https://commons.wikimedia.org/wiki/Commons:Deletion_requests) on Commons from the dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
From the [paper, section 3.4](https://arxiv.org/abs/2103.01913):
> Lastly we found that certain image-text pairs occurred very
frequently. These were often generic images that did not have
much to do with the main article page. Common examples
included flags, logos, maps, insignia and such. To prevent
biasing the data, we heavily under-sampled all such images
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Miriam Redi, Fabian Kaelin and Tiziano Piccardi.
### Licensing Information
[CC BY-SA 4.0 international license](https://creativecommons.org/licenses/by-sa/4.0/)
### Citation Information
```bibtex
@article{srinivasan2021wit,
title={WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning},
author={Srinivasan, Krishna and Raman, Karthik and Chen, Jiecao and Bendersky, Michael and Najork, Marc},
journal={arXiv preprint arXiv:2103.01913},
year={2021}
}
```
### Contributions
Thanks to [@nateraw](https://github.com/nateraw), [yjernite](https://github.com/yjernite) and [mariosasko](https://github.com/mariosasko) for adding this dataset. |
gfissore/arxiv-abstracts-2021 | gfissore | 2022-10-27T17:08:00Z | 716 | 34 | [
"task_categories:summarization",
"task_categories:text-retrieval",
"task_categories:text2text-generation",
"task_ids:explanation-generation",
"task_ids:text-simplification",
"task_ids:document-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"language:en",
"license:cc0-1.0",
"size_categories:1M<n<10M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1905.00075",
"region:us"
] | [
"summarization",
"text-retrieval",
"text2text-generation"
] | 2022-03-02T23:29:22Z | 2 | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- en
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: arxiv-abstracts-2021
size_categories:
- 1M<n<10M
source_datasets: []
task_categories:
- summarization
- text-retrieval
- text2text-generation
task_ids:
- explanation-generation
- text-simplification
- document-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
---
# Dataset Card for arxiv-abstracts-2021
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Clement et al., 2019, On the Use of ArXiv as a Dataset, https://arxiv.org/abs/1905.00075](https://arxiv.org/abs/1905.00075)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Giancarlo Fissore](mailto:[email protected])
### Dataset Summary
A dataset of metadata including title and abstract for all arXiv articles up to the end of 2021 (~2 million papers).
Possible applications include trend analysis, paper recommender engines, category prediction, knowledge graph construction and semantic search interfaces.
In contrast to [arxiv_dataset](https://huggingface.co/datasets/arxiv_dataset), this dataset doesn't include papers submitted to arXiv after 2021 and it doesn't require any external download.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
Here's an example instance:
```
{
"id": "1706.03762",
"submitter": "Ashish Vaswani",
"authors": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion\n Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin",
"title": "Attention Is All You Need",
"comments": "15 pages, 5 figures",
"journal-ref": null,
"doi": null,
"abstract": " The dominant sequence transduction models are based on complex recurrent or\nconvolutional neural
networks in an encoder-decoder configuration. The best\nperforming models also connect the encoder and decoder through
an attention\nmechanism. We propose a new simple network architecture, the Transformer, based\nsolely on attention
mechanisms, dispensing with recurrence and convolutions\nentirely. Experiments on two machine translation tasks show
these models to be\nsuperior in quality while being more parallelizable and requiring significantly\nless time to
train. Our model achieves 28.4 BLEU on the WMT 2014\nEnglish-to-German translation task, improving over the existing
best results,\nincluding ensembles by over 2 BLEU. On the WMT 2014 English-to-French\ntranslation task, our model
establishes a new single-model state-of-the-art\nBLEU score of 41.8 after training for 3.5 days on eight GPUs, a small
fraction\nof the training costs of the best models from the literature. We show that the\nTransformer generalizes well
to other tasks by applying it successfully to\nEnglish constituency parsing both with large and limited training
data.\n",
"report-no": null,
"categories": [
"cs.CL cs.LG"
],
"versions": [
"v1",
"v2",
"v3",
"v4",
"v5"
]
}
```
### Data Fields
These fields are detailed on the [arXiv](https://arxiv.org/help/prep):
- `id`: ArXiv ID (can be used to access the paper)
- `submitter`: Who submitted the paper
- `authors`: Authors of the paper
- `title`: Title of the paper
- `comments`: Additional info, such as number of pages and figures
- `journal-ref`: Information about the journal the paper was published in
- `doi`: [Digital Object Identifier](https://www.doi.org)
- `report-no`: Report Number
- `abstract`: The abstract of the paper
- `categories`: Categories / tags in the ArXiv system
- `versions`: List of version tags (e.g. `v1`, `v2`) recorded for the paper
### Data Splits
No splits
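Since everything sits in a single `train` split, a minimal usage sketch looks like the following; streaming is optional, and the category filter is only an illustration of working with the fields above:

```python
from datasets import load_dataset

arxiv = load_dataset("gfissore/arxiv-abstracts-2021", split="train", streaming=True)

# Example: keep only papers whose category string mentions cs.CL.
cl_papers = (ex for ex in arxiv if any("cs.CL" in c for c in ex["categories"]))
print(next(cl_papers)["title"])
```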
## Dataset Creation
### Curation Rationale
For about 30 years, ArXiv has served the public and research communities by providing open access to scholarly articles, from the vast branches of physics to the many subdisciplines of computer science to everything in between, including math, statistics, electrical engineering, quantitative biology, and economics. This rich corpus of information offers significant, but sometimes overwhelming, depth. In these times of unique global challenges, efficient extraction of insights from data is essential. The `arxiv-abstracts-2021` dataset aims at making the arXiv more easily accessible for machine learning applications, by providing important metadata (including title and abstract) for ~2 million papers.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
The language producers are members of the scientific community at large, but not necessarily affiliated to any institution.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
The full names of the papers' authors are included in the dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The original data is maintained by [ArXiv](https://arxiv.org/)
### Licensing Information
The data is under the [Creative Commons CC0 1.0 Universal Public Domain Dedication](https://creativecommons.org/publicdomain/zero/1.0/)
### Citation Information
```
@misc{clement2019arxiv,
title={On the Use of ArXiv as a Dataset},
author={Colin B. Clement and Matthew Bierbaum and Kevin P. O'Keeffe and Alexander A. Alemi},
year={2019},
eprint={1905.00075},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
``` |
huggingartists/taylor-swift | huggingartists | 2022-10-25T09:46:05Z | 40 | 3 | [
"language:en",
"size_categories:n<1K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us",
"huggingartists",
"lyrics"
] | [] | 2022-03-02T23:29:22Z | 1 | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/taylor-swift"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 1.469581 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/3c1f124fcbbc2857a95e513fb34cc5a8.400x400x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/taylor-swift">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Taylor Swift</div>
<a href="https://genius.com/artists/taylor-swift">
<div style="text-align: center; font-size: 14px;">@taylor-swift</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/taylor-swift).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/taylor-swift")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|762| -| -|
The 'train' split can easily be divided into 'train', 'validation' and 'test' subsets with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/taylor-swift")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk}
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
carblacac/twitter-sentiment-analysis | carblacac | 2022-10-25T05:42:06Z | 291 | 22 | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"region:us"
] | [
"text-classification"
] | 2022-06-05T15:25:44Z | 1 | ---
pretty_name: "TSATC: Twitter Sentiment Analysis Training Corpus"
annotations_creators:
- expert-generated
language_creators:
- other
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- feeling-classification
paperswithcode_id: other
configs:
- None
---
# Dataset Card for TSATC: Twitter Sentiment Analysis Training Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [TSATC](https://github.com/cblancac/SentimentAnalysisBert/blob/main/data)
- **Repository:** [TSATC](https://github.com/cblancac/SentimentAnalysisBert/blob/main/data)
- **Paper:** [TSATC: Twitter Sentiment Analysis Training Corpus](http://thinknook.com/twitter-sentiment-analysis-training-corpus-dataset-2012-09-22/)
- **Point of Contact:** [Carlos Blanco]([email protected])
### Dataset Summary
TSATC: Twitter Sentiment Analysis Training Corpus
The original Twitter Sentiment Analysis Dataset contains 1,578,627 classified tweets; each row is marked as 1 for positive sentiment and 0 for negative sentiment. It can be downloaded from http://thinknook.com/wp-content/uploads/2012/09/Sentiment-Analysis-Dataset.zip.
The dataset is based on data from the following two sources:
University of Michigan Sentiment Analysis competition on Kaggle
Twitter Sentiment Corpus by Niek Sanders
This dataset has been transformed by randomly selecting a subset of the tweets, applying a cleaning process, and dividing them between the test and train subsets, keeping a balance between the number of positive and negative tweets within each of these subsets. These two files can be found at https://github.com/cblancac/SentimentAnalysisBert/blob/main/data.
Finally, the train subset has been divided into two smaller subsets, train (80%) and validation (20%). The final dataset consists of these two new subsets plus the previous test subset.
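A minimal loading sketch; the split names follow the Data Splits table below, and if the loader exposes a named config (the YAML header lists `None`), it may need to be passed explicitly, e.g. `load_dataset("carblacac/twitter-sentiment-analysis", "None")`:

```python
from datasets import load_dataset

ds = load_dataset("carblacac/twitter-sentiment-analysis")

print(ds)                        # expected splits: train / validation / test
example = ds["train"][0]
print(example["text"], "->", example["feeling"])   # 1 = positive, 0 = negative
```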
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
Below are two examples from the dataset:
| | Text | Feeling |
| :-- | :---------------------------- | :------ |
| (1) | blaaah. I don't feel good aagain. | 0 |
| (2) | My birthday is coming June 3. | 1 |
### Data Fields
In the final dataset, all files are in the JSON format with two columns:
| Column Name | Data |
| :------------ | :-------------------------- |
| text | A sentence (or tweet) |
| feeling | The feeling of the sentence |
Each feeling has two possible values: `0` indicates the sentence has a negative sentiment, while `1` indicates a positive feeling.
### Data Splits
The number of examples and the proportion sentiments are shown below:
| Data | Train | Validation | Test |
| :------------------ | ------: | ------------: | ----: |
| Size                | 119,988 | 29,997        | 61,998 |
| Labeled positive    | 60,019  | 14,947        | 31,029 |
| Labeled negative    | 59,969  | 15,050        | 30,969 |
## Dataset Creation
### Curation Rationale
Existing paraphrase identification datasets lack sentence pairs that have high lexical overlap without being paraphrases. Models trained on such data fail to distinguish pairs like *flights from New York to Florida* and *flights from Florida to New York*.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Mentioned above.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Citation Information
```
@InProceedings{paws2019naacl,
title = {{TSATC: Twitter Sentiment Analysis Training Corpus}},
author = {Ibrahim Naji},
booktitle = {thinknook},
year = {2012}
}
```
### Contributions
Thanks to myself [@carblacac](https://github.com/cblancac/) for adding this transformed dataset from the original one. |
qanastek/ELRC-Medical-V2 | qanastek | 2022-10-24T17:15:17Z | 3,572 | 15 | [
"task_categories:translation",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:extended",
"language:en",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:es",
"language:et",
"language:fi",
"language:fr",
"language:ga",
"language:hr",
"language:hu",
"language:it",
"language:lt",
"language:lv",
"language:mt",
"language:nl",
"language:pl",
"language:pt",
"language:ro",
"language:sk",
"language:sl",
"language:sv",
"size_categories:100K<n<1M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"translation"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- found
language:
- en
- bg
- cs
- da
- de
- el
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
multilinguality:
- multilingual
pretty_name: ELRC-Medical-V2
size_categories:
- 100K<n<1M
source_datasets:
- extended
task_categories:
- translation
task_ids:
- translation
---
# ELRC-Medical-V2 : European parallel corpus for healthcare machine translation
## Table of Contents
- [Dataset Card for [Needs More Information]](#dataset-card-for-needs-more-information)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://live.european-language-grid.eu/catalogue/project/2209
- **Repository:** https://github.com/qanastek/ELRC-Medical-V2/
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Yanis Labrak](mailto:[email protected])
### Dataset Summary
`ELRC-Medical-V2` is a parallel corpus for neural machine translation funded by the [European Commission](http://www.lr-coordination.eu/) and coordinated by the [German Research Center for Artificial Intelligence](https://www.dfki.de/web).
### Supported Tasks and Leaderboards
`translation`: The dataset can be used to train a model for translation.
### Languages
In our case, the corpus consists of pairs of source and target sentences for 23 different languages of the European Union (EU), with English (EN) as the source language in every case.
**List of languages :** `Bulgarian (bg)`,`Czech (cs)`,`Danish (da)`,`German (de)`,`Greek (el)`,`Spanish (es)`,`Estonian (et)`,`Finnish (fi)`,`French (fr)`,`Irish (ga)`,`Croatian (hr)`,`Hungarian (hu)`,`Italian (it)`,`Lithuanian (lt)`,`Latvian (lv)`,`Maltese (mt)`,`Dutch (nl)`,`Polish (pl)`,`Portuguese (pt)`,`Romanian (ro)`,`Slovak (sk)`,`Slovenian (sl)`,`Swedish (sv)`.
## Load the dataset with HuggingFace
```python
from datasets import load_dataset
NAME = "qanastek/ELRC-Medical-V2"
dataset = load_dataset(NAME, use_auth_token=True)
print(dataset)
dataset_train = load_dataset(NAME, "en-es", split='train[:90%]')  # first 90% for training
dataset_test = load_dataset(NAME, "en-es", split='train[90%:]')  # remaining 10% for testing
print(dataset_train)
print(dataset_train[0])
print(dataset_test)
```
## Dataset Structure
### Data Instances
```plain
id,lang,source_text,target_text
1,en-bg,"TOC \o ""1-3"" \h \z \u Introduction 3","TOC \o ""1-3"" \h \z \u Въведение 3"
2,en-bg,The international humanitarian law and its principles are often not respected.,Международното хуманитарно право и неговите принципи често не се зачитат.
3,en-bg,"At policy level, progress was made on several important initiatives.",На равнище политики напредък е постигнат по няколко важни инициативи.
```
### Data Fields
**id** : The document identifier of type `Integer`.
**lang** : The pair of source and target language of type `String`.
**source_text** : The source text of type `String`.
**target_text** : The target text of type `String`.
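For seq2seq training scripts that expect the nested `translation` format, a small illustrative sketch of reshaping one row (the helper below is hypothetical, not part of the dataset):

```python
def to_translation_dict(row: dict) -> dict:
    """Turn {'lang': 'en-bg', 'source_text': ..., 'target_text': ...} into the
    {'translation': {src: ..., tgt: ...}} layout used by many MT examples."""
    src, tgt = row["lang"].split("-")
    return {"translation": {src: row["source_text"], tgt: row["target_text"]}}

# e.g. dataset_train.map(to_translation_dict)
```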
### Data Splits
| Lang | # Docs | Avg. # Source Tokens | Avg. # Target Tokens |
|--------|-----------|------------------------|------------------------|
| bg | 13 149 | 23 | 24 |
| cs | 13 160 | 23 | 21 |
| da | 13 242 | 23 | 22 |
| de | 13 291 | 23 | 22 |
| el | 13 091 | 23 | 26 |
| es | 13 195 | 23 | 28 |
| et | 13 016 | 23 | 17 |
| fi | 12 942 | 23 | 16 |
| fr | 13 149 | 23 | 28 |
| ga | 412 | 12 | 12 |
| hr | 12 836 | 23 | 21 |
| hu | 13 025 | 23 | 21 |
| it | 13 059 | 23 | 25 |
| lt | 12 580 | 23 | 18 |
| lv | 13 044 | 23 | 19 |
| mt | 3 093 | 16 | 14 |
| nl | 13 191 | 23 | 25 |
| pl | 12 761 | 23 | 22 |
| pt | 13 148 | 23 | 26 |
| ro | 13 163 | 23 | 25 |
| sk | 12 926 | 23 | 20 |
| sl | 13 208 | 23 | 21 |
| sv | 13 099 | 23 | 21 |
|||||
| Total | 277 780 | 22.21 | 21.47 |
## Dataset Creation
### Curation Rationale
For details, check the corresponding [pages](https://elrc-share.eu/repository/search/?q=mfsp%3A87ef9e5e8ac411ea913100155d026706e19a1a9f908b463c944490c36ba2f454&page=3).
### Source Data
#### Initial Data Collection and Normalization
The acquisition of bilingual data (from multilingual websites), normalization, cleaning, deduplication and identification of parallel documents have been done by [ILSP-FC tool](http://nlp.ilsp.gr/redmine/projects/ilsp-fc/wiki/Introduction). [Maligna aligner](https://github.com/loomchild/maligna) was used for alignment of segments. Merging/filtering of segment pairs has also been applied.
#### Who are the source language producers?
All of the data in this corpus were uploaded by [Vassilis Papavassiliou](mailto:[email protected]) to [ELRC-Share](https://elrc-share.eu/repository/browse/bilingual-corpus-from-the-publications-office-of-the-eu-on-the-medical-domain-v2-en-fr/6b31b32e8ac411ea913100155d0267061547d9b3ec284584af19a2953baa8937/).
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Considerations for Using the Data
### Other Known Limitations
The nature of the task introduces variability in the quality of the target translations.
## Additional Information
### Dataset Curators
__ELRC-Medical-V2__: Labrak Yanis, Dufour Richard
__Bilingual corpus from the Publications Office of the EU on the medical domain v.2 (EN-XX) Corpus__: [Vassilis Papavassiliou](mailto:[email protected]) and [others](https://live.european-language-grid.eu/catalogue/project/2209).
### Licensing Information
<a rel="license" href="https://elrc-share.eu/static/metashare/licences/CC-BY-4.0.pdf"><img alt="Attribution 4.0 International (CC BY 4.0) License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="https://elrc-share.eu/static/metashare/licences/CC-BY-4.0.pdf">Attribution 4.0 International (CC BY 4.0) License</a>.
### Citation Information
Please cite the following paper when using this dataset.
```latex
@inproceedings{losch-etal-2018-european,
  title = {European Language Resource Coordination: Collecting Language Resources for Public Sector Multilingual Information Management},
  author = {
    Lösch, Andrea and
    Mapelli, Valérie and
    Piperidis, Stelios and
    Vasiljevs, Andrejs and
    Smal, Lilli and
    Declerck, Thierry and
    Schnur, Eileen and
    Choukri, Khalid and
    van Genabith, Josef
  },
  booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
  month = may,
  year = {2018},
  address = {Miyazaki, Japan},
  publisher = {European Language Resources Association (ELRA)},
  url = {https://aclanthology.org/L18-1213},
}
```
|
HUPD/hupd | HUPD | 2022-10-24T15:47:30Z | 935 | 38 | [
"task_categories:fill-mask",
"task_categories:summarization",
"task_categories:text-classification",
"task_categories:token-classification",
"task_ids:masked-language-modeling",
"task_ids:multi-class-classification",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"language:en",
"license:cc-by-sa-4.0",
"arxiv:2207.04043",
"region:us",
"patents"
] | [
"fill-mask",
"summarization",
"text-classification",
"token-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
language:
- en
license:
- cc-by-sa-4.0
task_categories:
- fill-mask
- summarization
- text-classification
- token-classification
task_ids:
- masked-language-modeling
- multi-class-classification
- topic-classification
- named-entity-recognition
pretty_name: "HUPD"
tags:
- patents
---
# Dataset Card for The Harvard USPTO Patent Dataset (HUPD)

## Dataset Description
- **Homepage:** [https://patentdataset.org/](https://patentdataset.org/)
- **Repository:** [HUPD GitHub repository](https://github.com/suzgunmirac/hupd)
- **Paper:** [HUPD arXiv Submission](https://arxiv.org/abs/2207.04043)
- **Point of Contact:** Mirac Suzgun
### Dataset Summary
The Harvard USPTO Dataset (HUPD) is a large-scale, well-structured, and multi-purpose corpus of English-language utility patent applications filed to the United States Patent and Trademark Office (USPTO) between January 2004 and December 2018.
### Experiments and Tasks Considered in the Paper
- **Patent Acceptance Prediction**: Given a section of a patent application (in particular, the abstract, claims, or description), predict whether the application will be accepted by the USPTO (see the sketch after this list).
- **Automated Subject (IPC/CPC) Classification**: Predict the primary IPC or CPC code of a patent application given (some subset of) the text of the application.
- **Language Modeling**: Masked/autoregressive language modeling on the claims and description sections of patent applications.
- **Abstractive Summarization**: Given the claims section of a patent application, generate the abstract.
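To make the acceptance-prediction task above concrete, the sketch below maps the `decision` field to a binary label. It is only an illustration: the loading arguments mirror the `sample` example in the Usage section further down, the split names (`train`, `validation`) and the decision strings (`'ACCEPTED'`, `'REJECTED'`) are assumptions about the loader's output, and other decision values are simply dropped.

```python
from datasets import load_dataset

# Arguments copied from the 'sample' loading example in the Usage section below.
dataset_dict = load_dataset('HUPD/hupd',
    name='sample',
    data_files="https://huggingface.co/datasets/HUPD/hupd/blob/main/hupd_metadata_2022-02-22.feather",
    icpr_label=None,
    train_filing_start_date='2016-01-01',
    train_filing_end_date='2016-01-21',
    val_filing_start_date='2016-01-22',
    val_filing_end_date='2016-01-31',
)

# Assumption: decision strings include 'ACCEPTED' and 'REJECTED'; other outcomes
# (e.g. still-pending applications) are excluded from the binary task.
label_map = {"REJECTED": 0, "ACCEPTED": 1}

train = (
    dataset_dict["train"]
    .filter(lambda ex: ex["decision"] in label_map)
    .map(lambda ex: {"label": label_map[ex["decision"]]})
)
print(train[0]["abstract"][:200], train[0]["label"])
```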
### Languages
The dataset contains English text only.
### Domain
Patents (intellectual property).
### Dataset Curators
The dataset was created by Mirac Suzgun, Luke Melas-Kyriazi, Suproteem K. Sarkar, Scott Duke Kominers, and Stuart M. Shieber.
## Dataset Structure
Each patent application is defined by a distinct JSON file, named after its application number, and includes information about
the application and publication numbers,
title,
decision status,
filing and publication dates,
primary and secondary classification codes,
inventor(s),
examiner,
attorney,
abstract,
claims,
background,
summary, and
full description of the proposed invention, among other fields. There are also supplementary variables, such as the small-entity indicator (which denotes whether the applicant is considered to be a small entity by the USPTO) and the foreign-filing indicator (which denotes whether the application was originally filed in a foreign country).
In total, there are 34 data fields for each application. The full list of data fields used in the dataset is given in the next section.
### Data Instances
Each patent application in our patent dataset is defined by a distinct JSON file (e.g., ``8914308.json``), named after its unique application number. The format of the JSON files is as follows:
```python
{
"application_number": "...",
"publication_number": "...",
"title": "...",
"decision": "...",
"date_produced": "...",
"date_published": "...",
"main_cpc_label": "...",
"cpc_labels": ["...", "...", "..."],
"main_ipcr_label": "...",
"ipcr_labels": ["...", "...", "..."],
"patent_number": "...",
"filing_date": "...",
"patent_issue_date": "...",
"abandon_date": "...",
"uspc_class": "...",
"uspc_subclass": "...",
"examiner_id": "...",
"examiner_name_last": "...",
"examiner_name_first": "...",
"examiner_name_middle": "...",
"inventor_list": [
{
"inventor_name_last": "...",
"inventor_name_first": "...",
"inventor_city": "...",
"inventor_state": "...",
"inventor_country": "..."
}
],
"abstract": "...",
"claims": "...",
"background": "...",
"summary": "...",
"full_description": "..."
}
```
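Because each application is a standalone JSON file, the fields above can also be read without the `datasets` library. A minimal sketch follows; the file name `8914308.json` is illustrative and assumes you have already extracted that application locally.

```python
import json

# Illustrative path; substitute an application file you have actually extracted.
with open("8914308.json", "r", encoding="utf-8") as f:
    application = json.load(f)

# Pull a few of the fields documented above.
print(application["title"])
print(application["decision"])
print(application["main_ipcr_label"])
print(application["abstract"][:300])
```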
## Usage
### Loading the Dataset
#### Sample (January 2016 Subset)
The following command can be used to load the `sample` version of the dataset, which contains all the patent applications that were filed to the USPTO during the month of January in 2016. This small subset of the dataset can be used for debugging and exploration purposes.
```python
from datasets import load_dataset
dataset_dict = load_dataset('HUPD/hupd',
name='sample',
data_files="https://huggingface.co/datasets/HUPD/hupd/blob/main/hupd_metadata_2022-02-22.feather",
icpr_label=None,
train_filing_start_date='2016-01-01',
train_filing_end_date='2016-01-21',
val_filing_start_date='2016-01-22',
val_filing_end_date='2016-01-31',
)
```
#### Full Dataset
If you would like to use the **full** version of the dataset, please make sure to change the `name` field from `sample` to `all`, specify the training and validation start and end dates carefully, and set `force_extract` to `True` (so that you only untar the files you are interested in and do not squander your disk storage space). In the following example, for instance, we set the training set year range to be [2011, 2016] (inclusive) and the validation set year range to be 2017.
```python
from datasets import load_dataset
dataset_dict = load_dataset('HUPD/hupd',
name='all',
data_files="https://huggingface.co/datasets/HUPD/hupd/blob/main/hupd_metadata_2022-02-22.feather",
icpr_label=None,
force_extract=True,
train_filing_start_date='2011-01-01',
train_filing_end_date='2016-12-31',
val_filing_start_date='2017-01-01',
val_filing_end_date='2017-12-31',
)
```
### Google Colab Notebook
You can also use the following Google Colab notebooks to explore HUPD.
- [](https://colab.research.google.com/drive/1_ZsI7WFTsEO0iu_0g3BLTkIkOUqPzCET?usp=sharing)[ HUPD Examples: Loading the Dataset](https://colab.research.google.com/drive/1_ZsI7WFTsEO0iu_0g3BLTkIkOUqPzCET?usp=sharing)
- [](https://colab.research.google.com/drive/1TzDDCDt368cUErH86Zc_P2aw9bXaaZy1?usp=sharing)[ HUPD Examples: Loading HUPD By Using HuggingFace's Libraries](https://colab.research.google.com/drive/1TzDDCDt368cUErH86Zc_P2aw9bXaaZy1?usp=sharing)
- [](https://colab.research.google.com/drive/1TzDDCDt368cUErH86Zc_P2aw9bXaaZy1?usp=sharing)[ HUPD Examples: Using the HUPD DistilRoBERTa Model](https://colab.research.google.com/drive/11t69BWcAVXndQxAOCpKaGkKkEYJSfydT?usp=sharing)
- [](https://colab.research.google.com/drive/1TzDDCDt368cUErH86Zc_P2aw9bXaaZy1?usp=sharing)[ HUPD Examples: Using the HUPD T5-Small Summarization Model](https://colab.research.google.com/drive/1VkCtrRIryzev_ixDjmJcfJNK-q6Vx24y?usp=sharing)
## Dataset Creation
### Source Data
HUPD synthesizes multiple data sources from the USPTO: while the full patent application texts were obtained from the USPTO Bulk Data Storage System (Patent Application Data/XML Versions 4.0, 4.1, 4.2, 4.3, 4.4 ICE, as well as Version 1.5) as XML files, the bibliographic filing metadata were obtained from the USPTO Patent Examination Research Dataset (in February 2021).
### Annotations
Beyond our patent decision label, for which construction details are provided in the paper, the dataset does not contain any human-written or computer-generated annotations beyond those produced by patent applicants or the USPTO.
### Data Shift
A major feature of HUPD is its structure, which allows it to demonstrate the evolution of concepts over time. As we illustrate in the paper, the criteria for patent acceptance evolve over time at different rates, depending on category. We believe this is an important feature of the dataset, not only because of the social scientific questions it raises, but also because it facilitates research on models that can accommodate concept shift in a real-world setting.
### Personal and Sensitive Information
The dataset contains information about the inventor(s) and examiner of each patent application. These details are, however, already in the public domain and available on the USPTO's Patent Application Information Retrieval (PAIR) system, as well as on Google Patents and PatentsView.
### Social Impact of the Dataset
The authors of the dataset hope that HUPD will have a positive social impact on the ML/NLP and Econ/IP communities. They discuss these considerations in more detail in [the paper](https://arxiv.org/abs/2207.04043).
### Impact on Underserved Communities and Discussion of Biases
The dataset contains patent applications in English, a language with heavy attention from the NLP community. However, innovation is spread across many languages, cultures, and communities that are not reflected in this dataset. HUPD is thus not representative of all kinds of innovation. Furthermore, patent applications require a fixed cost to draft and file and are not accessible to everyone. One goal of this dataset is to spur research that reduces the cost of drafting applications, potentially allowing for more people to seek intellectual property protection for their innovations.
### Discussion of Biases
Section 4 of [the HUPD paper](https://arxiv.org/abs/2207.04043) provides an examination of the dataset for potential biases. It shows, among other things, that female inventors are notably underrepresented in the U.S. patenting system, that small and micro entities (e.g., independent inventors, small companies, non-profit organizations) are less likely than large entities (e.g., companies with more than 500 employees) to obtain patents, and that patent filing and acceptance rates are not uniformly distributed across the US. Our empirical findings suggest that any study focusing on the acceptance prediction task, especially if it uses the inventor information or the small-entity indicator as part of the input, should be aware of the potential biases present in the dataset and interpret its results carefully in light of those biases.
- Please refer to Section 4 and Section D for an in-depth discussion of potential biases embedded in the dataset.
### Licensing Information
HUPD is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.
### Citation Information
```
@article{suzgun2022hupd,
title={The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications},
author={Suzgun, Mirac and Melas-Kyriazi, Luke and Sarkar, Suproteem K. and Kominers, Scott Duke and Shieber, Stuart M.},
year={2022},
publisher={arXiv preprint arXiv:2207.04043},
url={https://arxiv.org/abs/2207.04043},
}
``` |
GEM/cochrane-simplification | GEM | 2022-10-24T15:30:10Z | 179 | 5 | [
"task_categories:text2text-generation",
"task_ids:text-simplification",
"annotations_creators:none",
"language_creators:unknown",
"multilinguality:unknown",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"text2text-generation"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- none
language_creators:
- unknown
language:
- en
license:
- cc-by-4.0
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text2text-generation
task_ids:
- text-simplification
pretty_name: cochrane-simplification
---
# Dataset Card for GEM/cochrane-simplification
## Dataset Description
- **Homepage:** https://github.com/AshOlogn/Paragraph-level-Simplification-of-Medical-Texts
- **Repository:** https://github.com/AshOlogn/Paragraph-level-Simplification-of-Medical-Texts
- **Paper:** https://aclanthology.org/2021.naacl-main.395/
- **Leaderboard:** N/A
- **Point of Contact:** Ashwin Devaraj
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/cochrane-simplification).
### Dataset Summary
Cochrane is an English dataset for paragraph-level simplification of medical texts. Cochrane is a database of systematic reviews of clinical questions, many of which have summaries in plain English targeting readers without a university education. The dataset comprises about 4,500 such pairs.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/cochrane-simplification')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/cochrane-simplification).
#### website
[Link](https://github.com/AshOlogn/Paragraph-level-Simplification-of-Medical-Texts)
#### paper
[Link](https://aclanthology.org/2021.naacl-main.395/)
#### authors
Ashwin Devaraj (The University of Texas at Austin), Iain J. Marshall (King's College London), Byron C. Wallace (Northeastern University), Junyi Jessy Li (The University of Texas at Austin)
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
[Link](https://github.com/AshOlogn/Paragraph-level-Simplification-of-Medical-Texts)
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Link](https://github.com/AshOlogn/Paragraph-level-Simplification-of-Medical-Texts)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[Link](https://aclanthology.org/2021.naacl-main.395/)
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@inproceedings{devaraj-etal-2021-paragraph,
title = "Paragraph-level Simplification of Medical Texts",
author = "Devaraj, Ashwin and
Marshall, Iain and
Wallace, Byron and
Li, Junyi Jessy",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.395",
doi = "10.18653/v1/2021.naacl-main.395",
pages = "4972--4984",
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Ashwin Devaraj
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
[email protected]
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
no
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-4.0: Creative Commons Attribution 4.0 International
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
The intended use of this dataset is to train models that simplify medical text at the paragraph level so that it may be more accessible to the lay reader.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Simplification
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
A model trained on this dataset can be used to simplify medical texts to make them more accessible to readers without medical expertise.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
The University of Texas at Austin, King's College London, Northeastern University
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Ashwin Devaraj (The University of Texas at Austin), Iain J. Marshall (King's College London), Byron C. Wallace (Northeastern University), Junyi Jessy Li (The University of Texas at Austin)
#### Funding
<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
National Institutes of Health (NIH) grant R01-LM012086, National Science Foundation (NSF) grant IIS-1850153, Texas Advanced Computing Center (TACC) computational resources
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Ashwin Devaraj (The University of Texas at Austin)
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
- `gem_id`: string, a unique identifier for the example
- `doi`: string, DOI identifier for the Cochrane review from which the example was generated
- `source`: string, an excerpt from an abstract of a Cochrane review
- `target`: string, an excerpt from the plain-language summary of a Cochrane review that roughly aligns with the source text
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
```
{
"gem_id": "gem-cochrane-simplification-train-766",
"doi": "10.1002/14651858.CD002173.pub2",
"source": "Of 3500 titles retrieved from the literature, 24 papers reporting on 23 studies could be included in the review. The studies were published between 1970 and 1997 and together included 1026 participants. Most were cross-over studies. Few studies provided sufficient information to judge the concealment of allocation. Four studies provided results for the percentage of symptom-free days. Pooling the results did not reveal a statistically significant difference between sodium cromoglycate and placebo. For the other pooled outcomes, most of the symptom-related outcomes and bronchodilator use showed statistically significant results, but treatment effects were small. Considering the confidence intervals of the outcome measures, a clinically relevant effect of sodium cromoglycate cannot be excluded. The funnel plot showed an under-representation of small studies with negative results, suggesting publication bias. There is insufficient evidence to be sure about the efficacy of sodium cromoglycate over placebo. Publication bias is likely to have overestimated the beneficial effects of sodium cromoglycate as maintenance therapy in childhood asthma.",
"target": "In this review we aimed to determine whether there is evidence for the effectiveness of inhaled sodium cromoglycate as maintenance treatment in children with chronic asthma. Most of the studies were carried out in small groups of patients. Furthermore, we suspect that not all studies undertaken have been published. The results show that there is insufficient evidence to be sure about the beneficial effect of sodium cromoglycate compared to placebo. However, for several outcome measures the results favoured sodium cromoglycate."
}
```
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
- `train`: 3568 examples
- `validation`: 411 examples
- `test`: 480 examples
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
This dataset is the first paragraph-level simplification dataset published (as prior work had primarily focused on simplifying individual sentences). Furthermore, this dataset is in the medical domain, which is an especially useful domain for text simplification.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
no
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
This dataset measures the ability of a model to simplify paragraphs of medical text through the omission of non-salient information and the simplification of medical jargon.
### GEM-Specific Curation
#### Modificatied for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
no
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
This dataset measures the ability of a model to simplify paragraphs of medical text through the omission of non-salient information and the simplification of medical jargon.
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`Other: Other Metrics`, `BLEU`
#### Other Metrics
<!-- info: Definitions of other metrics -->
<!-- scope: periscope -->
SARI measures the quality of text simplification
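As an illustration only (not part of the original evaluation code), SARI can be computed with the Hugging Face `evaluate` package; the toy sentences below are made up.

```
import evaluate

sari = evaluate.load("sari")
result = sari.compute(
    sources=["The physician prescribed an anticoagulant therapy."],
    predictions=["The doctor gave the patient a blood thinner."],
    references=[["The doctor prescribed a blood-thinning medicine."]],
)
print(result)  # e.g. {'sari': ...}
```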
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
yes
#### Relevant Previous Results
<!-- info: What are the most relevant previous results for this task/dataset? -->
<!-- scope: microscope -->
The paper which introduced this dataset trained BART models (pretrained on XSum) with unlikelihood training to produce simplification models achieving maximum SARI and BLEU scores of 40 and 43 respectively.
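A minimal, hedged sketch of how such a seq2seq setup could be reproduced with the `transformers` library is shown below. The checkpoint `facebook/bart-large-xsum`, the sequence length, and the use of the `text_target` argument (available in recent `transformers` releases) are assumptions, and the unlikelihood-training objective from the paper is not reproduced here.

```
from datasets import load_dataset
from transformers import AutoTokenizer

data = load_dataset("GEM/cochrane-simplification")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-xsum")

def preprocess(batch):
    # Tokenize the technical abstract (source) and attach the plain-language
    # summary (target) token ids as labels for seq2seq fine-tuning.
    return tokenizer(
        batch["source"],
        text_target=batch["target"],
        truncation=True,
        max_length=1024,
    )

tokenized = data["train"].map(
    preprocess, batched=True, remove_columns=data["train"].column_names
)
print(tokenized[0].keys())  # input_ids, attention_mask, labels
```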
## Dataset Curation
### Original Curation
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
not validated
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
none
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
no
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
yes/very likely
#### Any PII Identification?
<!-- info: Did the curators use any automatic/manual method to identify PII in the dataset? -->
<!-- scope: periscope -->
no identification
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset ore related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for exemple because their language, language variety, or social or geographical context is underepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
yes
#### Details on how Dataset Addresses the Needs
<!-- info: Describe how this dataset addresses the needs of underserved communities. -->
<!-- scope: microscope -->
This dataset can be used to simplify medical texts that may otherwise be inaccessible to those without medical training.
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
unsure
#### Are the Language Producers Representative of the Language?
<!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
<!-- scope: periscope -->
The dataset was generated from abstracts and plain-language summaries of medical literature reviews that were written by medical professionals, and thus it was not generated by people representative of the entire English-speaking population.
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
### Known Technical Limitations
#### Technical Limitations
<!-- info: Describe any known technical limitations, such as spurrious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. -->
<!-- scope: microscope -->
The main limitation of this dataset is that the information alignment between the abstract and plain-language summary is often rough, so the plain-language summary may contain information that isn't found in the abstract. Furthermore, the plain-language targets often contain formulaic statements like "this evidence is current to [month][year]" not found in the abstracts. Another limitation is that some plain-language summaries do not simplify the technical abstracts very much and still contain medical jargon.
#### Unsuited Applications
<!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. -->
<!-- scope: microscope -->
The main pitfall to look out for is errors in factuality. Simplification work so far has not placed a strong emphasis on the logical fidelity of model generations with the input text, and the paper introducing this dataset does not explore modeling techniques to combat this. These kinds of errors are especially pernicious in the medical domain, and the models introduced in the paper do occasionally alter entities like disease and medication names.
|