datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card
---|---|---|---|---|---|---|---|---|---|
bstee615/bigvul | bstee615 | 2023-08-31T03:02:50Z | 654 | 9 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-08-31T02:55:32Z | 2 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: CVE ID
dtype: string
- name: CVE Page
dtype: string
- name: CWE ID
dtype: string
- name: codeLink
dtype: string
- name: commit_id
dtype: string
- name: commit_message
dtype: string
- name: func_after
dtype: string
- name: func_before
dtype: string
- name: lang
dtype: string
- name: project
dtype: string
- name: vul
dtype: int8
splits:
- name: train
num_bytes: 404950685.2579571
num_examples: 150908
- name: validation
num_bytes: 88684597.21877055
num_examples: 33049
- name: test
num_bytes: 88687280.64632414
num_examples: 33050
download_size: 252969708
dataset_size: 582322563.1230518
---
# Dataset Card for "bigvul"
Unofficial, not affiliated with the authors.
- **Paper:** https://doi.org/10.1145/3379597.3387501
- **Repository:** https://github.com/ZeoVan/MSR_20_Code_vulnerability_CSV_Dataset |
eduagarcia-temp/OSCAR-2301_meta | eduagarcia-temp | 2023-08-28T14:07:22Z | 16,986 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-08-27T20:24:54Z | null | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: meta
struct:
- name: categories
sequence: string
- name: dedup
struct:
- name: exact_norm
struct:
- name: cluster_main_idx
dtype: int64
- name: cluster_size
dtype: int64
- name: exact_hash_idx
dtype: int64
- name: is_duplicate
dtype: bool
- name: minhash
struct:
- name: cluster_main_idx
dtype: int64
- name: cluster_size
dtype: int64
- name: is_duplicate
dtype: bool
- name: minhash_idx
dtype: int64
- name: harmful_pp
dtype: float64
- name: identification
struct:
- name: label
dtype: string
- name: prob
dtype: float64
- name: quality_warnings
sequence: string
- name: sentence_identifications
list:
- name: label
dtype: string
- name: prob
dtype: float64
- name: tlsh
dtype: string
- name: warc_headers
struct:
- name: content-length
dtype: int64
- name: content-type
dtype: string
- name: warc-block-digest
dtype: string
- name: warc-date
dtype: string
- name: warc-identified-content-language
dtype: string
- name: warc-record-id
dtype: string
- name: warc-refers-to
dtype: string
- name: warc-target-uri
dtype: string
- name: warc-type
dtype: string
splits:
- name: train
num_bytes: 127702717461
num_examples: 18031400
download_size: 40317121912
dataset_size: 127702717461
---
# Dataset Card for "OSCAR-2301_meta"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mlfoundations/datacomp_xlarge | mlfoundations | 2023-08-21T21:42:38Z | 328,151 | 12 | [
"license:cc-by-4.0",
"size_categories:10B<n<100B",
"format:parquet",
"modality:image",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-05-22T21:49:34Z | null | ---
license: cc-by-4.0
---
## DataComp XLarge Pool
This repository contains metadata files for the xlarge pool of DataComp. For details on how to use the metadata, please visit [our website](https://www.datacomp.ai/) and our [github repository](https://github.com/mlfoundations/datacomp).
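For quick exploration, the metadata can be streamed with the `datasets` library without materializing the multi-billion-row index locally. A minimal sketch (the `train` split name is an assumption):
```python
from datasets import load_dataset

# Stream the metadata so nothing is fully downloaded up front
meta = load_dataset("mlfoundations/datacomp_xlarge", split="train", streaming=True)
for row in meta.take(3):
    print(row)
```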
We distribute the image url-text samples and metadata under a standard Creative Commons CC-BY-4.0 license. The individual images are under their own copyrights.
## Terms and Conditions
We have terms of service that are similar to those adopted by HuggingFace (https://huggingface.co/terms-of-service), which cover their dataset library. Specifically, any content you download, access, or use from our index is at your own risk and subject to the terms of service or copyright limitations accompanying such content. The image url-text index, which is a research artifact, is provided as is. By using said index, you assume all risks, including but not limited to liabilities related to image downloading and storage. |
duongttr/vi-dataset-for-pretrain | duongttr | 2023-08-02T09:38:30Z | 13,699 | 2 | [
"task_categories:text-generation",
"language:vi",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LM"
] | [
"text-generation"
] | 2023-08-02T08:20:06Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 77360702833
num_examples: 23891116
- name: validation
num_bytes: 4064634081
num_examples: 1257428
download_size: 2126869688
dataset_size: 81425336914
task_categories:
- text-generation
language:
- vi
size_categories:
- 10M<n<100M
tags:
- LM
---
# Dataset Card for "vi-dataset-for-pretrain"
This is a combination of multiple Vietnamese datasets for pretraining CLMs such as GPT, GPT2, etc.
The dataset consists of the following sources (a loading sketch follows the list):
- [`vietgpt/covid_19_news_vi`](https://huggingface.co/datasets/vietgpt/covid_19_news_vi)
- [`hieunguyen1053/binhvq-news-corpus`](https://huggingface.co/datasets/hieunguyen1053/binhvq-news-corpus)
- [`oscar (unshuffled_deduplicated_vi)`](https://huggingface.co/datasets/oscar)
- [`vietgpt/wikipedia_vi`](https://huggingface.co/datasets/vietgpt/wikipedia_vi)
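A minimal loading sketch using the `datasets` library (split names follow the YAML header above):
```python
from datasets import load_dataset

# Load only the validation split; pass split="train" for the full pretraining corpus
ds = load_dataset("duongttr/vi-dataset-for-pretrain", split="validation")
print(ds[0]["text"][:200])
```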
# Dataset info
| Splits | No. of examples | Size |
| --- | --- | --- |
| Train | 23,891,116 | 77.36 GB |
| Validation | 1,257,428 | 4.06 GB |
| **Total** | **25,148,544** | **81.43 GB** | |
mikex86/stackoverflow-posts | mikex86 | 2023-08-01T01:31:12Z | 6,151 | 53 | [
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:code",
"language:en",
"license:other",
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"code"
] | [
"question-answering",
"text-generation",
"text2text-generation"
] | 2023-06-14T18:48:00Z | 3 | ---
license: other
language:
- code
- en
task_categories:
- question-answering
- text-generation
- text2text-generation
tags:
- code
viewer: true
pretty_name: StackOverflow Posts Markdown
size_categories:
- 10M<n<100M
---
# StackOverflow Posts Markdown

## Dataset Summary
This dataset contains all posts submitted to StackOverflow before the 14th of June 2023 formatted as **Markdown text**.<br>
The dataset contains ~60 Million posts, totaling ~35GB in size and ~65 billion characters of text.<br>
The data is sourced from [Internet Archive StackExchange Data Dump](https://archive.org/download/stackexchange).
## Dataset Structure
Each record corresponds to one post of a particular type.
Original ordering from the data dump is not exactly preserved due to parallelism in the script used to process the data dump.
The markdown content of each post is contained in the `Body` field. The license for a particular post is contained in the `ContentLicense` field.
### Data Fields
```typescript
{
Id: long,
PostTypeId: long, // 1=Question, 2=Answer, 3=Orphaned tag wiki, 4=Tag wiki excerpt, 5=Tag wiki, 6=Moderator nomination, 7=Wiki Placeholder, 8=Privilege Wiki
AcceptedAnswerId: long | null, // only present if PostTypeId=1
ParentId: long | null, // only present if PostTypeId=2
Score: long,
ViewCount: long | null,
Body: string | null,
Title: string | null,
ContentLicense: string | null,
FavoriteCount: long | null,
CreationDate: string | null,
LastActivityDate: string | null,
LastEditDate: string | null,
LastEditorUserId: long | null,
OwnerUserId: long | null,
Tags: array<string> | null
}
```
Also consider the [StackExchange Datadump Schema Documentation](https://meta.stackexchange.com/questions/2677/database-schema-documentation-for-the-public-data-dump-and-sede), as all fields
have analogs in the original dump format.
## How to use?
```python
from datasets import load_dataset
# predownload full dataset
ds = load_dataset('mikex86/stackoverflow-posts', split='train')
# dataset streaming (will only download the data as needed)
ds = load_dataset('mikex86/stackoverflow-posts', split='train', streaming=True)
for sample in iter(ds): print(sample["Body"])
```
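Building on the streaming example, the schema from the Data Fields section above can be used to filter for a particular post type. A small sketch (per the `PostTypeId` documentation, `1` denotes questions):
```python
from datasets import load_dataset

ds = load_dataset('mikex86/stackoverflow-posts', split='train', streaming=True)
# Keep only questions (PostTypeId == 1) that have an accepted answer
questions = ds.filter(lambda post: post["PostTypeId"] == 1 and post["AcceptedAnswerId"] is not None)
for q in questions.take(5):
    print(q["Title"])
```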
## How is the text stored?
The original Data Dump formats the "Body" field as HTML, using tags such as `<code>`, `<h1>`, `<ul>`, etc.
This HTML format has been converted to Markdown.
### Markdown format
For reference, [this post on StackOverflow](https://stackoverflow.com/questions/53253940/make-react-useeffect-hook-not-run-on-initial-render) is formatted as follows:
#### Title: Make React useEffect hook not run on initial render
```markdown
According to the docs:
> `componentDidUpdate()` is invoked immediately after updating occurs. This method is not called for the initial render.
We can use the new `useEffect()` hook to simulate `componentDidUpdate()`, but it seems like `useEffect()` is being ran after every render, even the first time. How do I get it to not run on initial render?
As you can see in the example below, `componentDidUpdateFunction` is printed during the initial render but `componentDidUpdateClass` was not printed during the initial render.
```
function ComponentDidUpdateFunction() {
const [count, setCount] = React.useState(0);
React.useEffect(() => {
console.log("componentDidUpdateFunction");
});
return (
<div>
<p>componentDidUpdateFunction: {count} times</p>
<button
onClick={() => {
setCount(count + 1);
}}
>
Click Me
</button>
</div>
);
}
```
rest of the post omitted for brevity
```
## Details on the HTML to Markdown conversion
Using Jsoup, the original Body field was converted into a Jsoup Document. The child **nodes** (a term with special meaning in the context of Jsoup) of this document were recursively traversed in a depth-first order.
Jsoup defines `.text()` as follows:
> ... the normalized, combined text of this element and all its children. Whitespace is normalized and trimmed. For example, given HTML <code><p>Hello <b>there</b> now! </p></code>, p.text() returns "Hello there now!"
Jsoup defines a `Node` as follows:
> The base, abstract Node model. Elements, Documents, Comments etc are all Node instances.
Additionally, note the existence of the `TextNode`, which represents floating text inside an HTML document that is not itself an HTML element.
Thus this text tag `<p>Hello<code>World</code></p>` would have two Jsoup child nodes `TextNode(value="Hello")` and `Element(tag="code", value="World")`.
The `value` field of a `TextNode` contains the free standing text without any further treatment (no whitespace stripping, etc.).
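To make the node model concrete, here is an illustrative Python sketch of the same depth-first emit strategy (using BeautifulSoup as a stand-in for Jsoup, which was actually used; only two of the rules below are shown):
```python
from bs4 import BeautifulSoup, NavigableString

def to_markdown(node):
    # TextNode analog: free-standing text is emitted without further treatment
    if isinstance(node, NavigableString):
        return str(node)
    # Element analog: apply a rule if one exists, otherwise recurse depth-first
    if node.name == "code":
        return f"`{node.get_text()}`"
    if node.name == "a":
        return f"[{node.get_text()}]({node.get('href', '')})"
    return "".join(to_markdown(child) for child in node.children)

html = "<p>Hello<code>World</code></p>"
print(to_markdown(BeautifulSoup(html, "html.parser")))  # Hello`World`
```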
### Traversing Rules
- When encountering an HTML tag for which a rule exists, children are not further traversed, **unless explicitly stated otherwise**.
- When encountering an `<a>` tag, `[${element.text()}](${element.attr("href")})` is emitted.
- When encountering an `<h1>` tag, `\n# ${element.text()}\n\n` is emitted.
- When encountering an `<h2>` tag, `\n## ${element.text()}\n\n` is emitted.
- When encountering an `<h3>` tag, `\n### ${element.text()}\n\n` is emitted.
- When encountering an `<h4>` tag, `\n#### ${element.text()}\n\n` is emitted.
- When encountering an `<h5>` tag, `\n##### ${element.text()}\n\n` is emitted.
- When encountering an `<h6>` tag, `\n###### ${element.text()}\n\n` is emitted.
- When encountering a `<code>` tag, `` `${element.text()}` `` is emitted.
- When encountering a `<pre>` tag and said element **has** a `<code>` child tag, `` ```\n${element.text()}\n```\n `` is emitted.
- When encountering a `<pre>` tag and said element **does not** have a `<code>` child tag, **children are traversed further**.
- When encountering an `<li>` tag, `- ` is emitted and **children are traversed further**.
- When encountering a `<blockquote>` tag, `> ` is emitted and **children are traversed further**.
- When encountering an `<hr>` tag, `\n---\n\n` is emitted.
- When encountering an `<img>` tag, `})` is emitted.
- When encountering a `<table>` tag
- `\n| ` is emitted
- For each element of `element.select("th")`
- `${element.text()} | ` is emitted
- After the loop `\n| ` is emitted
- For each element of `element.select("th")`
- For each character of the `th.text()`
- `-` is emitted
- After the loop over each character of the `<th>`, ` | ` is emitted
- `\n` is emitted
- For each element of `element.select("tr")` with more than one child of tag type `td`
- `| ` is emitted
- For each element of `element.select("td")`
- `${td.text()} | ` is emitted
- After the loop over `<td>` elements, `\n` is emitted
- After the loop over `<tr>` elements, `\n` is emitted
- When encountering a jsoup `TextNode`, `${node.attr(node.nodeName())}` (which is equivalent to accessing the private field `node.value`) is emitted. |
iamtarun/code_instructions_120k_alpaca | iamtarun | 2023-07-27T15:49:10Z | 127 | 49 | [
"task_categories:text-generation",
"task_categories:question-answering",
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"code"
] | [
"text-generation",
"question-answering",
"text2text-generation"
] | 2023-07-23T17:34:03Z | 3 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 154022159
num_examples: 121959
download_size: 72306808
dataset_size: 154022159
task_categories:
- text-generation
- question-answering
- text2text-generation
tags:
- code
size_categories:
- 100K<n<1M
---
# Dataset Card for code_instructions_120k_alpaca
This dataset is taken from [sahil2801/code_instructions_120k](https://huggingface.co/datasets/sahil2801/code_instructions_120k), which adds a prompt column in alpaca style. Refer to the original source [here](https://huggingface.co/datasets/sahil2801/code_instructions_120k). |
lmsys/mt_bench_human_judgments | lmsys | 2023-07-20T18:28:15Z | 1,061 | 132 | [
"task_categories:question-answering",
"language:en",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2306.05685",
"region:us"
] | [
"conversational",
"question-answering"
] | 2023-07-04T14:03:03Z | null | ---
dataset_info:
features:
- name: question_id
dtype: int64
- name: model_a
dtype: string
- name: model_b
dtype: string
- name: winner
dtype: string
- name: judge
dtype: string
- name: conversation_a
list:
- name: content
dtype: string
- name: role
dtype: string
- name: conversation_b
list:
- name: content
dtype: string
- name: role
dtype: string
- name: turn
dtype: int64
splits:
- name: human
num_bytes: 15003469
num_examples: 3355
- name: gpt4_pair
num_bytes: 10679650
num_examples: 2400
download_size: 1388888
dataset_size: 25683119
license: cc-by-4.0
task_categories:
- conversational
- question-answering
language:
- en
size_categories:
- 1K<n<10K
---
## Content
This dataset contains 3.3K expert-level pairwise human preferences for model responses generated by 6 models in response to 80 MT-bench questions.
The 6 models are GPT-4, GPT-3.5, Claude-v1, Vicuna-13B, Alpaca-13B, and LLaMA-13B. The annotators are mostly graduate students with expertise in the topic areas of each of the questions. The details of data collection can be found in our [paper](https://arxiv.org/abs/2306.05685).
## Agreement Calculation
This Colab [notebook](https://colab.research.google.com/drive/1ctgygDRJhVGUJTQy8-bRZCl1WNcT8De6?usp=sharing) shows how to compute the agreement between humans and the GPT-4 judge with the dataset. Our results show that humans and the GPT-4 judge achieve over 80% agreement, the same level of agreement as between humans.
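As a rough sketch of that computation (the exact procedure, including tie handling, is in the linked notebook; matching judgments on the `(question_id, model_a, model_b, turn)` key is an assumption):
```python
from datasets import load_dataset

ds = load_dataset("lmsys/mt_bench_human_judgments")
gpt4 = {(r["question_id"], r["model_a"], r["model_b"], r["turn"]): r["winner"]
        for r in ds["gpt4_pair"]}

agree = total = 0
for r in ds["human"]:
    key = (r["question_id"], r["model_a"], r["model_b"], r["turn"])
    if key in gpt4:
        total += 1
        agree += r["winner"] == gpt4[key]
print(f"human / GPT-4 agreement: {agree / total:.2%}")
```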
## Citation
```
@misc{zheng2023judging,
title={Judging LLM-as-a-judge with MT-Bench and Chatbot Arena},
author={Lianmin Zheng and Wei-Lin Chiang and Ying Sheng and Siyuan Zhuang and Zhanghao Wu and Yonghao Zhuang and Zi Lin and Zhuohan Li and Dacheng Li and Eric. P Xing and Hao Zhang and Joseph E. Gonzalez and Ion Stoica},
year={2023},
eprint={2306.05685},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
y2lan/japan-law | y2lan | 2023-07-20T06:45:14Z | 172 | 18 | [
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:question-answering",
"language:ja",
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"summarization",
"text-generation",
"question-answering"
] | 2023-07-20T06:26:25Z | 2 | ---
license: mit
task_categories:
- summarization
- text-generation
- question-answering
language:
- ja
size_categories:
- 1K<n<10K
---
# Japanese Laws
This dataset comprises 8.75K law records retrieved from the official Japanese government website [e-Gov](https://elaws.e-gov.go.jp/). Each entry furnishes comprehensive details about a particular law, encapsulating its number, title, unique ID, the date it came into effect, and its complete text.
To ensure the dataset's uniqueness, deduplication was executed based on the most recent effective version as of August 1, 2023.
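A minimal loading sketch (the `train` split name is an assumption; field names follow the entry structure shown below):
```python
from datasets import load_dataset

laws = load_dataset("y2lan/japan-law", split="train")
print(laws[0]["title"], laws[0]["date"])
```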
A typical entry in this dataset is structured as follows:
```json
{
"num": "Law Number (e.g., Reiwa 5th Year Pollution Adjustment Committee Rule No. 1)",
"title": "Title of the Law",
"id": "Unique Identifier for the Law",
"date": "Date the Law Became Effective",
"body": "Full Text of the Law"
}
``` |
winvoker/turkish-sentiment-analysis-dataset | winvoker | 2023-07-19T13:15:13Z | 339 | 43 | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"language:tr",
"license:cc-by-sa-4.0",
"size_categories:100K<n<1M",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
- expert-generated
language_creators:
- crowdsourced
language:
- tr
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: Turkish Sentiment Dataset
size_categories:
- unknown
source_datasets: []
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset
This dataset contains positive, negative, and neutral sentences from several data sources given in the references. Most sentiment models use only two labels, positive and negative; however, user input can be a completely neutral sentence, and I could not find Turkish data covering such cases. I therefore created this dataset with 3 classes. The sources of the positive and negative sentences are listed in the references below. Neutral examples are extracted from the Turkish Wikipedia dump; in addition, some random text inputs like "Lorem ipsum dolor sit amet." were added as neutral.
There are 492,782 labeled sentences; 10% of them are used for testing.
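A small exploration sketch (the column name `label` is a hypothetical stand-in for the class column; adjust it to the actual CSV header):
```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("winvoker/turkish-sentiment-analysis-dataset")
# Count examples per class; expected classes are positive, negative, and neutral
print(Counter(ds["train"]["label"]))
```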
# Türkçe Duygu Analizi Veriseti (Turkish Sentiment Analysis Dataset)
This dataset contains examples of the positive, negative, and neutral classes, compiled from different sources. Many datasets contain only positive and negative examples, but user input can be neutral; it was hard to find a Turkish dataset for such cases, which is why this three-class dataset was created. The sources of the positive and negative examples are listed in the references section. The neutral sentences were taken from the Wikipedia data, and some random inputs such as "Lorem ipsum dolor sit amet." were added as neutral.
# References
- https://www.kaggle.com/burhanbilenn/duygu-analizi-icin-urun-yorumlari
- https://github.com/fthbrmnby/turkish-text-data
- https://www.kaggle.com/mustfkeskin/turkish-wikipedia-dump
- https://github.com/ezgisubasi/turkish-tweets-sentiment-analysis
- http://humirapps.cs.hacettepe.edu.tr/
You can reach me via LinkedIn. https://www.linkedin.com/in/batuhanayhan/ |
danasone/librusec | danasone | 2023-07-13T08:59:22Z | 13,549 | 1 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-07-13T06:53:59Z | null | ---
dataset_info:
features:
- name: id
dtype: uint64
- name: text
dtype: string
splits:
- name: train
num_bytes: 119853827612
num_examples: 212795
download_size: 31530091183
dataset_size: 119853827612
---
# Dataset Card for "librusec"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BAAI/COIG | BAAI | 2023-07-12T15:38:35Z | 445 | 438 | [
"language:zh",
"license:apache-2.0",
"size_categories:100K<n<1M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2204.07705",
"arxiv:2212.10560",
"arxiv:2212.09689",
"arxiv:2304.07987",
"region:us"
] | [] | 2023-04-16T11:09:32Z | null | ---
license: apache-2.0
arxiv: 2304.07987
language:
- zh
---
# This is the Chinese Open Instruction Generalist project
We propose the Chinese Open Instruction Generalist (**COIG**) project to maintain a harmless, helpful, and diverse set of Chinese instruction corpora. We welcome all researchers in the community to contribute to the corpus set and collaborate with us. We only release the first chip of COIG to help the Chinese LLMs' development in the exploration stage and appeal to more researchers joining us in building COIG. We introduce a manually verified translated general instruction corpus, a manually annotated exam instruction corpus, a human value alignment instruction corpus, a multi-round counterfactual correction chat corpus, and a leetcode instruction corpus. We provide these new instruction corpora to assist the community with instruction tuning on Chinese LLMs. These instruction corpora are also template workflows for how new Chinese instruction corpora can be built and expanded effectively.
It is best to directly download the individual data files that you wish to use instead of using HF `load_dataset`. All datasets can be downloaded from: https://huggingface.co/datasets/BAAI/COIG/tree/main
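For example, a single corpus file can be fetched with `huggingface_hub` (a sketch; the filename is taken from the update notes below):
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="BAAI/COIG", repo_type="dataset",
                       filename="exam_instructions.jsonl")
```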
This dataset card is modified from [OIG](https://huggingface.co/datasets/laion/OIG).
### Translated Instructions (66,858)
There are 66,858 instructions in total, which are composed of 1,616 task descriptions in [Super-NaturalInstructions](https://arxiv.org/abs/2204.07705) along with a single instance for each of them, 175 seed tasks in [Self-Instruct](https://arxiv.org/abs/2212.10560), and 66,007 instructions from [Unnatural Instructions](https://arxiv.org/abs/2212.09689). To reduce the cost and further improve the quality of the instruction corpus, we separate the translation procedure into three phases: automatic translation, manual verification, and manual correction. These strict quality verification procedures assure the reliability of the translated corpus.
### Exam Instructions (63,532)
The Chinese National College Entrance Examination, Middle School Entrance Examinations, and Civil Servant Examination are the main Chinese commonsense tests. These exams contain various question formats and detailed analysis that can be used as the Chain-of-Thought (**CoT**) corpus. We extract six informative elements from original exam questions, including instruction, question context, question, answer, answer analysis, and coarse-grained subject. There are six main coarse-grained subjects: Chinese, English, Politics, Biology, History, and Geology. There are very few Math, Physics, and Chemistry questions in the corpus because these questions often contain complex symbols that are hard to annotate. For multiple-choice questions, we recommend that researchers post-process this corpus using prompts, or convert it to blank-filling questions, to further increase the instructions' diversity.
### Human Value Alignment Instructions (34,471)
To respect and reflect the major difference caused by different cultural backgrounds, different from other tasks in COIG that leverage one unified collection of instruction-following samples, we categorize the value alignment data into two separate series:
- A set of samples that present shared human values in the Chinese-speaking world. In total, we choose 50 instructions as the augmentation seeds and produce 3k resulting instruction-following samples for general-purpose value alignment in the Chinese-speaking world.
- Some additional sets of samples that present regional-culture or country-specific human values.
### Counterfactual Correction Multi-round Chat (13,653)
The Counterfactual Correction Multi-round Chat dataset (CCMC) is constructed based on the [CN-DBpedia knowledge graph dataset](https://link.springer.com/chapter/10.1007/978-3-319-60045-1_44) with the aim of alleviating and resolving the pain points of hallucination and factual inconsistency in current LLMs. The CCMC dataset includes 5 rounds of role-playing chat between a student and a teacher, and the corresponding knowledge they refer to. The dataset contains ~13,000 dialogues with an average of 5 rounds per dialogue, resulting in ~65,000 rounds of chat.
### Leetcode Instructions (11,737)
Given that the code-related tasks potentially contribute to the ability emergence of LLMs, we argue that code-related tasks aligned with the Chinese natural language should be considered in our datasets. Therefore, we build the Leetcode instructions from a **CC-BY-SA-4.0** license [collection](https://github.com/doocs/leetcode) of 2,589 programming questions. The questions contain problem descriptions, multiple programming languages, and explanations (834 questions do not have explanations).
## Support this project
Your contributions and feedback support the open source ecosystem, improve the bot and provide datasets for future AI research. To participate you can:
Submit Github issues, track issues and help create datasets that need improvement. https://github.com/BAAI-Zlab/COIG
## Update: May 27, 2023
- v0.3: Update counterfactural_correction_multi_round_chat.tar.gz and make sure all round responses can be decoded as json.
- v0.2: Update exam_instructions.jsonl, translated_instructions.jsonl and human_value_alignment_instructions_part2.json.
- v0.1: Release the five datasets of COIG.
## Disclaimer
These datasets contain synthetic data and in some cases data that includes humans trying to get the language model to say toxic/offensive/trolling things. If you are concerned about the presence of this type of material in the dataset please make sure you carefully inspect each of the entries and filter appropriately. Our goal is for the model to be as helpful and non-toxic as possible and we are actively evaluating ways to reduce or eliminate undesirable content from the instruction tuning datasets.
## License
The COIG dataset that is authored by BAAI is released under an Apache 2.0 license. However, the data also includes content licensed under other permissive licenses such as unnatural instructions data which is licensed under MIT License, or web-crawled data which is used under fair use principles.
## BibTeX & Citation
```
@misc{zhang2023chinese,
title={Chinese Open Instruction Generalist: A Preliminary Release},
author={Ge Zhang and Yemin Shi and Ruibo Liu and Ruibin Yuan and Yizhi Li and Siwei Dong and Yu Shu and Zhaoqun Li and Zekun Wang and Chenghua Lin and Wenhao Huang and Jie Fu},
year={2023},
eprint={2304.07987},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
cerebras/SlimPajama-627B | cerebras | 2023-07-07T23:13:12Z | 33,679 | 461 | [
"task_categories:text-generation",
"language:en",
"arxiv:2306.01116",
"arxiv:2302.13971",
"region:us"
] | [
"text-generation"
] | 2023-06-07T18:45:02Z | null | ---
task_categories:
- text-generation
language:
- en
pretty_name: SlimPajama-627B
---
## Dataset Description
- **Homepage:** [SlimPajama Blog](https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama)
- **Repository:** [Pre-Processing Libraries](https://github.com/Cerebras/modelzoo/tree/main/modelzoo/transformers/data_processing/slimpajama)
- **Size of compressed dataset:** 895 GB
The dataset consists of 59166 jsonl files and is ~895GB compressed. It is a cleaned and deduplicated version of [Together's RedPajama](https://github.com/togethercomputer/redpajama-data).
Check out our [blog post](https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama) explaining our methods, [our code on GitHub](https://github.com/Cerebras/modelzoo/tree/main/modelzoo/transformers/data_processing/slimpajama), and join the discussion on the [Cerebras Discord](https://discord.gg/q6bZcMWJVu).
## Getting Started
You can download the dataset using Hugging Face datasets:
```python
from datasets import load_dataset
ds = load_dataset("cerebras/SlimPajama-627B")
```
## Background
Today we are releasing SlimPajama – the largest extensively deduplicated, multi-corpora, open-source dataset for training large language models. SlimPajama was created by cleaning and deduplicating the 1.2T token RedPajama dataset from Together. By filtering out low quality data and duplicates, we were able to remove 49.6% of bytes, slimming down the dataset from 1210B to 627B tokens. We believe SlimPajama offers the highest quality and most compute efficient data to train on for runs up to 627B tokens. When upsampled, we expect SlimPajama to perform equal to or better than RedPajama-1T when training at trillion token scale.
In addition to the data, we are also releasing the tools we built to create SlimPajama. Applying [MinHashLSH](http://infolab.stanford.edu/~ullman/mmds/book0n.pdf) deduplication to trillion token datasets like RedPajama was not possible with off-the-shelf open-source code. We made several improvements to existing solutions to produce an infrastructure that can perform MinHashLSH deduplication on trillion token datasets in a distributed, multi-threaded, and memory efficient fashion. Today we are open-sourcing this infrastructure to enable the community to easily create higher quality, extensively deduplicated datasets in the future.
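As a toy illustration of the idea (not the Cerebras implementation; this uses the off-the-shelf `datasketch` library with illustrative parameters):
```python
from datasketch import MinHash, MinHashLSH

def minhash(text, num_perm=128):
    m = MinHash(num_perm=num_perm)
    for token in set(text.lower().split()):
        m.update(token.encode("utf8"))
    return m

lsh = MinHashLSH(threshold=0.8, num_perm=128)
docs = {"a": "the cat sat on the mat",
        "b": "the cat sat on a mat",
        "c": "completely different text"}
for key, text in docs.items():
    m = minhash(text)
    if lsh.query(m):      # a near-duplicate was already kept, so drop this doc
        print("dropping near-duplicate:", key)
    else:
        lsh.insert(key, m)
```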
### Our contributions
1. SlimPajama 627B – the largest extensively deduplicated, multi-corpora, open dataset for LLM training. We release it under the Apache 2.0 license.
2. Releasing validation and test sets, 500M tokens each, which have been decontaminated against the training data.
3. A library of methods to replicate the dataset or pre-process other datasets from scratch. To the best of our knowledge, these are the first open-source tools to enable cleaning and MinHashLSH deduplication of text data at trillion token scale.
The full set of scripts to recreate the dataset from the original RedPajama dataset are available on the [Cerebras GitHub](https://github.com/Cerebras/modelzoo/tree/main/modelzoo/transformers/data_processing/slimpajama). A deeper explanation of our cleaning and deduplication process can be found in the [SlimPajama blog post](https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama).
## Dataset Summary
The [latest research](https://arxiv.org/abs/2306.01116) has shown that data quality is as important as data quantity. While training on more than one data epoch can be beneficial, this should be a choice rather than a side-effect of duplicates in the dataset. We decided to extensively deduplicate RedPajama to produce a dataset with higher information density. This means when using SlimPajama, you can achieve higher accuracy with the same compute budget when compared to other datasets.
#### Comparison of dataset features
| Data source | Tokens | Open Source | Curated Data Sources | Deduplication Level |
| --------------- | ------- | ----------- | -------------------- | ------------------- |
| SlimPajama | **627B**| **Yes** | **Yes** | **Extensive** |
| RedPajama | 1.21T | **Yes** | **Yes** | Partial |
| RefinedWeb-600B | 600B | **Yes** | No | **Extensive** |
| RefinedWeb-5T | **5T** | No | No | **Extensive** |
| LLaMA | 1.4T | No | **Yes** | Partial |
| MPT | 1T | No | **Yes** | Partial |
| MassiveText | 1.4T | No | **Yes** | **Extensive** |
#### Document low-length filter rates
| Data source | Document low-length filter rate |
| ------------- | ------------------------------- |
| Commoncrawl | 0.02% |
| C4 | 4.70% |
| GitHub | 0.00% |
| Books | 0.00% |
| ArXiv | 0.62% |
| Wikipedia | 0.00% |
| StackExchange | 0.32% |
| Total | 1.86% |
#### Data source byte deduplication rates
| Data source | Byte deduplication rate |
| ------------- | ---------------------- |
| Commoncrawl | 63.76% |
| C4 | 6.85% |
| GitHub | 46.16% |
| Books | 2.01% |
| ArXiv | 0.06% |
| Wikipedia | 2.24% |
| StackExchange | 0.20% |
| Total | 49.60% |
#### Data source proportions for SlimPajama and RedPajama
| Data source | SlimPajama | RedPajama |
| ------------- | ---------- | --------- |
| Commoncrawl | 52.2% | 72.6% |
| C4 | 26.7% | 14.4% |
| GitHub | 5.2% | 4.9% |
| Books | 4.2% | 2.1% |
| ArXiv | 4.6% | 2.3% |
| Wikipedia | 3.8% | 2.0% |
| StackExchange | 3.3% | 1.7% |
### Languages
Primarily English, with some non-English files in Wikipedia.
### Dataset Structure
The dataset consists of jsonl files, with structure as follows:
```json
{
"text": ...,
"meta": {"redpajama_set_name": "RedPajamaCommonCrawl" | "RedPajamaC4" | "RedPajamaGithub" | "RedPajamaBook" | "RedPajamaArXiv" | "RedPajamaWikipedia" | "RedPajamaStackExchange"},
}
```
### Dataset Creation
SlimPajama was created by cleaning and deduplicating the [RedPajama dataset from Together](https://github.com/togethercomputer/redpajama-data) via MinHashLSH. RedPajama is an open-source reproduction of the [LLaMA](https://arxiv.org/abs/2302.13971) data collection methodology.
### Source Data
The data sources composing RedPajama are explained in [its model card](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T).
To cite SlimPajama, please use:
```
@misc{cerebras2023slimpajama,
author = {Soboleva, Daria and Al-Khateeb, Faisal and Myers, Robert and Steeves, Jacob R and Hestness, Joel and Dey, Nolan},
title = {{SlimPajama: A 627B token cleaned and deduplicated version of RedPajama}},
month = June,
year = 2023,
howpublished = {\url{https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama}},
url = {https://huggingface.co/datasets/cerebras/SlimPajama-627B},
}
```
## License
Please refer to the licenses of the data subsets you use.
- [Common Crawl Foundation Terms of Use](https://commoncrawl.org/terms-of-use/full/)
- [C4 license](https://huggingface.co/datasets/allenai/c4#license)
- GitHub was limited to MIT, BSD, or Apache licenses only
- Books: [the_pile_books3 license](https://huggingface.co/datasets/the_pile_books3#licensing-information) and [pg19 license](https://huggingface.co/datasets/pg19#licensing-information)
- [ArXiv Terms of Use](https://info.arxiv.org/help/api/tou.html)
- [Wikipedia License](https://huggingface.co/datasets/wikipedia#licensing-information)
- [StackExchange license on the Internet Archive](https://archive.org/details/stackexchange)
## Acknowledgements
- We’d like to thank Together, Ontocord.ai, ETH DS3Lab, and the AAI CERC Lab for creating the original RedPajama dataset and releasing it open source.
- This release was made possible with the support and collaboration of Opentensor.
- Easy cloud access to Cerebras systems is provided by our partner Cirrascale. |
liuhaotian/LLaVA-Pretrain | liuhaotian | 2023-07-06T08:47:38Z | 1,900 | 176 | [
"language:en",
"license:other",
"modality:image",
"region:us"
] | [] | 2023-05-02T23:55:26Z | null | ---
license: other
language:
- en
pretty_name: LLaVA Pretrain
---
# LLaVA Visual Instruct Pretrain Dataset Card
## Dataset details
**Dataset type:**
LLaVA Visual Instruct Pretrain LCS-558K is a subset of LAION/CC/SBU dataset, filtered with a more balanced concept coverage distribution.
Captions are also associated with [BLIP synthetic caption](https://github.com/salesforce/BLIP#pre-training-datasets-download) for reference.
It is constructed for the pretraining stage for feature alignment in visual instruction tuning.
We aim to build large multimodal models towards GPT-4 vision/language capability.
**Dataset date:**
LLaVA Visual Instruct Pretrain LCS-558K was created in May 2023.
**Dataset structure:**
- `blip_laion_cc_sbu_558k.json` contains the multimodal synthesized conversation from the image-caption pairs, by adding randomly selected instructions like: "Describe this image". It is used for pretraining in LLaVA. We use the raw CC-3M caption as the default answer.
- `blip_laion_cc_sbu_558k_meta.json` contains the meta data of the image file name, image URL, synthetic BLIP caption.
- `images.zip` contains all raw images of the filtered subset from LAION/CC/SBU. Important notice: Upon the request from the community, as ~15% images of the original LAION/CC/SBU dataset are no longer accessible, we upload images.zip for better reproducing our work in research community. It should not be used for any other purpose. The use of these images must comply with the LAION/CC/SBU license. This may be taken down when requested by the original LAION/CC/SBU dataset owner or owners of the referenced images.
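A minimal sketch for fetching one of the files listed above with `huggingface_hub`:
```python
import json
from huggingface_hub import hf_hub_download

meta_path = hf_hub_download(repo_id="liuhaotian/LLaVA-Pretrain", repo_type="dataset",
                            filename="blip_laion_cc_sbu_558k_meta.json")
with open(meta_path) as f:
    meta = json.load(f)
print(len(meta))
```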
**Paper or resources for more information:**
https://llava-vl.github.io/
**License:**
Must comply with license of [CC-3M](https://github.com/google-research-datasets/conceptual-captions/blob/master/LICENSE), [BLIP](https://github.com/salesforce/BLIP/blob/main/LICENSE.txt) (if you use their synthetic caption).
CC-3M
The dataset may be freely used for any purpose, although acknowledgement of
Google LLC ("Google") as the data source would be appreciated. The dataset is
provided "AS IS" without any warranty, express or implied. Google disclaims all
liability for any damages, direct or indirect, resulting from the use of the
dataset.
**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence. |
EleutherAI/race | EleutherAI | 2023-07-03T21:27:18Z | 25,132 | 6 | [
"task_categories:multiple-choice",
"task_ids:multiple-choice-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:other",
"size_categories:1K<n<10K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:1704.04683",
"region:us"
] | [
"multiple-choice"
] | 2023-07-03T13:20:38Z | null | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: RACE
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- multiple-choice
task_ids:
- multiple-choice-qa
paperswithcode_id: race
dataset_info:
---
# "race" Grouped by Article
This is a modified version of https://huggingface.co/datasets/race that returns documents grouped by article context instead of by question. **Note:** This dataset currently only contains the test set of the `high` subset of the data.
The original readme is contained below.
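A minimal loading sketch (the `high` config and `test` split names are assumptions based on the note above):
```python
from datasets import load_dataset

race_high = load_dataset("EleutherAI/race", "high", split="test")
print(race_high[0]["article"][:200])
```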
# Dataset Card for "race"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://www.cs.cmu.edu/~glai1/data/race/](http://www.cs.cmu.edu/~glai1/data/race/)
- **Repository:** https://github.com/qizhex/RACE_AR_baselines
- **Paper:** [RACE: Large-scale ReAding Comprehension Dataset From Examinations](https://arxiv.org/abs/1704.04683)
- **Point of Contact:** [Guokun Lai](mailto:[email protected]), [Qizhe Xie](mailto:[email protected])
- **Size of downloaded dataset files:** 76.33 MB
- **Size of the generated dataset:** 349.46 MB
- **Total amount of disk used:** 425.80 MB
### Dataset Summary
RACE is a large-scale reading comprehension dataset with more than 28,000 passages and nearly 100,000 questions. The
dataset is collected from English examinations in China, which are designed for middle school and high school students.
The dataset can be served as the training and test sets for machine comprehension.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### all
- **Size of downloaded dataset files:** 25.44 MB
- **Size of the generated dataset:** 174.73 MB
- **Total amount of disk used:** 200.17 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answer": "A",
"article": "\"Schoolgirls have been wearing such short skirts at Paget High School in Branston that they've been ordered to wear trousers ins...",
"example_id": "high132.txt",
"options": ["short skirts give people the impression of sexualisation", "short skirts are too expensive for parents to afford", "the headmaster doesn't like girls wearing short skirts", "the girls wearing short skirts will be at the risk of being laughed at"],
"question": "The girls at Paget High School are not allowed to wear skirts in that _ ."
}
```
#### high
- **Size of downloaded dataset files:** 25.44 MB
- **Size of the generated dataset:** 140.12 MB
- **Total amount of disk used:** 165.56 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answer": "A",
"article": "\"Schoolgirls have been wearing such short skirts at Paget High School in Branston that they've been ordered to wear trousers ins...",
"example_id": "high132.txt",
"options": ["short skirts give people the impression of sexualisation", "short skirts are too expensive for parents to afford", "the headmaster doesn't like girls wearing short skirts", "the girls wearing short skirts will be at the risk of being laughed at"],
"question": "The girls at Paget High School are not allowed to wear skirts in that _ ."
}
```
#### middle
- **Size of downloaded dataset files:** 25.44 MB
- **Size of the generated dataset:** 34.61 MB
- **Total amount of disk used:** 60.05 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answer": "B",
"article": "\"There is not enough oil in the world now. As time goes by, it becomes less and less, so what are we going to do when it runs ou...",
"example_id": "middle3.txt",
"options": ["There is more petroleum than we can use now.", "Trees are needed for some other things besides making gas.", "We got electricity from ocean tides in the old days.", "Gas wasn't used to run cars in the Second World War."],
"question": "According to the passage, which of the following statements is TRUE?"
}
```
### Data Fields
The data fields are the same among all splits.
#### all
- `example_id`: a `string` feature.
- `article`: a `string` feature.
- `answer`: a `string` feature.
- `question`: a `string` feature.
- `options`: a `list` of `string` features.
#### high
- `example_id`: a `string` feature.
- `article`: a `string` feature.
- `answer`: a `string` feature.
- `question`: a `string` feature.
- `options`: a `list` of `string` features.
#### middle
- `example_id`: a `string` feature.
- `article`: a `string` feature.
- `answer`: a `string` feature.
- `question`: a `string` feature.
- `options`: a `list` of `string` features.
### Data Splits
| name |train|validation|test|
|------|----:|---------:|---:|
|all |87866| 4887|4934|
|high |62445| 3451|3498|
|middle|25421| 1436|1436|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
http://www.cs.cmu.edu/~glai1/data/race/
1. RACE dataset is available for non-commercial research purpose only.
2. All passages are obtained from the Internet which is not property of Carnegie Mellon University. We are not responsible for the content nor the meaning of these passages.
3. You agree not to reproduce, duplicate, copy, sell, trade, resell or exploit for any commercial purpose, any portion of the contexts and any portion of derived data.
4. We reserve the right to terminate your access to the RACE dataset at any time.
### Citation Information
```
@inproceedings{lai-etal-2017-race,
title = "{RACE}: Large-scale {R}e{A}ding Comprehension Dataset From Examinations",
author = "Lai, Guokun and
Xie, Qizhe and
Liu, Hanxiao and
Yang, Yiming and
Hovy, Eduard",
booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D17-1082",
doi = "10.18653/v1/D17-1082",
pages = "785--794",
}
```
### Contributions
Thanks to [@abarbosa94](https://github.com/abarbosa94), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. |
Jean-Baptiste/wikiner_fr | Jean-Baptiste | 2023-06-26T15:33:17Z | 110 | 7 | [
"task_categories:token-classification",
"language:fr",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"token-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
language:
- fr
dataset_info:
features:
- name: id
dtype: int64
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': LOC
'2': PER
'3': MISC
'4': ORG
splits:
- name: test
num_bytes: 5954708
num_examples: 13410
- name: train
num_bytes: 54305659
num_examples: 120682
download_size: 12147768
dataset_size: 60260367
train-eval-index:
- config: Jean-Baptiste--wikiner_fr
task: token-classification
task_id: entity_extraction
splits:
eval_split: test
col_mapping:
tokens: tokens
ner_tags: tags
task_categories:
- token-classification
---
# Dataset Card for "wikiner_fr"
Dataset Description:
- **Homepage:** https://metatext.io/datasets/wikiner
- **Repository:**
- **Paper:** https://www.sciencedirect.com/science/article/pii/S0004370212000276?via%3Dihub
- **Leaderboard:**
- **Point of Contact:** |
jxie/coco_captions | jxie | 2023-06-25T07:37:53Z | 10,781 | 2 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-06-25T04:37:33Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: filename
dtype: string
- name: cocoid
dtype: int32
- name: caption
dtype: string
splits:
- name: train
num_bytes: 90684615607.036
num_examples: 566747
- name: validation
num_bytes: 4562095167.09
num_examples: 25010
- name: test
num_bytes: 4221845598.88
num_examples: 25010
download_size: 20920410197
dataset_size: 99468556373.006
---
# Dataset Card for "coco_captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
camel-ai/math | camel-ai | 2023-06-22T21:59:52Z | 254 | 108 | [
"task_categories:text-generation",
"language:en",
"license:cc-by-nc-4.0",
"size_categories:10K<n<100K",
"modality:text",
"arxiv:2303.17760",
"region:us",
"instruction-finetuning"
] | [
"text-generation"
] | 2023-04-10T22:00:46Z | null | ---
license: cc-by-nc-4.0
language:
- en
tags:
- instruction-finetuning
pretty_name: CAMEL Math
task_categories:
- text-generation
arxiv: 2303.17760
extra_gated_prompt: "By using this data, you acknowledge and agree to utilize it solely for research purposes, recognizing that the dataset may contain inaccuracies due to its artificial generation through ChatGPT."
extra_gated_fields:
Name: text
Email: text
I will adhere to the terms and conditions of this dataset: checkbox
---
# **CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society**
- **Github:** https://github.com/lightaime/camel
- **Website:** https://www.camel-ai.org/
- **Arxiv Paper:** https://arxiv.org/abs/2303.17760
## Dataset Summary
The Math dataset is composed of 50K problem-solution pairs obtained using GPT-4. The problem-solution pairs are generated from 25 math topics, with 25 subtopics for each topic and 80 problems for each (topic, subtopic) pair.
We provide the data in `math50k.zip`.
## Data Fields
**The data fields for files in `math50k.zip` are as follows:**
* `role_1`: assistant role
* `topic`: math topic
* `sub_topic`: math subtopic belonging to topic
* `message_1`: refers to the problem the assistant is asked to solve.
* `message_2`: refers to the solution provided by the assistant.
Note: File naming refers to {`topic_index`}\_{`subtopic_index`}\_{`problem_number`}.
**Download in python**
```
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="camel-ai/math", repo_type="dataset", filename="math50k.zip",
local_dir="datasets/", local_dir_use_symlinks=False)
```
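After downloading, the archive can be inspected locally. A sketch, assuming each file inside `math50k.zip` holds one JSON record with the fields listed above:
```python
import json
import zipfile

with zipfile.ZipFile("datasets/math50k.zip") as zf:
    # File naming: {topic_index}_{subtopic_index}_{problem_number}
    name = zf.namelist()[0]
    record = json.loads(zf.read(name))
    print(record["topic"], "->", record["sub_topic"])
```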
### Citation
```
@misc{li2023camel,
title={CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society},
author={Guohao Li and Hasan Abed Al Kader Hammoud and Hani Itani and Dmitrii Khizbullin and Bernard Ghanem},
year={2023},
eprint={2303.17760},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
## Disclaimer:
This data was synthetically generated by GPT-4 and might contain incorrect information. The dataset is intended for research purposes only.
|
opentensor/openvalidators-test | opentensor | 2023-06-20T14:21:16Z | 354,923 | 0 | [
"license:mit",
"size_categories:1M<n<10M",
"region:us"
] | [] | 2023-06-09T15:42:16Z | null | ---
license: mit
viewer: False
size_categories:
- 1M<n<10M
---
# Dataset Card for Openvalidators dataset
## Dataset Description
- **Repository:** https://github.com/opentensor/validators
- **Homepage:** https://bittensor.com/
### Dataset Summary
The OpenValidators dataset, created by the OpenTensor Foundation, is a continuously growing collection of data generated by the [OpenValidators](https://github.com/opentensor/validators) project in [W&B](https://wandb.ai/opentensor-dev/openvalidators/table). It contains hundreds of thousands of records and serves researchers, data scientists, and miners in the Bittensor network. The dataset provides information on network performance, node behaviors, and wandb run details. Researchers can gain insights and detect patterns, while data scientists can use it for training models and analysis. Miners can use the generated data to fine-tune their models and enhance their incentives in the network. The dataset's continuous updates support collaboration and innovation in decentralized computing.
### How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale.
The OpenValidators dataset gives you the granularity of extracting data by **run_id**, by **OpenValidators version**, and by **multiple OpenValidators versions.** The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
**Downloading by run id**
For example, to download the data for a specific run, simply specify the corresponding **OpenValidators version** and the **wandb run id** in the format `version/raw_data/run_id.parquet`:
```python
from datasets import load_dataset
version = '1.0.4' # OpenValidators version
run_id = '0plco3n0' # WandB run id
run_id_dataset = load_dataset('opentensor/openvalidators-test', data_files=f'{version}/raw_data/{run_id}.parquet')
```
_Please note that only completed run_ids are included in the dataset. Runs that are still in progress will be ingested shortly after they finish._
**Downloading by OpenValidators version**
One can also leverage the `datasets` library to download all the runs within a determined **OpenValidators** version. That can be useful for researchers and data enthusiasts that are looking to do analysis in a specific **OpenValidators** version state.
```python
from datasets import load_dataset
version = '1.0.4' # Openvalidators version
version_dataset = load_dataset('opentensor/openvalidators-test', data_files=f'{version}/raw_data/*')
```
**Downloading by multiple OpenValidators version**
Utilizing the `datasets` library, users can efficiently download runs from multiple **OpenValidators** versions. By accessing data from various OpenValidators versions, users can undertake downstream tasks such as fine-tuning data for mining or performing big data analysis.
```python
from datasets import load_dataset
versions = ['1.0.0', '1.0.1', '1.0.2', '1.0.4'] # Desired versions for extraction
data_files = [f'{version}/raw_data/*' for version in versions] # Set data files directories
dataset = load_dataset('opentensor/openvalidators-test', data_files={ 'test': data_files })
```
**Analyzing metadata**
All the state related to the wandb data ingestion can be easily accessed using pandas and the Hugging Face datasets structure. It contains relevant metadata for each run, including user information, configuration and ingestion state.
```python
import pandas as pd
version = '1.0.4' # OpenValidators version for metadata analysis
df = pd.read_csv(f'hf://datasets/opentensor/openvalidators-test/{version}/metadata.csv')
```
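For example, one can use this metadata to restrict an analysis to healthy, fully ingested runs (a minimal sketch; the column names follow the metadata schema documented below):
```python
import pandas as pd

version = '1.0.4'  # OpenValidators version
df = pd.read_csv(f'hf://datasets/opentensor/openvalidators-test/{version}/metadata.csv')

# Keep only runs that finished and were ingested without problems
healthy = df[df['completed'] & df['downloaded'] & ~df['problematic']]
print(healthy[['run_id', 'openvalidators_version', 'wandb_run_name']].head())
```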
## Dataset Structure
### Data Instances
**versioned raw_data**
The data is provided as-is from the wandb logs, without further preprocessing or tokenization. This data is located at `version/raw_data`, where each file is a wandb run.
**metadata**
This dataset defines the current state of the wandb data ingestion by **run id**.
### Data Fields
**Raw data**
The versioned raw_data collected from W&B follows the following schema:
- `_runtime`: (float64) Runtime of the event
- `_step`: (int64) Step of the event
- `_timestamp`: (float64) Timestamp of the event
- `answer_completions`: (list(string)) Completions of the answer_prompt
- `answer_prompt`: (string) Prompt used to generate the answer
- `answer_rewards`: (list(float64)) Rewards of the answer responses
- `answer_times`: (list(float64)) Elapsed time of answer responses
- `answer_uids`: (list(int32)) UIDs of nodes that answered the answer_prompt
- `base_prompt`: (string) Bootstrap prompt
- `best_answer`: (string) Best answer response
- `best_followup`: (string) Best followup response
- `block`: (float64) Subtensor current block
- `followup_completions`: (list(string)) Completions of the base_prompt
- `followup_rewards`: (list(float64)) Rewards of the followup responses
- `followup_times`: (list(float64)) Elapsed time of followup responses
- `followup_uids`: (list(int64)) UIDs of nodes that answered the base_prompt
- `gating_loss`: (float64) Gating model loss
- `gating_scorings`: (list(float64)) Gating model scores
- `moving_averaged_scores`: (list(float64)) Moving averaged scores at the time of the event
- `set_weights`: (list(list(float64))) Processed weights of nodes by uid
- `step_length`: (float64) Time difference from beginning of forward call to event logging
**Metadata**
- `run_id`: (string) Wandb Run Id
- `completed`: (boolean) Flag indicating if the run_id is completed (finished, crashed or killed)
- `downloaded`: (boolean) Flag indicating if the run_id data has been downloaded
- `last_checkpoint`: (string) Last checkpoint of the run_id
- `hotkey`: (string) Hotkey associated with the run_id
- `openvalidators_version`: (string) Version of OpenValidators associated with the run_id
- `problematic`: (boolean) Flag indicating if the run_id data had problems to be ingested
- `problematic_reason`: (string) Reason for the run_id being problematic (Exception message)
- `wandb_json_config`: (string) JSON configuration associated with the run_id in Wandb
- `wandb_run_name`: (string) Name of the Wandb run
- `wandb_user_info`: (string) Username information associated with the Wandb run
- `wandb_tags`: (list) List of tags associated with the Wandb run
- `wandb_createdAt`: (string) Timestamp of the run creation in Wandb
## Dataset Creation
### Curation Rationale
This dataset was curated to provide a comprehensive and reliable collection of historical data obtained by the execution of different OpenValidators in the Bittensor network.
The goal is to support researchers, data scientists and developers with data generated in the network, facilitating the discovery of new insights, network analysis, troubleshooting, and data extraction for downstream tasks like mining.
### Source Data
#### Initial Data Collection and Normalization
The initial data collection process for this dataset involves recurrent collection by a specialized worker responsible for extracting data from wandb and ingesting it into the Hugging Face datasets structure. The collected data is organized based on the OpenValidators version and run ID to facilitate efficient data management and granular access. Each run is collected based on its corresponding OpenValidators version tag and grouped into version-specific folders. Within each version folder, a `metadata.csv` file is included to manage the collection state, while the raw data of each run is saved in the `.parquet` format with the file name corresponding to the run ID (e.g., `run_id.parquet`). Please note that the code for this data collection process will be released for transparency and reproducibility.
#### Who are the source language producers?
The language producers for this dataset are all the OpenValidators that log their data into wandb in conjunction with other nodes of the Bittensor network. The main wandb page where the data is sent can be accessed at https://wandb.ai/opentensor-dev/openvalidators/table.
### Licensing Information
The dataset is licensed under the [MIT License](https://github.com/opentensor/validators/blob/main/LICENSE)
### Supported Tasks and Leaderboards
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
JeanKaddour/minipile | JeanKaddour | 2023-06-20T10:08:26Z | 3,009 | 122 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:other",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2304.08442",
"arxiv:2201.07311",
"region:us"
] | [
"text-generation",
"fill-mask"
] | 2023-04-09T20:32:58Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5906108510
num_examples: 1000000
- name: validation
num_bytes: 2779386
num_examples: 500
- name: test
num_bytes: 58558191
num_examples: 10000
download_size: 3177432813
dataset_size: 5967446087
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license: other
multilinguality:
- monolingual
pretty_name: MiniPile
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: minipile
---
# Dataset Card for MiniPile
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
[The MiniPile Challenge for Data-Efficient Language Models](https://arxiv.org/abs/2304.08442)
### Dataset Summary
MiniPile is a 6GB subset of the [deduplicated The Pile corpus](https://huggingface.co/datasets/EleutherAI/the_pile_deduplicated). To curate MiniPile, we perform a simple, three-step data filtering process: we (1) infer embeddings for all documents of the Pile, (2) cluster the embedding space using k-means, and (3) filter out low-quality clusters.
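Below is a minimal, illustrative sketch of this recipe (not the exact pipeline from the paper; the embedding model, the cluster count, and the excluded clusters are all assumptions for demonstration):
```python
from sentence_transformers import SentenceTransformer  # assumed embedding backbone
from sklearn.cluster import KMeans

docs = ["first document ...", "second document ...", "third document ..."]

# (1) Infer an embedding for every document
model = SentenceTransformer("all-MiniLM-L6-v2")  # assumption, not the paper's model
embeddings = model.encode(docs, normalize_embeddings=True)

# (2) Cluster the embedding space with k-means
k = 2  # the paper clusters the full Pile with far more centroids
labels = KMeans(n_clusters=k, n_init="auto", random_state=0).fit_predict(embeddings)

# (3) Drop documents belonging to clusters judged low-quality
low_quality_clusters = {1}  # assumption: chosen after manual inspection
kept = [doc for doc, label in zip(docs, labels) if label not in low_quality_clusters]
```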
The primary motivation for curating MiniPile is that (i) diverse pre-training datasets (like the Pile) are often too large for academic budgets and (ii) most smaller-scale datasets are fairly homogeneous and thereby unrepresentative of contemporary general-purpose language models. MiniPile aims to fill this gap and thereby facilitate data-efficient research on model architectures, training procedures, optimizers, etc.
More details on the MiniPile curation procedure and some pre-training results can be found in the [MiniPile paper](https://arxiv.org/abs/2304.08442).
For more details on the Pile corpus, we refer the reader to [the Pile datasheet](https://arxiv.org/abs/2201.07311).
### Languages
English (`EN`)
## Additional Information
### Dataset Curators
MiniPile is a subset of the Pile, curated by Jean Kaddour. The Pile was created by Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy.
### Licensing Information
Since MiniPile is a subset of the Pile, the same MIT License holds.
### Citation Information
```
@article{kaddour2023minipile,
title={The MiniPile Challenge for Data-Efficient Language Models},
author={Kaddour, Jean},
journal={arXiv preprint arXiv:2304.08442},
year={2023}
}
@article{gao2020pile,
title={The {P}ile: An 800{GB} dataset of diverse text for language modeling},
author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others},
journal={arXiv preprint arXiv:2101.00027},
year={2020}
}
```
|
McGill-NLP/feedbackQA | McGill-NLP | 2023-06-14T17:27:23Z | 235 | 15 | [
"license:apache-2.0",
"arxiv:2204.03025",
"region:us"
] | [] | 2022-03-10T23:50:07Z | 1 | ---
license: apache-2.0
---
# Dataset Card for FeedbackQA
[📄 Read](https://arxiv.org/pdf/2204.03025.pdf)<br>
[💾 Code](https://github.com/McGill-NLP/feedbackqa)<br>
[🔗 Webpage](https://mcgill-nlp.github.io/feedbackqa/)<br>
[💻 Demo](http://206.12.100.48:8080/)<br>
[🤗 Huggingface Dataset](https://huggingface.co/datasets/McGill-NLP/feedbackQA)<br>
[💬 Discussions](https://github.com/McGill-NLP/feedbackqa/discussions)
## Dataset Description
- **Homepage: https://mcgill-nlp.github.io/feedbackqa-data/**
- **Repository: https://github.com/McGill-NLP/feedbackqa-data/**
- **Paper:**
- **Leaderboard:**
- **Tasks: Question Answering**
### Dataset Summary
FeedbackQA is a retrieval-based QA dataset that contains interactive feedback from users.
It has two parts: the first part is a conventional RQA dataset,
whilst this repo contains the second part, which provides feedback (ratings and natural language explanations) for QA pairs.
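The feedback data can be loaded directly with the `datasets` library (a minimal sketch):
```python
from datasets import load_dataset

dataset = load_dataset("McGill-NLP/feedbackQA")
print(dataset)
```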
### Languages
English
## Dataset Creation
For each question-answer pair, we collected multiple feedback annotations, each of which consists of a rating,
selected from excellent, good, could be improved, bad, and a natural language explanation
elaborating on the strengths and/or weaknesses of the answer.
#### Initial Data Collection and Normalization
We scraped Covid-19-related content from official websites.
### Annotations
#### Who are the annotators?
Crowd-workers
### Licensing Information
Apache 2.0
### Contributions
[McGill-NLP](https://github.com/McGill-NLP)
|
TigerResearch/pretrain_zh | TigerResearch | 2023-06-14T13:50:32Z | 2,708 | 112 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-06-01T01:45:01Z | null | ---
dataset_info:
features:
- name: dataType
dtype: string
- name: title
dtype: string
- name: content
dtype: string
- name: uniqueKey
dtype: string
- name: titleUkey
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 58043923125
num_examples: 16905023
download_size: 25662051889
dataset_size: 58043923125
---
# Dataset Card for "pretrain_zh"
The Chinese portion of the [Tigerbot](https://github.com/TigerResearch/TigerBot) pretraining data.
It contains (before compression) 12 GB of Chinese books (zh-books), 25 GB of Chinese web text (zh-webtext), and 19 GB of Chinese encyclopedia articles (zh-wiki).
For more corpora, follow the open-source models and continuous updates at [https://github.com/TigerResearch/TigerBot](https://github.com/TigerResearch/TigerBot)
## Usage
```python
import datasets
ds_sft = datasets.load_dataset('TigerResearch/pretrain_zh')
``` |
speechbrain/common_language | speechbrain | 2023-06-12T13:29:01Z | 1,489 | 33 | [
"task_categories:audio-classification",
"task_ids:speaker-identification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"language:ar",
"language:br",
"language:ca",
"language:cnh",
"language:cs",
"language:cv",
"language:cy",
"language:de",
"language:dv",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fr",
"language:fy",
"language:ia",
"language:id",
"language:it",
"language:ja",
"language:ka",
"language:kab",
"language:ky",
"language:lv",
"language:mn",
"language:mt",
"language:nl",
"language:pl",
"language:pt",
"language:rm",
"language:ro",
"language:ru",
"language:rw",
"language:sah",
"language:sl",
"language:sv",
"language:ta",
"language:tr",
"language:tt",
"language:uk",
"language:zh",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"region:us"
] | [
"audio-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ar
- br
- ca
- cnh
- cs
- cv
- cy
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fr
- fy
- ia
- id
- it
- ja
- ka
- kab
- ky
- lv
- mn
- mt
- nl
- pl
- pt
- rm
- ro
- ru
- rw
- sah
- sl
- sv
- ta
- tr
- tt
- uk
- zh
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|common_voice
task_categories:
- audio-classification
task_ids:
- speaker-identification
pretty_name: Common Language
language_bcp47:
- fy-NL
- rm-sursilv
- sv-SE
- zh-CN
- zh-HK
- zh-TW
dataset_info:
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: sentence
dtype: string
- name: age
dtype: string
- name: gender
dtype: string
- name: language
dtype:
class_label:
names:
'0': Arabic
'1': Basque
'2': Breton
'3': Catalan
'4': Chinese_China
'5': Chinese_Hongkong
'6': Chinese_Taiwan
'7': Chuvash
'8': Czech
'9': Dhivehi
'10': Dutch
'11': English
'12': Esperanto
'13': Estonian
'14': French
'15': Frisian
'16': Georgian
'17': German
'18': Greek
'19': Hakha_Chin
'20': Indonesian
'21': Interlingua
'22': Italian
'23': Japanese
'24': Kabyle
'25': Kinyarwanda
'26': Kyrgyz
'27': Latvian
'28': Maltese
'29': Mangolian
'30': Persian
'31': Polish
'32': Portuguese
'33': Romanian
'34': Romansh_Sursilvan
'35': Russian
'36': Sakha
'37': Slovenian
'38': Spanish
'39': Swedish
'40': Tamil
'41': Tatar
'42': Turkish
'43': Ukranian
'44': Welsh
config_name: full
splits:
- name: train
num_bytes: 7116761
num_examples: 22194
- name: validation
num_bytes: 1855233
num_examples: 5888
- name: test
num_bytes: 1877970
num_examples: 5963
download_size: 3761951178
dataset_size: 10849964
---
# Dataset Card for common_language
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/5036977
- **Repository:** https://github.com/speechbrain/speechbrain/tree/develop/recipes/CommonLanguage
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
This dataset is composed of speech recordings from languages that were carefully selected from the CommonVoice database. The total duration of audio recordings is 45.1 hours (i.e., 1 hour of material for each language). The dataset has been extracted from CommonVoice to train language-id systems.
### Supported Tasks and Leaderboards
The baselines for language-id are available in the SpeechBrain toolkit (see recipes/CommonLanguage):
https://github.com/speechbrain/speechbrain
### Languages
List of included languages:
```
Arabic, Basque, Breton, Catalan, Chinese_China, Chinese_Hongkong, Chinese_Taiwan, Chuvash, Czech, Dhivehi, Dutch, English, Esperanto, Estonian, French, Frisian, Georgian, German, Greek, Hakha_Chin, Indonesian, Interlingua, Italian, Japanese, Kabyle, Kinyarwanda, Kyrgyz, Latvian, Maltese, Mongolian, Persian, Polish, Portuguese, Romanian, Romansh_Sursilvan, Russian, Sakha, Slovenian, Spanish, Swedish, Tamil, Tatar, Turkish, Ukranian, Welsh
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file, and its label `language`. Additional fields include `age`, `client_id`, `gender` and `sentence`.
```python
{
'client_id': 'itln_trn_sp_175',
'path': '/path/common_voice_kpd/Italian/train/itln_trn_sp_175/common_voice_it_18279446.wav',
'audio': {'path': '/path/common_voice_kpd/Italian/train/itln_trn_sp_175/common_voice_it_18279446.wav',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000},
'sentence': 'Con gli studenti è leggermente simile.',
'age': 'not_defined',
'gender': 'not_defined',
'language': 22
}
```
### Data Fields
- `client_id` (`string`): An id for which client (voice) made the recording
- `path` (`string`): The path to the audio file
- `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]` (see the snippet after this list)
- `language` (`ClassLabel`): The language of the recording (see the `Languages` section above)
- `sentence` (`string`): The sentence the user was prompted to speak
- `age` (`string`): The age of the speaker
- `gender` (`string`): The gender of the speaker
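As a concrete illustration of this access pattern, here is a minimal sketch (depending on your `datasets` version you may need to pass the `full` configuration name explicitly):
```python
from datasets import load_dataset

dataset = load_dataset("speechbrain/common_language", split="train")

sample = dataset[0]                      # query the sample index first ...
waveform = sample["audio"]["array"]      # ... then access the "audio" column
sampling_rate = sample["audio"]["sampling_rate"]
language = dataset.features["language"].int2str(sample["language"])
```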
### Data Splits
The dataset is already balanced and split into train, dev (validation) and test sets.
| Name | Train | Dev | Test |
|:---------------------------------:|:------:|:------:|:-----:|
| **# of utterances** | 177552 | 47104 | 47704 |
| **# unique speakers** | 11189 | 1297 | 1322 |
| **Total duration, hr** | 30.04 | 7.53 | 7.53 |
| **Min duration, sec** | 0.86 | 0.98 | 0.89 |
| **Mean duration, sec** | 4.87 | 4.61 | 4.55 |
| **Max duration, sec** | 21.72 | 105.67 | 29.83 |
| **Duration per language, min** | ~40 | ~10 | ~10 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
The Mongolian and Ukrainian languages are spelled as "Mangolian" and "Ukranian" in this version of the dataset.
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[Ganesh Sinisetty; Pavlo Ruban; Oleksandr Dymov; Mirco Ravanelli](https://zenodo.org/record/5036977#.YdTZ5hPMJ70)
### Licensing Information
[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode)
### Citation Information
```
@dataset{ganesh_sinisetty_2021_5036977,
author = {Ganesh Sinisetty and
Pavlo Ruban and
Oleksandr Dymov and
Mirco Ravanelli},
title = {CommonLanguage},
month = jun,
year = 2021,
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5036977},
url = {https://doi.org/10.5281/zenodo.5036977}
}
```
### Contributions
Thanks to [@anton-l](https://github.com/anton-l) for adding this dataset. |
deepmind/code_contests | deepmind | 2023-06-11T12:22:30Z | 12,785 | 165 | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2203.07814",
"arxiv:2105.12655",
"region:us"
] | [
"translation"
] | 2022-07-19T16:02:55Z | null | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: codecontests
pretty_name: CodeContests
---
# Dataset Card for CodeContests
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/deepmind/code_contests/
- **Paper:** [Competition-Level Code Generation with AlphaCode](https://arxiv.org/abs/2203.07814v1)
- **Leaderboard:** [Code Generation on CodeContests](https://paperswithcode.com/sota/code-generation-on-codecontests)
- **Point of Contact:** [David Choi](mailto:[email protected])
### Dataset Summary
CodeContests is a competitive programming dataset for machine learning. This
dataset was used when training [AlphaCode](https://deepmind.com/blog/article/Competitive-programming-with-AlphaCode).
It consists of programming problems, from a variety of sources:
Site | URL | Source
----------- | --------------------------- | ------
Aizu | https://judge.u-aizu.ac.jp | [CodeNet](https://github.com/IBM/Project_CodeNet)
AtCoder | https://atcoder.jp | [CodeNet](https://github.com/IBM/Project_CodeNet)
CodeChef | https://www.codechef.com | [description2code](https://github.com/ethancaballero/description2code)
Codeforces | https://codeforces.com | [description2code](https://github.com/ethancaballero/description2code) and Codeforces
HackerEarth | https://www.hackerearth.com | [description2code](https://github.com/ethancaballero/description2code)
Problems include test cases in the form of paired inputs and outputs, as well as both correct and incorrect human solutions in a variety of languages.
### Supported Tasks and Leaderboards
- `translation` - the competitive programming code generation problem can be viewed as a sequence-to-sequence translation task: given a problem description 𝑋 in natural language, produce a corresponding solution 𝑌 in a programming language. The metric used for evaluation is "percentage of problems solved using 𝑛 submissions from 𝑘 samples per problem", denoted as 𝑛@𝑘. More information on the evaluation of AlphaCode can be found in Section 2.2. and Appendix A.3. of the paper. The leaderboard for this task is available [here](https://paperswithcode.com/sota/code-generation-on-codecontests).
### Languages
English.
## Dataset Structure
### Data Instances
A data point corresponds to a singular contest problem:
```
{
'name': '76_B. Mice',
'description': 'Modern researches has shown that a flock of hungry mice '
'searching for a piece of...',
'public_tests': {'input': ['3 2 0 2\n0 1 3\n2 5\n'], 'output': ['1\n']},
'private_tests': {'input': ['20 18 1 2\n'
'-9999944 -9999861 -9999850 -9999763 -9999656 '
'-9999517 -9999375 -999927...',
...,
'7 11 10 20\n'
'6 18 32 63 66 68 87\n'
'6 8 15 23 25 41 53 59 60 75 90\n'],
'output': ['2\n', ..., '1\n']},
'generated_tests': {'input': ['7 11 10 5\n'
'6 18 32 63 66 68 87\n'
'6 8 15 23 25 41 53 59 60 75 90\n',
...,
'7 11 10 4\n'
'6 18 46 63 85 84 87\n'
'6 8 15 18 25 41 53 59 60 75 90\n'],
'output': ['1\n', ..., '2\n']},
'source': 2,
'difficulty': 8,
'solutions': {'language': [2, ..., 2],
'solution': ['#include <bits/stdc++.h>\n'
'using namespace std;\n'
'int n, m;\n'
'int data[2][100010], t[1...',
...,
'#include <bits/stdc++.h>\n'
'using namespace std;\n'
'int n, m, pos[100100], food[100100...']},
'incorrect_solutions': {'language': [2, ..., 2],
'solution': ['#include <bits/stdc++.h>\n'
'using namespace std;\n'
'vector<pair<int, int> > v[100010];...',
...,
'#include <bits/stdc++.h>\n'
'using namespace std;\n'
'vector<pair<int, int> > v[100010];...']},
'cf_contest_id': 76,
'cf_index': 'B',
'cf_points': 0.0,
'cf_rating': 2100,
'cf_tags': ['greedy', 'two pointers'],
'is_description_translated': False,
'untranslated_description': '',
'time_limit': {'seconds': 0, 'nanos': 500000000},
'memory_limit_bytes': 256000000,
'input_file': '',
'output_file': ''
}
```
### Data Fields
- `name`: The name of the contest. Note that names may coincide between different sources.
- `description`: A natural language description of a programming problem.
- `public_tests`: Public tests are those that are available before submitting a solution, typically as part of the description itself. Represented as a paired `input` and `output` that can be used to test potential solutions. They are therefore acceptable inputs to a model.
- `private_tests`: Private tests are not visible before submitting a solution, so should not be made available as inputs to a model.
- `generated_tests`: Generated tests are automatically generated by modifying inputs from public and private tests and validating using known correct solutions.
- `source`: The original source of the problem, with possible values including `UNKNOWN_SOURCE` (0),`CODECHEF` (1), `CODEFORCES` (2), `HACKEREARTH` (3), `CODEJAM` (4), `ATCODER` (5) and `AIZU` (6).
- `difficulty`: A representation of the difficulty of the problem with possible values including `UNKNOWN_DIFFICULTY` (0), `EASY` (1), `MEDIUM` (2), `HARD` (3), `HARDER` (4), `HARDEST` (5), `EXTERNAL` (6), `A` (7), `B` (8), `C` (9), `D` (10), `E` (11), `F` (12), `G` (13), `H` (14), `I` (15), `J` (16), `K` (17), `L` (18), `M` (19), `N` (20), `O` (21), `P` (22), `Q` (23), `R` (24), `S` (25), `T` (26), `U` (27) and `V` (28). Note that different sources use different, non-comparable gradings. For Codeforces problems, `cf_rating` is a more reliable measure of difficulty when available.
- `solutions`: Correct solutions to the problem. Contrast with `incorrect_solutions` below.
- `incorrect_solutions`: Incorrect solutions.
- `cf_contest_id`: The Contest ID. Note that Contest ID is not monotonic with respect to time.
- `cf_index`: Problem index, e.g. `"A"` or `"B"` or `"C"`.
- `cf_points`: Points for the problem, e.g. `1000.0`
- `cf_rating`: Problem rating (difficulty), e.g. `1100`
- `cf_tags`: Problem tags, e.g. `['greedy', 'math']`
- `is_description_translated`: Whether the problem was translated to English.
- `untranslated_description`: The untranslated description is only available for translated problems.
- `time_limit`: The time limit constraint to use when executing solutions. Represented as a dictionary with two keys, `seconds` and `nanos`. This field is None if not defined.
- `memory_limit_bytes`: The memory limit constraint to use when executing solutions.
- `input_file`: Most problems use stdin for IO. Some problems expect specific files to be used instead.
- `output_file`: Most problems use stdout for IO. Some problems expect specific files to be used instead.
All tests are represented as a paired `input` and `output` that can be used to test potential solutions and all solutions comprise a `language`, with possible values including `UNKNOWN_LANGUAGE` (0), `PYTHON` (1) (solutions written in PYTHON2), `CPP` (2), `PYTHON3` (3) and `JAVA` (4), and a `solution` string written in that `language`. The fields preceded with `cf_` denote extra meta-data for Codeforces problems.
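As an illustration of how these paired tests can be used, below is a minimal sketch that checks a `PYTHON3` solution string against a problem's public tests. This is only a sketch: a real evaluation harness, as used in the paper, would additionally sandbox execution and enforce the problem's time and memory limits.
```python
import subprocess

def passes_public_tests(problem: dict, solution_code: str, timeout_s: float = 2.0) -> bool:
    """Run a PYTHON3 solution string against a problem's public tests."""
    tests = problem["public_tests"]
    for stdin, expected in zip(tests["input"], tests["output"]):
        result = subprocess.run(
            ["python3", "-c", solution_code],
            input=stdin, capture_output=True, text=True, timeout=timeout_s,
        )
        # A solution passes a test iff it exits cleanly and stdout matches
        if result.returncode != 0 or result.stdout.strip() != expected.strip():
            return False
    return True
```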
### Data Splits
The data is split into training, validation and test sets. The training set contains 13328 samples, the validation set 117 samples and the test set 165 samples.
## Dataset Creation
### Curation Rationale
This dataset was created for fine-tuning AlphaCode models:
> Models pre-trained on GitHub can generate good code and solve simple programming problems, but
as shown in Appendix B.3 they can solve very few competitive programming problems. Fine-tuning
the model on a dedicated competitive programming dataset is critical for performance.
### Source Data
#### Initial Data Collection and Normalization
The information on the data collection and normalization procedures can be found in Section 3.2 and Appendix B.2 of the paper.
#### Who are the source language producers?
The problems are scraped from the following platforms: [Aizu](https://judge.u-aizu.ac.jp), [AtCoder](https://atcoder.jp), [CodeChef](https://www.codechef.com), [Codeforces](https://codeforces.com) and [HackerEarth](https://www.hackerearth.com). Additionally, some data from the existing public competitive programming dataset Description2Code ([Caballero et al., 2016](https://github.com/ethancaballero/description2code)) and CodeNet ([Puri et al., 2021](https://arxiv.org/pdf/2105.12655.pdf)) is mixed into the training set.
### Annotations
#### Annotation process
The solutions are scraped alongside the problem descriptions.
#### Who are the annotators?
Same as the source data creators.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien de Masson d'Autume, Igor Babuschkin, Xinyun Chen, Po-Sen Huang, Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel J. Mankowitz, Esme Sutherland Robson, Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu and Oriol Vinyals.
### Licensing Information
This dataset is made available under the terms of the CC BY
4.0 license ([Creative Commons Attribution 4.0 International license](https://creativecommons.org/licenses/by/4.0/legalcode)).
Additional acknowledged contributions:
* Codeforces materials are sourced from http://codeforces.com.
* Description2Code materials are sourced from:
[Description2Code Dataset](https://github.com/ethancaballero/description2code),
licensed under the
[MIT open source license](https://opensource.org/licenses/MIT), copyright
not specified.
* CodeNet materials are sourced from:
[Project_CodeNet](https://github.com/IBM/Project_CodeNet), licensed under
[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0), copyright not
specified.
### Citation Information
```bibtex
@article{li2022competition,
title={Competition-Level Code Generation with AlphaCode},
author={Li, Yujia and Choi, David and Chung, Junyoung and Kushman, Nate and
Schrittwieser, Julian and Leblond, R{\'e}mi and Eccles, Tom and
Keeling, James and Gimeno, Felix and Dal Lago, Agustin and
Hubert, Thomas and Choy, Peter and de Masson d'Autume, Cyprien and
Babuschkin, Igor and Chen, Xinyun and Huang, Po-Sen and Welbl, Johannes and
Gowal, Sven and Cherepanov, Alexey and Molloy, James and
Mankowitz, Daniel and Sutherland Robson, Esme and Kohli, Pushmeet and
de Freitas, Nando and Kavukcuoglu, Koray and Vinyals, Oriol},
journal={arXiv preprint arXiv:2203.07814},
year={2022}
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
silk-road/alpaca-data-gpt4-chinese | silk-road | 2023-05-23T05:33:21Z | 80 | 95 | [
"task_categories:text-generation",
"language:zh",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"gpt",
"alpaca",
"fine-tune"
] | [
"text-generation"
] | 2023-05-23T02:10:49Z | null | ---
license: apache-2.0
task_categories:
- text-generation
language:
- zh
- en
tags:
- gpt
- alpaca
- fine-tune
pretty_name: Alpaca-Data-GPT4-Chinese
size_categories:
- 10K<n<100K
--- |
tatsu-lab/alpaca | tatsu-lab | 2023-05-22T20:33:36Z | 48,449 | 754 | [
"task_categories:text-generation",
"language:en",
"license:cc-by-nc-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"instruction-finetuning"
] | [
"text-generation"
] | 2023-03-13T17:19:43Z | null | ---
license: cc-by-nc-4.0
language:
- en
tags:
- instruction-finetuning
pretty_name: Alpaca
task_categories:
- text-generation
---
# Dataset Card for Alpaca
## Dataset Description
- **Homepage:** https://crfm.stanford.edu/2023/03/13/alpaca.html
- **Repository:** https://github.com/tatsu-lab/stanford_alpaca
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** Rohan Taori
### Dataset Summary
Alpaca is a dataset of 52,000 instructions and demonstrations generated by OpenAI's `text-davinci-003` engine. This instruction data can be used to conduct instruction-tuning for language models and make language models follow instructions better.
The authors built on the data generation pipeline from [Self-Instruct framework](https://github.com/yizhongw/self-instruct) and made the following modifications:
- The `text-davinci-003` engine to generate the instruction data instead of `davinci`.
- A [new prompt](https://github.com/tatsu-lab/stanford_alpaca/blob/main/prompt.txt) was written that explicitly gave the requirement of instruction generation to `text-davinci-003`.
- Much more aggressive batch decoding was used, i.e., generating 20 instructions at once, which significantly reduced the cost of data generation.
- The data generation pipeline was simplified by discarding the difference between classification and non-classification instructions.
- Only a single instance was generated for each instruction, instead of 2 to 3 instances as in Self-Instruct.
This produced an instruction-following dataset with 52K examples obtained at a much lower cost (less than $500).
In a preliminary study, the authors also found the 52K generated data to be much more diverse than the data released by [Self-Instruct](https://github.com/yizhongw/self-instruct/blob/main/data/seed_tasks.jsonl).
### Supported Tasks and Leaderboards
The Alpaca dataset is designed for instruction-tuning pretrained language models.
### Languages
The data in Alpaca are in English (BCP-47 en).
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
"instruction": "Create a classification task by clustering the given list of items.",
"input": "Apples, oranges, bananas, strawberries, pineapples",
"output": "Class 1: Apples, Oranges\nClass 2: Bananas, Strawberries\nClass 3: Pineapples",
"text": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nCreate a classification task by clustering the given list of items.\n\n### Input:\nApples, oranges, bananas, strawberries, pineapples\n\n### Response:\nClass 1: Apples, Oranges\nClass 2: Bananas, Strawberries\nClass 3: Pineapples",
}
```
### Data Fields
The data fields are as follows:
* `instruction`: describes the task the model should perform. Each of the 52K instructions is unique.
* `input`: optional context or input for the task. For example, when the instruction is "Summarize the following article", the input is the article. Around 40% of the examples have an input.
* `output`: the answer to the instruction as generated by `text-davinci-003`.
* `text`: the `instruction`, `input` and `output` formatted with the [prompt template](https://github.com/tatsu-lab/stanford_alpaca#data-release) used by the authors for fine-tuning their models.
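As a sketch, the `text` field can be reconstructed from the other fields roughly as follows (derived from the example above; consult the linked prompt template for the authoritative version):
```python
def format_alpaca_prompt(instruction: str, input_text: str, output: str) -> str:
    # Reconstructed from the `text` field shown above; the no-input variant
    # follows the template in the Stanford Alpaca repository.
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            f"### Response:\n{output}"
        )
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Response:\n{output}"
    )
```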
### Data Splits
| | train |
|---------------|------:|
| alpaca | 52002 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
Excerpted from the [blog post](https://crfm.stanford.edu/2023/03/13/alpaca.html) accompanying the release of this dataset:
> We believe that releasing the above assets will enable the academic community to perform controlled scientific studies on instruction-following language models, resulting in better science and ultimately new techniques to address the existing deficiencies with these models. At the same time, any release carries some risk. First, we recognize that releasing our training recipe reveals the feasibility of certain capabilities. On one hand, this enables more people (including bad actors) to create models that could cause harm (either intentionally or not). On the other hand, this awareness might incentivize swift defensive action, especially from the academic community, now empowered by the means to perform deeper safety research on such models. Overall, we believe that the benefits for the research community outweigh the risks of this particular release. Given that we are releasing the training recipe, we believe that releasing the data, model weights, and training code incur minimal further risk, given the simplicity of the recipe. At the same time, releasing these assets has enormous benefits for reproducible science, so that the academic community can use standard datasets, models, and code to perform controlled comparisons and to explore extensions. Deploying an interactive demo for Alpaca also poses potential risks, such as more widely disseminating harmful content and lowering the barrier for spam, fraud, or disinformation. We have put into place two risk mitigation strategies. First, we have implemented a content filter using OpenAI’s content moderation API, which filters out harmful content as defined by OpenAI’s usage policies. Second, we watermark all the model outputs using the method described in Kirchenbauer et al. 2023, so that others can detect (with some probability) whether an output comes from Alpaca 7B. Finally, we have strict terms and conditions for using the demo; it is restricted to non-commercial uses and to uses that follow LLaMA’s license agreement. We understand that these mitigation measures can be circumvented once we release the model weights or if users train their own instruction-following models. However, by installing these mitigations, we hope to advance the best practices and ultimately develop community norms for the responsible deployment of foundation models.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
The `alpaca` data is generated by a language model (`text-davinci-003`) and inevitably contains some errors or biases. We encourage users to use this data with caution and propose new methods to filter or improve the imperfections.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
### Citation Information
```
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
### Contributions
[More Information Needed] |
silk-road/Wizard-LM-Chinese-instruct-evol | silk-road | 2023-05-15T00:13:52Z | 77 | 97 | [
"task_categories:text-generation",
"task_categories:question-answering",
"language:zh",
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation",
"question-answering"
] | 2023-05-15T00:04:30Z | null | ---
license: cc-by-4.0
task_categories:
- text-generation
- question-answering
language:
- zh
- en
size_categories:
- 10K<n<100K
---
Wizard-LM-Chinese was built on MSRA's Wizard-LM dataset by translating the instructions into Chinese and then querying GPT to obtain the answers.
Wizard-LM contains many instructions that are harder than those in Alpaca.
A small number of the Chinese question translations fail due to instruction injection during translation.
The Chinese answers are obtained by querying the model again with the Chinese questions.
We will gradually release more datasets on Hugging Face, including:
- [ ] Chinese translation of COCO Caption
- [ ] Chinese translation of CoQA
- [ ] Embedding data for CNewSum
- [ ] Augmented open-domain QA data
- [x] Chinese translation of WizardLM
If you are also preparing these datasets, feel free to contact us so we can avoid duplicating costs.
# 骆驼 (Luotuo): Open-Source Chinese Large Language Models
[https://github.com/LC1332/Luotuo-Chinese-LLM](https://github.com/LC1332/Luotuo-Chinese-LLM)
The Luotuo project is an open-source Chinese large language model project initiated by [Ziang Leng](https://blairleng.github.io) @ SenseTime, Qiyuan Chen @ Central China Normal University, and Cheng Li @ SenseTime; it comprises a series of language models.
(Note: [Qiyuan Chen](https://qiyuan-chen.github.io/) is currently looking for a graduate advisor for the 2024 recommendation-based admission cycle; feel free to get in touch.)
The Luotuo project is **not** an official SenseTime product.
## Citation
Please cite the repo if you use the data or code in this repo.
```
@misc{alpaca,
author={Ziang Leng, Qiyuan Chen and Cheng Li},
title = {Luotuo: An Instruction-following Chinese Language model, LoRA tuning on LLaMA},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/LC1332/Luotuo-Chinese-LLM}},
}
``` |
Multimodal-Fatima/VQAv2_test | Multimodal-Fatima | 2023-05-13T21:54:43Z | 10,371 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-03-17T21:59:25Z | null | ---
dataset_info:
features:
- name: question_type
dtype: string
- name: multiple_choice_answer
dtype: string
- name: answers_original
list:
- name: answer
dtype: string
- name: answer_confidence
dtype: string
- name: answer_id
dtype: int64
- name: id_image
dtype: int64
- name: answer_type
dtype: string
- name: question_id
dtype: int64
- name: question
dtype: string
- name: image
dtype: image
- name: id
dtype: int64
- name: clip_tags_ViT_L_14
sequence: string
- name: blip_caption
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: DETA_detections_deta_swin_large_o365_coco_classes
list:
- name: attribute
dtype: string
- name: box
sequence: float32
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float32
- name: size
dtype: string
- name: tag
dtype: string
- name: Attributes_ViT_L_14_descriptors_text_davinci_003_full
sequence: string
- name: clip_tags_ViT_L_14_wo_openai
sequence: string
- name: clip_tags_ViT_L_14_with_openai
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_wo_openai
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_with_openai
sequence: string
- name: clip_tags_LAION_ViT_bigG_14_2B_wo_openai
sequence: string
- name: clip_tags_LAION_ViT_bigG_14_2B_with_openai
sequence: string
- name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_LAION_ViT_bigG_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: clip_tags_ViT_B_16_with_openai
sequence: string
- name: DETA_detections_deta_swin_large_o365_coco_classes_caption_module_random
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: captions_module
sequence: string
- name: captions_module_filter
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: answers
sequence: string
splits:
- name: test
num_bytes: 92151870512.0
num_examples: 447793
download_size: 18737258554
dataset_size: 92151870512.0
---
# Dataset Card for "VQAv2_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
musabg/wikipedia-oscar-tr | musabg | 2023-05-10T08:57:22Z | 12,291 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-03-09T15:49:57Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 74636783061.0
num_examples: 13847707
download_size: 41512074295
dataset_size: 74636783061.0
---
# Wikipedia and OSCAR Turkish Dataset
👋 Welcome to the "Wikipedia and OSCAR Turkish" Huggingface Repo!
📚 This repo contains a Turkish language dataset generated by merging Turkish Wikipedia with the OSCAR cleaned Common Crawl corpus. The dataset contains over 13 million examples with a single feature: `text`.
🔍 This dataset can be useful for natural language processing tasks in Turkish language.
📥 To download the dataset, you can use the Hugging Face Datasets library. Here's some sample code to get started:
```python
from datasets import load_dataset

dataset = load_dataset("musabg/wikipedia-oscar-tr")
```
🤖 Have fun exploring this dataset and training language models on it! |
datablations/oscar-filter | datablations | 2023-05-10T06:58:28Z | 219,435 | 0 | [
"size_categories:100M<n<1B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-02-01T13:04:53Z | null | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: meta
struct:
- name: warc_headers
struct:
- name: warc-record-id
dtype: string
- name: warc-date
dtype: string
- name: content-type
dtype: string
- name: content-length
dtype: int32
- name: warc-type
dtype: string
- name: warc-identified-content-language
dtype: string
- name: warc-refers-to
dtype: string
- name: warc-target-uri
dtype: string
- name: warc-block-digest
dtype: string
- name: identification
struct:
- name: label
dtype: string
- name: prob
dtype: float32
- name: annotations
sequence: string
- name: line_identifications
list:
- name: label
dtype: string
- name: prob
dtype: float32
- name: perplexity_score
dtype: float64
- name: text_length
dtype: int64
- name: url
dtype: string
- name: domain
dtype: string
- name: dup_ratio
dtype: float64
- name: pairs
sequence:
sequence: int64
- name: repetitions
sequence: binary
- name: included_in_dedup
dtype: bool
- name: cluster
sequence: int64
splits:
- name: train
num_bytes: 3188486875748
num_examples: 431992659
download_size: 419397499659
dataset_size: 3188486875748
---
this is the one where we build the suffix array for 25% Oscar and only deduplicate that part - by deduplication I mean removing any document which has an at least 100-char span overlapping with another document in the 25% chunk. This is very strict and preserves only about 20 million documents, so less than 5% of the full Oscar.
datablations/oscar-dedup-expanded | datablations | 2023-05-10T06:57:52Z | 117,762 | 0 | [
"size_categories:100M<n<1B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-02-10T18:42:08Z | null | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: meta
struct:
- name: warc_headers
struct:
- name: warc-record-id
dtype: string
- name: warc-date
dtype: string
- name: content-type
dtype: string
- name: content-length
dtype: int32
- name: warc-type
dtype: string
- name: warc-identified-content-language
dtype: string
- name: warc-refers-to
dtype: string
- name: warc-target-uri
dtype: string
- name: warc-block-digest
dtype: string
- name: identification
struct:
- name: label
dtype: string
- name: prob
dtype: float32
- name: annotations
sequence: string
- name: line_identifications
list:
- name: label
dtype: string
- name: prob
dtype: float32
- name: perplexity_score
dtype: float64
- name: text_length
dtype: int64
- name: url
dtype: string
- name: domain
dtype: string
- name: dup_ratio
dtype: float64
- name: pairs
sequence:
sequence: int64
- name: repetitions
sequence: binary
- name: included_in_dedup
dtype: bool
- name: cluster
sequence: int64
- name: has_dup_25
dtype: bool
splits:
- name: train
num_bytes: 3188540880787
num_examples: 431992659
download_size: 1732364041898
dataset_size: 3188540880787
---
Use the 25% suffix array to deduplicate the full Oscar, i.e. remove any document which has an at least 100-char span overlapping with the 25% chunk we selected in the previous bullet. This is more permissive and leaves us with 136 million documents or 31% of the original dataset. Also for reasons the explanation of which would probably involve terms like power laws, we still remove most of the most pervasive duplicates - so I'm pretty optimistic about this being useful.
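For illustration, the deduplication rule amounts to the following toy hash-set sketch (the actual pipeline uses a suffix array over the 25% chunk, which is far more memory-efficient; `dedup_chunk` and `full_oscar` are stand-ins for the real document collections):
```python
SPAN = 100  # minimum overlap length that triggers removal

def spans(text):
    """All overlapping SPAN-character windows of a document."""
    return (text[i:i + SPAN] for i in range(len(text) - SPAN + 1))

dedup_chunk = ["..."]  # stand-in: the 25% of OSCAR covered by the suffix array
full_oscar = ["..."]   # stand-in: all OSCAR documents

chunk_spans = set()
for doc in dedup_chunk:
    chunk_spans.update(spans(doc))

# Drop any document sharing at least one 100-char span with the chunk
kept = [doc for doc in full_oscar
        if not any(s in chunk_spans for s in spans(doc))]
```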
|
TempoFunk/tempofunk-sdance | TempoFunk | 2023-05-07T07:38:48Z | 98,746 | 5 | [
"task_categories:text-to-video",
"task_categories:text-to-image",
"task_categories:video-classification",
"task_categories:image-classification",
"language:en",
"license:agpl-3.0",
"size_categories:1K<n<10K",
"region:us"
] | [
"text-to-video",
"text-to-image",
"video-classification",
"image-classification"
] | 2023-04-19T05:08:11Z | null | ---
task_categories:
- text-to-video
- text-to-image
- video-classification
- image-classification
language:
- en
size_categories:
- 1K<n<10K
license: agpl-3.0
---
# TempoFunk S(mall)Dance
10k samples of metadata and encoded latents & prompts of videos themed around **dance**.
## Data format
- Video frame latents
- Numpy arrays
- 120 frames, 512x512 source size
- Encoded shape (120, 4, 64, 64)
- CLIP (openai) encoded prompts
- Video description (as seen in metadata)
- Encoded shape (77,768)
- Video metadata as JSON (description, tags, categories, source URLs, etc.) |
imvladikon/hebrew_speech_coursera | imvladikon | 2023-05-05T09:05:00Z | 325 | 7 | [
"task_categories:automatic-speech-recognition",
"language:he",
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"automatic-speech-recognition"
] | 2022-03-02T23:29:22Z | 1 | ---
task_categories:
- automatic-speech-recognition
language:
- he
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 6670706136.352
num_examples: 20306
- name: validation
num_bytes: 1648062261.28
num_examples: 5076
download_size: 7726933856
dataset_size: 8318768397.632
size_categories:
- 1K<n<10K
---
# Dataset Card for Hebrew Speech Coursera
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
A Hebrew speech recognition dataset collected from Coursera, consisting of audio clips sampled at 16 kHz paired with Hebrew transcriptions, with roughly 29 hours of training data and 7 hours of validation data (see the splits table below).
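The dataset can be loaded with the `datasets` library:
```python
from datasets import load_dataset

dataset = load_dataset("imvladikon/hebrew_speech_coursera")
```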
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
```json
{'audio': {'path': '/root/.cache/huggingface/datasets/downloads/extracted/89efd3a0fa3ead3f0b8e432e8796697a738d4561b24ff91f4fb2cc25d86e9fb0/train/ccef55189b7843d49110228cb0a71bfa115.wav',
'array': array([-0.01217651, -0.04351807, -0.06278992, ..., -0.00018311,
-0.00146484, -0.00349426]),
'sampling_rate': 16000},
'sentence': 'מצד אחד ובתנועה הציונית הצעירה'}
```
### Data Fields
[More Information Needed]
### Data Splits
| | train | validation |
| ---- | ----- | ---------- |
| number of samples | 20306 | 5076 |
| hours | 28.88 | 7.23 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{imvladikon2022hebrew_speech_coursera,
author = {Gurevich, Vladimir},
title = {Hebrew Speech Recognition Dataset: Coursera},
year = {2022},
  howpublished = {\url{https://huggingface.co/datasets/imvladikon/hebrew_speech_coursera}},
}
```
### Contributions
[More Information Needed] |
EleutherAI/pile | EleutherAI | 2023-05-03T15:58:14Z | 1,162 | 417 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:other",
"size_categories:100B<n<1T",
"arxiv:2201.07311",
"arxiv:2101.00027",
"region:us"
] | [
"text-generation",
"fill-mask"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license: other
multilinguality:
- monolingual
pretty_name: the Pile
size_categories:
- 100B<n<1T
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: the-pile
---
# Dataset Card for The Pile
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
This dataset card is a work in progress. Please also see [our datasheet](https://arxiv.org/abs/2201.07311) for more detailed info.
## Dataset Description
- **Homepage:** https://pile.eleuther.ai/
- **Repository:** https://github.com/EleutherAI/the-pile
- **Paper:** [The Pile: An 800GB Dataset of Diverse Text for Language Modeling](https://arxiv.org/abs/2101.00027)
- **Leaderboard:**
- **Point of Contact:** [EleutherAI](mailto:[email protected])
- **Datasheet:** [Datasheet for the Pile](https://arxiv.org/abs/2201.07311)
### Dataset Summary
The Pile is an 825 GiB diverse, open-source language modelling dataset consisting of 22 smaller, high-quality
datasets combined together.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
This dataset is in English (`EN`).
## Dataset Structure
### Data Instances
#### all
```
{
'meta': {'pile_set_name': 'Pile-CC'},
'text': 'It is done, and submitted. You can play “Survival of the Tastiest” on Android, and on the web. Playing on...'
}
```
<details>
<summary>Expand to see individual components</summary>
#### enron_emails
```
{
'text': 'Name\t\t\tNew Title\t\t\t\tEffective Date\t\t\tMid Year promotion Yes/No\n\nFloyd, Jodie\t\tSr Cust Svc Rep (no change)\t\t7/16/01\t\t\t\tNo\n\nBuehler, Craig\t\tSr Mkt/Sup Analyst (no change)\t\t7/16/01\t\t\t\tNo\n\nWagoner, Mike\t\tTeam Advisor - Gas Control\t\t7/1/01\t\t\t\tNo\n\nClapper, Karen\t\tSr Cust Svc Rep\t\t\t8/1/01\t\t\t\tYes\n\nGreaney, Chris\t\tSr Cust Svc Rep\t\t\t8/1/01\t\t\t\tYes\n\nWilkens, Jerry\t\tSr Cust Svc Rep\t\t\t8/1/01\t\t\t\tYes\n\nMinton, Kevin\t\tPipeline Controller\t\t\t8/1/01\t\t\t\tYes\n\nCox, Don\t\tPipeline Controller\t\t\t8/1/01\t\t\t\tYes\n\nHanagriff, Richard\tSr Accounting Control Spec\t\t8/1/01\t\t\t\tYes\n\n\nThanks,\nMS'
'meta': "{}",
}
```
#### europarl
```
{
'text': 'Uvádění biocidních přípravků na trh - Nový návrh revize týkající se biocidních přípravků (rozprava) \nPředsedající\nDalším bodem je společná rozprava o následujících tématech:\nzpráva paní Sârbuové za Výbor pro životní prostředí, veřejné zdraví a bezpečnost potravin o návrhu...'
'meta': "{'language': 'cs'}",
}
```
#### free_law
```
{
'meta': "{'case_jurisdiction': 'scotus.tar.gz', 'case_ID': '110921.json','date_created': '2010-04-28T17:12:49Z'}",
'text': '\n461 U.S. 238 (1983)\nOLIM ET AL.\nv.\nWAKINEKONA\nNo. 81-1581.\nSupreme Court of United States.\nArgued...'
}
```
#### hacker_news
```
{
'text': "\nChina Deserves Donald Trump - rm2889\nhttps://www.nytimes.com/2019/05/21/opinion/china-trump-trade.html\n======\nNotPaidToPost\n> so he’d be wise to curb his nationalistic “no-one-tells-China-what-to-do”\n> bluster\n\nThis comment highlights both ignorance of Chinese history and continuing\nAmerican arrogance.\n\nChina has been painfully dictated what to do during the last 200 years. This\nhas had a profound effect on the country and has led to the collapse of\nimperial rule and the drive to 'rejuvenate'...",
'meta': "{'id': '19979654'}",
}
```
#### nih_exporter
```
{
'text': "The National Domestic Violence Hotline (NDVH) and the National Dating Abuse Helpline (NDAH), which are supported by the Division of Family Violence Prevention and Services within the Family and Youth Services Bureau, serve as critical partners in the intervention, prevention, and resource assistance efforts of the network of family violence, domestic violence, and dating violence service providers. They provide crisis intervention and support services; information about resources on domestic...",
'meta': " {'APPLICATION_ID': 100065}",
}
```
#### pubmed
```
{
'meta': {'pmid': 11409574, 'language': 'eng'},
'text': 'Epidemiology of hypoxaemia in children with acute lower respiratory infection.\nTo determine the prevalence of hypoxaemia in children aged under 5 years suffering acute lower respiratory infections (ALRI), the risk factors for hypoxaemia in children under 5 years of age with ALRI, and the association of hypoxaemia with an increased risk of dying in children of the same age. Systematic review of the published literature. Out-patient clinics, emergency departments and hospitalisation wards in 23 health centres from 10 countries. Cohort studies reporting the frequency of hypoxaemia in children under 5 years of age with ALRI, and the association between hypoxaemia and the risk of dying. Prevalence of hypoxaemia measured in children with ARI and relative risks for the association between the severity of illness and the frequency of hypoxaemia, and between hypoxaemia and the risk of dying. Seventeen published studies were found that included 4,021 children under 5 with acute respiratory infections (ARI) and reported the prevalence of hypoxaemia. Out-patient children and those with a clinical diagnosis of upper ARI had a low risk of hypoxaemia (pooled estimate of 6% to 9%). The prevalence increased to 31% and to 43% in patients in emergency departments and in cases with clinical pneumonia, respectively, and it was even higher among hospitalised children (47%) and in those with radiographically confirmed pneumonia (72%). The cumulated data also suggest that hypoxaemia is more frequent in children living at high altitude. Three papers reported an association between hypoxaemia and death, with relative risks varying between 1.4 and 4.6. Papers describing predictors of hypoxaemia have focused on clinical signs for detecting hypoxaemia rather than on identifying risk factors for developing this complication. Hypoxaemia is a common and potentially lethal complication of ALRI in children under 5, particularly among those with severe disease and those living at high altitude. Given the observed high prevalence of hypoxaemia and its likely association with increased mortality, efforts should be made to improve the detection of hypoxaemia and to provide oxygen earlier to more children with severe ALRI.'
}
```
#### pubmed_central
```
{
'meta': "{id': 'PMC5595690'}",
'text': 'Introduction {#acel12642-sec-0001}\n============\n\nAlzheimer\\\'s disease (AD), the most common cause of...'
}
```
#### ubuntu_irc
```
{
'text': "#ubuntu 2004-07-05\n* Window 3\n* \tServer: [0] <None>\n* \tScreen: 0x817e90c\n* \tGeometry Info: [0 11 0 11 11 11] \n* \tCO, LI are [94 49] \n* \tCurrent channel: #ubuntu\n* \tQuery User: <None> \n*\tPrompt: <None>\n* \tSecond status line is OFF\n* \tSplit line is ON triple is OFF\n* \tLogging is ON\n* \tLogfile is irclogs/ubuntu.log\n* \tNotification is OFF\n* \tHold mode is OFF\n* \tWindow level is NONE\n* \tLastlog level is ALL\n* \tNotify level is ALL\n<mdz> lifeless: using tla effectively for all packages in Warty requ...",
'meta': "{'channel': 'ubuntu', 'month': 7}"
}
```
#### uspto
```
{
'text': "1. Field of the Invention\nIn an extensive plant breeding program, Grant Merrill, originator and now deceased, originated a large number of new and distinct varieties of fruit trees, and which included the herein-claimed variety of peach tree. Such plant breeding program was undertaken in originator's experimental orchard located near Exeter, Tulare County, Calif.\n2. Prior Varieties\nAmong the existent varieties of peach trees which were known to originator, particular reference is made to Gemfree (U.S. Plant Pat. No. 1,409) and June Lady (U.S. Plant Pat. No. 3,022) hereinafter mentioned for the purpose of comparison.",
'meta': "{'bibliographic_information': {'Patent Number': 'PP0049700', 'Series Code': '6', 'Application Number': '2845415', 'Application Type': '6', 'Art unit': '337', 'Application Filing Date': '19810720', 'Title of Invention': 'Peach tree (A3-10)', 'Issue Date': '19830104', 'Number of Claims': '1', 'Exemplary Claim Number(s)': '1', 'Primary Examiner': 'Bagwill; Robert E.', 'Number of Drawing Sheets': '1', 'Number of figures': '1'}, 'source_file': 'https://bulkdata.uspto.gov/data/patent/grant/redbook/fulltext/1983/pftaps19830104_wk01.zip', 'abstract': 'A peach tree which is large, vigorous, and spreading; foliated with large, lanceolate leaves having a finely serrate margin, a petiole of medium length and thickness, and medium size, reniform glands; blooms from medium size, conic, plump, pubescent buds; the flowers, medium in blooming period compared with other varieties, being of medium size, and pink; and is a regular and very productive bearer of medium but variable size, round truncate, clingstone fruit having yellow skin substantially overspread with red, yellow flesh mottled with red adjacent the skin, and an amber stone.', 'classifications': [{'OCL': ['Plt', '43'], 'EDF': ['3'], 'ICL': ['A01H', '503'], 'FSC': ['Plt'], 'FSS': ['43']}], 'inventors': [{'inventor name': 'Merrill, deceased; Grant', 'Street': '325 Breese Ave.', 'City': 'late of Red Bluff', 'State': 'CA'}, {'inventor name': 'Merrill, executrix; by Lucile B.', 'Street': '325 Breese Ave.', 'City': 'Red Bluff', 'State': 'CA', 'Zip code': '96080'}]}"
}
```
#### github
```
{
'text': "/* filesystem.c\n * Filesystem utility routines\n *\n * Wireshark - Network traffic analyzer\n * By Gerald Combs <[email protected]>\n * Copyright 1998 Gerald Combs\n *\n * SPDX-License-Identifier: GPL-2.0-or-later\n */\n\n#include <config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <errno.h>\n\n#include <glib.h>...",
'meta': "{'repo_name': 'wireshark/wireshark', 'stars': '2789', 'repo_language': 'C', 'file_name': 'packet-mpeg-audio-template.c', 'mime_type': 'text/x-c'}"
}
```
</details>
### Data Fields
#### all
- `text` (str): Text.
- `meta` (dict): Metadata of the data instance with keys:
- pile_set_name: Name of the subset.
<details>
<summary>Expand to see individual components</summary>
#### enron_emails
- `text` (str): Text.
- `meta` (str): Metadata of the data instance.
#### europarl
- `text` (str): Text.
- `meta` (str): Metadata of the data instance with: language.
#### free_law
- `text` (str): Text.
- `meta` (str): Metadata of the data instance with: case_ID, case_jurisdiction, date_created.
#### hacker_news
- `text` (str): Text.
- `meta` (str): Metadata of the data instance with: id.
#### nih_exporter
- `text` (str): Text.
- `meta` (str): Metadata of the data instance with: APPLICATION_ID.
#### pubmed
- `text` (str): Text.
- `meta` (str): Metadata of the data instance with: pmid, language.
#### pubmed_central
- `text` (str): Text.
- `meta` (str): Metadata of the data instance with: ID of the data instance.
#### ubuntu_irc
- `text` (str): Text.
- `meta` (str): Metadata of the data instance with: channel, month.
#### uspto
- `text` (str): Text.
- `meta` (str): Metadata of the data instance with: bibliographic_information, source_file, abstract, classifications,
inventors.
#### github
- `text` (str): Text.
- `meta` (str): Metadata of the data instance with: repo_name, stars, repo_language, file_name, mime_type.
### Data Splits
The "all" configuration is composed of 3 splits: train, validation and test.
</details>
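As a rough sketch of how the splits above can be consumed without downloading the full 825 GiB, the `all` configuration can be streamed (assuming streaming is supported for this repository):
```python
from datasets import load_dataset

# Stream the "all" config so the full dataset is not downloaded up front.
pile = load_dataset("EleutherAI/pile", "all", split="train", streaming=True)

for example in pile.take(3):
    print(example["meta"]["pile_set_name"], example["text"][:80])
```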
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was primarily curated by Leo Gao and Stella Biderman, with assistance from other authors of the Pile paper.
### Licensing Information
Please refer to the specific license depending on the subset you use:
- PubMed Central: [MIT License](https://github.com/EleutherAI/pile-pubmedcentral/blob/master/LICENSE)
### Citation Information
```
@article{gao2020pile,
title={The {P}ile: An 800{GB} dataset of diverse text for language modeling},
author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others},
journal={arXiv preprint arXiv:2101.00027},
year={2020}
}
@article{biderman2022datasheet,
title={Datasheet for the pile},
author={Biderman, Stella and Bicheno, Kieran and Gao, Leo},
journal={arXiv preprint arXiv:2201.07311},
year={2022}
}
```
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
ghoskno/laion-art-en-colorcanny | ghoskno | 2023-04-30T13:48:47Z | 14,845 | 2 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-04-30T05:14:10Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: conditioning_image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 507481937115.0
num_examples: 2639345
download_size: 48871327240
dataset_size: 507481937115.0
---
# Dataset Card for "laion-art-en-colorcanny"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jkot/dataset_merged_preprocesssed_v2 | jkot | 2023-04-28T20:06:15Z | 10,470 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-04-28T16:23:57Z | null | ---
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 229523006640
num_examples: 238899
- name: test
num_bytes: 12170045648
num_examples: 12669
download_size: 72324319243
dataset_size: 241693052288
---
# Dataset Card for "dataset_merged_preprocesssed_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jkot/parliament_hearings_processed | jkot | 2023-04-25T08:53:38Z | 20,640 | 1 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-04-21T10:06:00Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 51234859011.0
num_examples: 191455
- name: test
num_bytes: 762989296.0
num_examples: 2726
download_size: 51507735963
dataset_size: 51997848307.0
---
# Parliament hearings ASR dataset, preprocessed into truecased form.
## Original dataset: https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3126
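A minimal loading sketch based on the fields declared above (`id`, `audio`, `transcription`); decoding the audio column requires an audio backend such as `soundfile`:
```python
from datasets import load_dataset

ds = load_dataset("jkot/parliament_hearings_processed", split="test")

sample = ds[0]
print(sample["id"], sample["transcription"])
audio = sample["audio"]  # decoded to a float array at 16 kHz
print(len(audio["array"]) / audio["sampling_rate"], "seconds")
```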
---
dataset_info:
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: string
splits:
- name: train
num_bytes: 53645064353.18
num_examples: 191455
- name: test
num_bytes: 740331298.0
num_examples: 2726
download_size: 51507379112
dataset_size: 54385395651.18
--- |
nomic-ai/gpt4all-j-prompt-generations | nomic-ai | 2023-04-24T15:20:43Z | 281 | 221 | [
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-04-10T21:59:10Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 1774285641
num_examples: 808812
download_size: 990673616
dataset_size: 1774285641
license: apache-2.0
language:
- en
size_categories:
- 100K<n<1M
---
# Dataset Card for [GPT4All-J Prompt Generations]
## Dataset Description
Dataset used to train [GPT4All-J](https://huggingface.co/nomic-ai/gpt4all-j) and [GPT4All-J-LoRA](https://huggingface.co/nomic-ai/gpt4all-j-lora)
We release several versions of datasets
- **v1.0:** The original dataset we used to finetune GPT-J on
- **v1.1-breezy**: A filtered dataset where we removed all instances of `AI language model`
- **v1.2-jazzy**: A filtered dataset where we also removed instances like `I'm sorry, I can't answer...` and `AI language model`
- **v1.3-groovy**: The v1.2 dataset with ShareGPT and Dolly added with ~8% of semantic duplicates removed from the dataset using [Atlas](https://atlas.nomic.ai/)
The dataset defaults to `main`, which is `v1.0`. To download a specific version, pass the `revision` keyword argument to `load_dataset`:
```python
from datasets import load_dataset
jazzy = load_dataset("nomic-ai/gpt4all-j-prompt-generations", revision='v1.2-jazzy')
```
- **Homepage:** [gpt4all.io](https://gpt4all.io/)
- **Repository:** [gpt4all](https://github.com/nomic-ai/gpt4all)
- **Paper:** [Technical Report](https://static.nomic.ai/gpt4all/2023_GPT4All-J_Technical_Report_2.pdf)
- **Atlas Map:** [Map of Prompts](https://atlas.nomic.ai/map/gpt4all-j-prompts-curated) and [Responses](https://atlas.nomic.ai/map/gpt4all-j-response-curated) |
wangrui6/Zhihu-KOL | wangrui6 | 2023-04-23T13:26:03Z | 316 | 233 | [
"task_categories:question-answering",
"language:zh",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"question-answering"
] | 2023-02-25T00:21:29Z | null | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
- name: METADATA
dtype: string
splits:
- name: train
num_bytes: 2295601241
num_examples: 1006218
download_size: 1501204472
dataset_size: 2295601241
task_categories:
- question-answering
language:
- zh
---
# Dataset Card for "Zhihu-KOL"
Zhihu data for training Open Assistant
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
fnlp/moss-002-sft-data | fnlp | 2023-04-20T16:17:16Z | 122 | 95 | [
"task_categories:text-generation",
"language:en",
"language:zh",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"arxiv:2212.10560",
"region:us"
] | [
"conversational",
"text-generation"
] | 2023-04-20T10:14:09Z | null | ---
license: cc-by-4.0
task_categories:
- conversational
- text-generation
language:
- en
- zh
size_categories:
- 1M<n<10M
---
# Dataset Card for "moss-002-sft-data"
## Dataset Description
- **Homepage:** [https://txsun1997.github.io/blogs/moss.html](https://txsun1997.github.io/blogs/moss.html)
- **Repository:** [https://github.com/OpenLMLab/MOSS](https://github.com/OpenLMLab/MOSS)
- **Total amount of disk used:** 2.16 GB
### Dataset Summary
An open-source conversational dataset that was used to train MOSS-002. The user prompts are extended based on a small set of human-written seed prompts in a way similar to [Self-Instruct](https://arxiv.org/abs/2212.10560). The AI responses are generated using `text-davinci-003`. The user prompts of `en_harmlessness` are from [Anthropic red teaming data](https://github.com/anthropics/hh-rlhf/tree/master/red-team-attempts).
### Data Splits
| name | \# samples |
|----------------------|-----------:|
| en_helpfulness.json | 419049 |
| en_honesty.json | 112580 |
| en_harmlessness.json | 38873 |
| zh_helpfulness.json | 447750 |
| zh_honesty.json | 142885 |
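
Since the splits above ship as separate JSON files, one way to load a single portion is via `data_files` — a sketch, assuming the file names listed in the table:
```python
from datasets import load_dataset

# Load only the English helpfulness portion; file name taken from the table above.
ds = load_dataset(
    "fnlp/moss-002-sft-data",
    data_files="en_helpfulness.json",
    split="train",
)
print(len(ds))
```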
|
gokuls/processed_train_coco | gokuls | 2023-04-19T17:16:18Z | 11,734 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-04-19T10:45:00Z | null | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: pixel_values
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 60520900000
num_examples: 100000
download_size: 18447379186
dataset_size: 60520900000
---
# Dataset Card for "processed_train_coco"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BelleGroup/school_math_0.25M | BelleGroup | 2023-04-08T03:55:03Z | 257 | 104 | [
"task_categories:text2text-generation",
"language:zh",
"license:gpl-3.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text2text-generation"
] | 2023-04-02T06:57:09Z | null | ---
license: gpl-3.0
task_categories:
- text2text-generation
language:
- zh
size_categories:
- 100K<n<1M
---
# School Math 0.25M
## 内容
包含约25万条由[BELLE](https://github.com/LianjiaTech/BELLE)项目生成的中文数学题数据,包含解题过程。
注意:此数据集是由ChatGPT产生的,未经过严格校验,题目或解题过程可能包含错误。使用过程中请注意这一点。
## 样例
```
{
"instruction": "题目:小华手里有一个装满糖果的袋子,共有12个,小明想知道里面有多少颗糖果,于是他问小华:“你手里的糖果袋子里有偶数个糖果吗?”小华回答:“有,而且多于10颗。”请问小华手里的糖果袋子里最少有几颗糖果?",
"input": "",
"output": "\n由题目可知:小华手里的糖果袋子里有偶数个糖果;\n又知道小华手里的糖果袋子里有多于10颗糖果。\n因为糖果数为偶数,多于10颗,所以糖果数最小必须是12。\n所以小华手里的糖果袋子里最少有12颗糖果。"
}
```
### 字段
```
instruction: 指令
input: 输入(本数据集均为空)
output: 输出
```
## 局限性和使用限制
我们要求开发者仅将我们开源的代码、数据、模型及后续衍生物用于研究目的,不得用于商业,以及其他会对社会带来危害的用途。
由于数据是由*ChatGPT*生成的,未经严格验证,在事实性和其他方面还存在一些不足。因此,在使用此数据集时,请务必注意甄别。
本数据集不代表任何一方的立场、利益或想法,无关任何团体的任何类型的主张。因使用本数据集带来的任何损害、纠纷,本项目的开发者不承担任何责任。
***
# School Math 0.25M
## Contents
Includes approx. 250k Chinese school math quizzes and answers generated by BELLE, with steps to solve the quizzes.
Note: this subset was generated by *ChatGPT* and was not strictly verified. The quizzes or answers might contain errors. Please keep this in mind when using this subset.
## Sample
```
{
"instruction": "题目:小华手里有一个装满糖果的袋子,共有12个,小明想知道里面有多少颗糖果,于是他问小华:“你手里的糖果袋子里有偶数个糖果吗?”小华回答:“有,而且多于10颗。”请问小华手里的糖果袋子里最少有几颗糖果?",
"input": "",
"output": "\n由题目可知:小华手里的糖果袋子里有偶数个糖果;\n又知道小华手里的糖果袋子里有多于10颗糖果。\n因为糖果数为偶数,多于10颗,所以糖果数最小必须是12。\n所以小华手里的糖果袋子里最少有12颗糖果。"
}
```
### Schema
```
instruction: the instruction
input: input (empty throughout this dataset)
output: output
```
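A minimal loading sketch using the schema above:
```python
from datasets import load_dataset

ds = load_dataset("BelleGroup/school_math_0.25M", split="train")

example = ds[0]
print(example["instruction"])  # the math problem
print(example["output"])       # the step-by-step solution
```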
## Limitation and Usage Limits
We require developers only use the open-sourced code, data, model and any other artifacts generated via this project for research purposes. Commercial use and other potential harmful use cases are not allowed.
Since this dataset was generated by *ChatGPT* and was not strictly verified, it still has shortcomings regarding factuality and other aspects. When using this dataset, careful inspection is needed.
This dataset does not represent anyone's ground, interest or thought, and is not related to any kind of claim of any groups. The developers of this project do not assume any responsibility to potential harm inflicted by using this dataset and project. |
BelleGroup/train_2M_CN | BelleGroup | 2023-04-08T03:33:28Z | 250 | 107 | [
"task_categories:text2text-generation",
"language:zh",
"license:gpl-3.0",
"size_categories:1M<n<10M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text2text-generation"
] | 2023-04-08T02:58:58Z | null | ---
license: gpl-3.0
task_categories:
- text2text-generation
language:
- zh
size_categories:
- 1M<n<10M
---
## Contents
Contains roughly 2 million Chinese instruction examples generated by the [BELLE](https://github.com/LianjiaTech/BELLE) project.
## Sample
```
{
"instruction": "将以下三个句子组合成一个有意义的段落。\n狗是人类最好的朋友。它们非常聪明,可以进行各种活动。如果你喜欢散步,狗可以成为你一起散步的伙伴。",
"input": "",
"output": "狗是人类最好的朋友,它们非常聪明,可以进行各种活动。如果你喜欢散步,狗可以成为你一起散步的伙伴。出门散步是一种良好的锻炼方式,而有狗的陪伴会让散步变得更有趣,并且有狗在身边也能给你带来安全感。所以,拥有一只狗作为你的伙伴,可以帮助你变得更加积极主动和健康。"
}
```
### Schema:
```
instruction: the instruction
input: input (empty throughout this dataset)
output: output
```
## Usage Limits
This dataset and any derivatives generated from it may only be used for research purposes; commercial use and any other use that could harm society are not permitted.
This dataset does not represent the position, interests, or views of any party, and is unrelated to claims of any kind by any group. This project assumes no responsibility for any damage or dispute arising from the use of this dataset. |
medalpaca/medical_meadow_medqa | medalpaca | 2023-04-06T16:59:02Z | 668 | 95 | [
"task_categories:question-answering",
"language:en",
"language:zh",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"medical"
] | [
"question-answering"
] | 2023-04-06T16:56:15Z | null | ---
task_categories:
- question-answering
language:
- en
- zh
tags:
- medical
---
# Dataset Card for MedQA
## Dataset Description
- **Paper:**
### Dataset Summary
This is the data and baseline source code for the paper: Jin, Di, et al. "What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams."
From https://github.com/jind11/MedQA:
>The data that contains both the QAs and textbooks can be downloaded from [this google drive folder](https://drive.google.com/file/d/1ImYUSLk9JbgHXOemfvyiDiirluZHPeQw/view?usp=sharing). A bit of details of data are explained as below:
>
> For QAs, we have three sources: US, Mainland of China, and Taiwan District, which are put in folders, respectively. All files for QAs are in jsonl file format, where each line is a data sample as a dict. The "XX_qbank.jsonl" files contain all data samples while we also provide an official random split into train, dev, and test sets. Those files in the "metamap" folders are extracted medical related phrases using the Metamap tool.
>
> For QAs, we also include the "4_options" version in for US and Mainland of China since we reported results for 4 options in the paper.
>
> For textbooks, we have two languages: English and simplified Chinese. For simplified Chinese, we provide two kinds of sentence spliting: one is split by sentences, and the other is split by paragraphs.
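A minimal sketch for inspecting this Hugging Face mirror; the exact column layout is not documented above, so printing the column names first is the safest way to confirm it:
```python
from datasets import load_dataset

ds = load_dataset("medalpaca/medical_meadow_medqa", split="train")

# The column layout is not documented here, so inspect it before relying on it.
print(ds.column_names)
print(ds[0])
```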
### Citation Information
```
@article{jin2020disease,
title={What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams},
author={Jin, Di and Pan, Eileen and Oufattole, Nassim and Weng, Wei-Hung and Fang, Hanyi and Szolovits, Peter},
journal={arXiv preprint arXiv:2009.13081},
year={2020}
}
``` |
EdinburghNLP/xsum | EdinburghNLP | 2023-04-05T13:45:25Z | 39,758 | 113 | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:100K<n<1M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:1808.08745",
"region:us"
] | [
"summarization"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: Extreme Summarization (XSum)
paperswithcode_id: xsum
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
dataset_info:
features:
- name: document
dtype: string
- name: summary
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 479206608
num_examples: 204045
- name: validation
num_bytes: 26292901
num_examples: 11332
- name: test
num_bytes: 26756165
num_examples: 11334
download_size: 257302866
dataset_size: 532255674
---
# Dataset Card for "xsum"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/EdinburghNLP/XSum
- **Paper:** [Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization](https://arxiv.org/abs/1808.08745)
- **Point of Contact:** [Shashi Narayan](mailto:[email protected])
- **Size of downloaded dataset files:** 257.30 MB
- **Size of the generated dataset:** 532.26 MB
- **Total amount of disk used:** 789.56 MB
### Dataset Summary
Extreme Summarization (XSum) Dataset.
There are three features:
- document: Input news article.
- summary: One sentence summary of the article.
- id: BBC ID of the article.
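A minimal loading sketch using the three features above:
```python
from datasets import load_dataset

xsum = load_dataset("EdinburghNLP/xsum", split="validation")

article = xsum[0]
print(article["id"])
print(article["summary"])         # the one-sentence summary
print(article["document"][:200])  # the start of the full article
```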
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 257.30 MB
- **Size of the generated dataset:** 532.26 MB
- **Total amount of disk used:** 789.56 MB
An example of 'validation' looks as follows.
```
{
"document": "some-body",
"id": "29750031",
"summary": "some-sentence"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `document`: a `string` feature.
- `summary`: a `string` feature.
- `id`: a `string` feature.
### Data Splits
| name |train |validation|test |
|-------|-----:|---------:|----:|
|default|204045| 11332|11334|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{Narayan2018DontGM,
title={Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization},
author={Shashi Narayan and Shay B. Cohen and Mirella Lapata},
journal={ArXiv},
year={2018},
volume={abs/1808.08745}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@jbragg](https://github.com/jbragg), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
BelleGroup/train_1M_CN | BelleGroup | 2023-04-03T08:23:17Z | 336 | 149 | [
"task_categories:text2text-generation",
"language:zh",
"license:gpl-3.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text2text-generation"
] | 2023-03-31T08:53:50Z | null | ---
license: gpl-3.0
task_categories:
- text2text-generation
language:
- zh
size_categories:
- 100K<n<1M
---
## Contents
Contains roughly 1 million Chinese instruction examples generated by the [BELLE](https://github.com/LianjiaTech/BELLE) project.
## Sample
```
{
"instruction": "给定一个文字输入,将其中的所有数字加1。\n“明天的会议在9点开始,记得准时到达。”\n",
"input": "",
"output": "“明天的会议在10点开始,记得准时到达。”"
}
```
### Schema:
```
instruction: the instruction
input: input (empty throughout this dataset)
output: output
```
## Usage Limits
This dataset and any derivatives generated from it may only be used for research purposes; commercial use and any other use that could harm society are not permitted.
This dataset does not represent the position, interests, or views of any party, and is unrelated to claims of any kind by any group. This project assumes no responsibility for any damage or dispute arising from the use of this dataset.
|
BelleGroup/train_0.5M_CN | BelleGroup | 2023-04-03T08:11:22Z | 477 | 108 | [
"task_categories:text2text-generation",
"language:zh",
"license:gpl-3.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text2text-generation"
] | 2023-03-31T10:17:49Z | null | ---
license: gpl-3.0
task_categories:
- text2text-generation
language:
- zh
size_categories:
- 100K<n<1M
---
## Contents
Contains roughly 500,000 Chinese instruction examples generated by the [BELLE](https://github.com/LianjiaTech/BELLE) project.
## Sample
```
{
"instruction": "给定一个文字输入,将其中的所有数字加1。\n“明天的会议在9点开始,记得准时到达。”\n",
"input": "",
"output": "“明天的会议在10点开始,记得准时到达。”"
}
```
### Schema:
```
instruction: the instruction
input: input (empty throughout this dataset)
output: output
```
## Usage Limits
This dataset and any derivatives generated from it may only be used for research purposes; commercial use and any other use that could harm society are not permitted.
This dataset does not represent the position, interests, or views of any party, and is unrelated to claims of any kind by any group. This project assumes no responsibility for any damage or dispute arising from the use of this dataset.
|
RyokoAI/ShareGPT52K | RyokoAI | 2023-04-02T13:16:51Z | 471 | 322 | [
"task_categories:text-generation",
"language:en",
"language:es",
"language:de",
"language:multilingual",
"license:cc0-1.0",
"size_categories:10K<n<100K",
"region:us",
"conversation",
"rlhf",
"chatgpt",
"gpt-3.5"
] | [
"text-generation"
] | 2023-04-01T19:59:59Z | null | ---
license: cc0-1.0
language:
- en
- es
- de
- multilingual
pretty_name: ShareGPT 90K Conversations
task_categories:
- text-generation
tags:
- conversation
- rlhf
- chatgpt
- gpt-3.5
size_categories:
- 10K<n<100K
---
# Dataset Card for ShareGPT~~52K~~90K
## Dataset Description
- **Homepage: N/A**
- **Repository: N/A**
- **Paper: N/A**
- **Leaderboard: N/A**
- **Point of Contact:** Ronsor Labs <[email protected]>
### Dataset Summary
This dataset is a collection of approximately ~~52,000~~**90,000** conversations scraped via the ShareGPT API before it was shut down.
These conversations include both user prompts and responses from OpenAI's ChatGPT.
This repository now contains the new 90K conversations version. The previous 52K may
be found in the `old/` directory.
### Supported Tasks and Leaderboards
* text-generation
### Languages
This dataset is expected to primarily consist of messages in English and other Western languages.
## Dataset Structure
### Data Instances
```json
{
"id": "2wtLdZm",
"conversations": [
{
"from": "human",
"value": "Are you familiar with the SAS programming language?"
},
{
"from": "gpt",
"value": "<div class=\"markdown prose w-full break-words dark:prose-invert light\"><p>Yes, I am familiar with the SAS programming language. SAS is a statistical software suite that is widely used in data analysis, data management, and business intelligence applications. It has its own programming language, which is used to manipulate and analyze data, generate reports, and automate data-driven workflows. The language is known for its ease of use and powerful capabilities, and it is commonly used in industries such as healthcare, finance, and government.</p></div>"
},
{
"from": "human",
"value": "Can you write a SAS proc that imports a dataset from a CSV saved on my local machine?"
}
]
}
```
### Data Fields
* **id**: conversation id
* **conversations**: conversation session array
* **from**: ChatGPT ("gpt") or the user ("human")
* **value**: message contents as raw HTML
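A sketch of flattening one record into (prompt, response) pairs; since the `gpt` values are raw HTML, some stripping is usually needed (done naively here with a regex), and the file name is hypothetical:
```python
import json
import re

def strip_html(text: str) -> str:
    """Naive tag stripper; a real pipeline should use an HTML parser."""
    return re.sub(r"<[^>]+>", "", text)

def to_pairs(record: dict):
    """Yield (human, gpt) turn pairs from one ShareGPT record."""
    turns = record["conversations"]
    for prev, curr in zip(turns, turns[1:]):
        if prev["from"] == "human" and curr["from"] == "gpt":
            yield prev["value"], strip_html(curr["value"])

with open("sg_90k_part1.json") as f:  # hypothetical file name
    records = json.load(f)
for prompt, response in to_pairs(records[0]):
    print(prompt, "->", response[:80])
```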
### Data Splits
N/A
## Dataset Creation
### Curation Rationale
This is a decently large dataset of realistic human-AI conversations which I believe should be released
to the research community.
### Source Data
#### Initial Data Collection and Normalization
This data was collected using the ShareGPT API.
#### Who are the source language producers?
ShareGPT users and OpenAI ChatGPT.
### Annotations
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
This dataset *may* contain personal information, if ShareGPT users were sending such information to
ChatGPT. However, ChatGPT warns users not to submit personal information to it, so without further
evaluation we believe that this dataset should contain little or no personal information.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset may be used to train models that are competitive with OpenAI's ChatGPT. Please filter
this dataset first, as it may contain canned responses, raw HTML, and other undesirable information.
### Discussion of Biases
This dataset exhibits all the biases of OpenAI's ChatGPT models (GPT-3.5 and GPT-4) as well as the
biases of the users who uploaded the conversations.
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
None.
### Licensing Information
**CC0: No Rights Reserved.**
The output of machine learning algorithms is uncopyrightable in the United States and other jurisdictions.
**Additionally, the OpenAI terms of service do not apply to this dataset as users of this dataset
are not accessing the OpenAI service.**
### Citation Information
TODO
### Contributions
These conversations were allegedly scraped by an anonymous user on 4chan.
The 90K version was sourced from [this post](https://boards.4channel.org/g/thread/92487155/lmg-local-models-general-snail-edition#p92490887).
Thanks, anon!
|
tasksource/mmlu | tasksource | 2023-03-31T20:44:21Z | 51,212 | 34 | [
"task_categories:text-classification",
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"task_ids:open-domain-qa",
"task_ids:closed-domain-qa",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us",
"multi-task",
"multitask",
"mmlu",
"hendrycks_test"
] | [
"text-classification",
"multiple-choice",
"question-answering"
] | 2023-02-01T10:20:16Z | null | ---
license: apache-2.0
task_categories:
- text-classification
- multiple-choice
- question-answering
task_ids:
- multiple-choice-qa
- open-domain-qa
- closed-domain-qa
language:
- en
tags:
- multi-task
- multitask
- mmlu
- hendrycks_test
pretty_name: mmlu
---
MMLU (`hendrycks_test` on Hugging Face) without the auxiliary train split. It is much lighter (7 MB vs. 162 MB) and faster than the original implementation, in which the auxiliary train split is loaded (and duplicated!) by default for all configs, making it quite heavy.
We use this version in [tasksource](https://huggingface.co/tasksource).
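A minimal loading sketch; it assumes the per-subject config names of the original `hendrycks_test` are kept, which is worth verifying with `get_dataset_config_names`:
```python
from datasets import load_dataset, get_dataset_config_names

# Assumption: configs follow the original hendrycks_test subject names.
print(get_dataset_config_names("tasksource/mmlu")[:5])

mmlu = load_dataset("tasksource/mmlu", "abstract_algebra")
print(mmlu["test"][0])
```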
Reference to original dataset:
Measuring Massive Multitask Language Understanding - https://github.com/hendrycks/test
```
@article{hendryckstest2021,
title={Measuring Massive Multitask Language Understanding},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)},
year={2021}
}
``` |
eminorhan/llm-memory | eminorhan | 2023-03-31T00:38:46Z | 78,172 | 1 | [
"license:mit",
"arxiv:2303.17557",
"region:us"
] | [] | 2023-03-23T16:07:14Z | null | ---
license: mit
---
This repository contains the results of all experiments (including every single hyperparameter run) reported in the following paper:
Orhan AE (2023) [Recognition, recall, and retention of few-shot memories in large language models.](https://arxiv.org/abs/2303.17557) arXiv:2303.17557.
A brief description of the directories included in this repository:
* [`evals`](https://huggingface.co/datasets/eminorhan/llm-memory/tree/main/evals): contains the results of all recognition experiments
* [`recalls`](https://huggingface.co/datasets/eminorhan/llm-memory/tree/main/recalls): contains the results of all recall experiments
* [`re-evals`](https://huggingface.co/datasets/eminorhan/llm-memory/tree/main/re-evals): contains the results of all recognition experiments during the retention phase
* [`re-recalls`](https://huggingface.co/datasets/eminorhan/llm-memory/tree/main/re-recalls): contains the results of all recall experiments during the retention phase
* [`scratch-evals`](https://huggingface.co/datasets/eminorhan/llm-memory/tree/main/scratch-evals), [`scratch-recalls`](https://huggingface.co/datasets/eminorhan/llm-memory/tree/main/scratch-recalls), [`scratch-re-evals`](https://huggingface.co/datasets/eminorhan/llm-memory/tree/main/scratch-re-evals), [`scratch-re-recalls`](https://huggingface.co/datasets/eminorhan/llm-memory/tree/main/scratch-re-recalls): similar to the above, but the results are for the `gpt-j-6B-st` model trained from scratch on [`wikitext-103-raw-v1`](https://huggingface.co/datasets/wikitext). |
laion/OIG | laion | 2023-03-31T00:06:28Z | 8,176 | 303 | [
"license:apache-2.0",
"size_categories:10M<n<100M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | [] | 2023-03-05T00:34:58Z | null | ---
license: apache-2.0
---
# This is the Open Instruction Generalist Dataset
This is our attempt to create a large instruction dataset of medium quality, along with a smaller high-quality instruction dataset (OIG-small-chip2).
The data is in the form of JSONL objects, each with at least a 'text' field. Some datasets may also include a 'metadata' field. The 'text' field contains a string in one or more of the following forms:
- \<human\>: instruction\n\<bot\>: response
- \<human\>: instruction\n\<bot\>: response .. \<human\>: instruction\n\<bot\>: response
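A sketch of splitting a 'text' field into turns, assuming the literal `<human>:` / `<bot>:` markers shown above:
```python
import re

def parse_turns(text: str):
    """Split an OIG 'text' field into (speaker, utterance) turns."""
    pieces = re.split(r"<(human|bot)>:", text)
    # pieces = ['', 'human', ' instruction\n', 'bot', ' response', ...]
    return [
        (speaker, utterance.strip())
        for speaker, utterance in zip(pieces[1::2], pieces[2::2])
    ]

print(parse_turns("<human>: What is 2+2?\n<bot>: 4"))
# [('human', 'What is 2+2?'), ('bot', '4')]
```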
The purpose of the larger dataset is to perform continued pre-training, followed by a finetune on the smaller high-quality dataset.
The purpose of the smaller OIG-small-chip2 dataset is to make it easy to convert a language model pretrained on large amounts of text into an instruction-following model, using a small amount of additional compute via finetuning or soft-prompt tuning.
Many additional datasets are being prepared by various community members and will be incorporated into this dataset as we are able to verify the quality and formatting of the data. Our goal is to make helpful and non-toxic instruction tuned models available to everyone.
OIG is currently at 44M. We will continue to publish ever larger diverse instruction datasets with the goal of creating 1 trillion tokens of diverse instructions - enough to pretrain an LLM from scratch.
It is best to download the individual jsonl files directly that you wish to use instead of using HF load_datasets. https://huggingface.co/datasets/laion/OIG/tree/main
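One way to do that from Python is via `huggingface_hub` — a sketch, with the file name taken from the chip2 entry below:
```python
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="laion/OIG",
    filename="unified-chip2.jsonl",  # file name from the listing below
    repo_type="dataset",
)
with open(path) as f:
    first = json.loads(f.readline())
print(first["text"][:200])
```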
## unified_abstract_infill.jsonl (~232000)
dbpedia and wikipedia snippets combined with a small portion of https://github.com/google-research/dialog-inpainting
## unified_basic.jsonl (30)
## unified_conv_finqa.jsonl (~9000)
https://github.com/czyssrs/ConvFinQA
## unified_cuad.jsonl (~500)
https://www.atticusprojectai.org/cuad
## unified_essays.jsonl (~2000)
- essays available on the public web
## unified_grade_school_math_instructions.jsonl (~9000)
- https://github.com/openai/grade-school-math
## unified_hc3_human.jsonl (~58000)
## unified_image_prompts_instructions.jsonl (~15000)
- A very small subset of LAION-400M
## unified_joke_explanations.jsonl (356)
- Crawled from public internet.
## unified_mathqa_flanv2_kojma_cot.jsonl (~107000)
- https://huggingface.co/datasets/math_qa,
## unified_merged_code_xp3.jsonl (~67000)
- https://huggingface.co/datasets/bigscience/xP3
## unified_multi_news.jsonl (~90000)
- https://www.tensorflow.org/datasets/catalog/multi_news
## unified_multi_sum.jsonl (~1700000)
## unified_nq.jsonl (~307000)
## unified_openai_summarize_tldr.jsonl (~233000)
- https://github.com/openai/summarize-from-feedback
## unified_oscar_en_sample_dialog.jsonl (~2670000)
- https://oscar-project.org/
- https://huggingface.co/datasets/TurkuNLP/register_oscar
## unified_plot_screenplay_books_dialog.jsonl (~8000)
- https://github.com/markriedl/WikiPlots extracted from Wikipedia, snippets from the Pile’s https://huggingface.co/datasets/the_pile_books3, and snippets of screenplays available on the public web.
## unified_sqlv1.jsonl (~17000)
- public text 2 sql datasets.
## unified_sqlv2.jsonl(~24000)
- public text 2 sql datasets.
## unified_squad_v2.jsonl (~19000)
- https://rajpurkar.github.io/SQuAD-explorer/
## unified_squad_v2_more_neg.jsonl (~19000)
- https://rajpurkar.github.io/SQuAD-explorer/
## unified_ul2_plus_oscar_en_sample_dialog.jsonl (~2900000)
- https://oscar-project.org/
- https://huggingface.co/datasets/TurkuNLP/register_oscar
## unified_unifiedskg_instructions.jsonl (~223000)
- https://github.com/HKUNLP/UnifiedSKG
## unified_unnatural_instructions.jsonl (~238000)
- https://github.com/orhonovich/unnatural-instructions
## unified_xp3_sample.jsonl (~188000)
- https://huggingface.co/datasets/bigscience/xP3
## unified_canadian_parliament.jsonl(~301000)
- https://openparliament.ca/data-download/
## unified_poetry_2_song.jsonl (~12000)
- https://huggingface.co/datasets/merve/poetry
- https://huggingface.co/datasets/matthh/gutenberg-poetry-corpus
## unified_flan.jsonl (~2700000)
- https://github.com/google-research/FLAN/tree/main/flan/v2
## unified_ni.jsonl (~256000)
- https://github.com/allenai/natural-instructions
## unified_p3.jsonl (~31000000)
- https://huggingface.co/datasets/bigscience/P3
## unified_soda_dialog.jsonl (~1200000)
- https://huggingface.co/datasets/allenai/soda
## unified_rallio_soda_upgraded_2048.jsonl (~210000)
- https://huggingface.co/datasets/allenai/soda
- a newer version of the unified_soda_dialog dataset, with multiple dialogs on one line
- it is recommended to use either unified_soda_dialog.jsonl or unified_rallio_soda_upgraded_2048, not both.
## unified_rallio_safety_and_prosocial.jsonl (~319000)
- Generated from public datasets and generated from Wiki similar to the chip2 data
- Find a full list in the end of the document
- This dataset also includes https://huggingface.co/datasets/allenai/prosocial-dialog and https://huggingface.co/datasets/Anthropic/hh-rlhf
## unified-chip2.jsonl / OIG-small-chip2 (~210000):
This dataset was created as part of the LAION OA effort by @rallio67 and other LAION contributors. It is a high-quality dataset intended to be mixed into a large pre-train dataset and can be used for a final finetune. Chip2 contains:
### Python Code Examples (~6,000):
A set of instruction / response pairs where the user requests the agent to generate a Python function. These examples were generated using a large language model and few-shot prompting, with the Python code verified to execute. There are also ~3,000 manually curated one-line Python code examples from the Conala publication (see: https://conala-corpus.github.io/)
### Natural Instruction Examples (~124,000):
A balanced set of diverse natural and factual questions and answers made using few shot prompted UL2 20B and an instruction tuned GPT-NeoX-20B model (Chip) and then rejection sampled using multiple automatic evaluations to remove low quality outputs and to filter out factually inaccurate answers. Also includes some filtered natural instructions from Anthropic Helpful instructions (see: https://github.com/anthropics/hh-rlhf).
### Generic Harmless Instruction Examples (~6,500):
A set of instruction / response pairs sourced from the Anthropic redteam paper github (see: https://github.com/anthropics/hh-rlhf). This dataset includes a lot of data regarding real humans trying to make the Anthropic language models say harmful/toxic/trolling things. For this dataset only examples that were rated lowly on the harmful scale (0,1,2 out of 4, where 4 is the most toxic) were included. Again, only the first lines of dialogue (instruction, first_agent_response) were retained.
### Instruction/Responses with Lists (~14,000):
A set of filtered and reformatted instruction / response pairs where the agent response contains a list. Sourced from the Anthropic github (see: https://github.com/anthropics/hh-rlhf). Sourced from wikihow text lists created by b-mc2 (https://huggingface.co/datasets/b-mc2/wikihow_lists). And rejection filtered instruction response pairs generated by Chip20B that contained lists. All lists are formatted in a similar style.
### Follow-up questions (~12,500):
Examples of instructions and responses where an appropriate response is to ask for more information from the prompter. These examples were generated from a combination of few shot prompted UL2 20B (to generate natural questions) and a large dialogue prompted language model to generate the responses containing follow-up questions.
### Wikipedia Toxic Adversarial Questions (~12,000):
Questions and answers generated from wikipedia articles that discuss potentially sensitive topics (flagged as potentially toxic by an early toxicity detection model).
### Grade School Math GSM8K (~9,000):
GSM8K is a dataset of 8.5K high quality linguistically diverse grade school math word problems created by human problem writers. The dataset is segmented into 7.5K training problems and 1K test problems. These problems take between 2 and 8 steps to solve, and solutions primarily involve performing a sequence of elementary calculations using basic arithmetic operations (+ − ×÷) to reach the final answer. A bright middle school student should be able to solve every problem. It can be used for multi-step mathematical reasoning. (https://github.com/openai/grade-school-math)
### Reasoning Instructions (~4,500):
Examples from the Com2Sense and Strategy QA datasets that were reformatted into natural instructions using large language models with few shot prompting and additional quality filtering steps.
### Character and Scene Descriptions (~30,000):
Examples of instructions and responses for the generation of character or scene descriptions. Scenes were sourced from video game wikis and reformatted into instruction / response format using large language models or generated by few shot prompting with large language models.
## Support this project
Your contributions and feedback support the open source ecosystem, improve the bot and provide datasets for future AI research. To participate you can:
Submit Github issues, track issues and help create datasets that need improvement. https://github.com/LAION-AI/Open-Instruction-Generalist
Join our Discord to talk with other team members working on this! https://discord.gg/xBPBXfcFHd
## Update: March 20, 2023
- Added the metadata column to all datasets to alleviate issues with HF datasets loader.
- Broke some of the p3 dialogs into parts for ease of loading.
## Disclaimer
These datasets contain synthetic data and in some cases data that includes humans trying to get the language model to say toxic/offensive/trolling things. If you are concerned about the presence of this type of material in the dataset please make sure you carefully inspect each of the entries and filter appropriately. Our goal is for the model to be as helpful and non-toxic as possible and we are actively evaluating ways to reduce or eliminate undesirable content from the instruction tuning datasets.
## License
The OIG dataset that is authored by LAION volunteers is released under an Apache 2.0 license. However, the data also includes content licensed under other permissive licenses such as Wikipedia data which is licensed under CC-BY-SA, or web-crawled data which is used under fair use principles.
## Acknowledgement
- We would like to thank all of our amazing LAION volunteers including: @Rallio, @Jue, @Ce Zhang, @Player-1, @Laurel, @danielpatrickhug, @Jjmachan, @Mylo, @Khalid, @Coco.han, @Jordiclive, @Pszemraj, all volunteers from the Open Assistant project who initially created synthetic data, and many others.
- We would like to thank Together for their tireless dedication to the open source and AI community and their contribution to many of the datasets.
- We would like to thank AI Horde and user @Db0 for their incredible contribution of filtered data that were flagged as unethical.
- Please check out our related project: https://github.com/LAION-AI/Open-Assistant for our work in human feedback gathering and RLHF.
- Lastly, Ontocord.ai’s founders are grateful to have the opportunity to create a portion of the data augmentation and safety-moderation code for this project.
|
spdenisov/tokenized_udtrees_trunc | spdenisov | 2023-03-30T23:05:12Z | 51,068 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-03-28T22:21:32Z | null | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: labels
sequence: int64
- name: attention_mask
sequence: int8
- name: length
dtype: int64
splits:
- name: fr_0
num_bytes: 72813504
num_examples: 34912
- name: fr_1
num_bytes: 106992505
num_examples: 34884
- name: fr_2
num_bytes: 118066880
num_examples: 34858
- name: fr_3
num_bytes: 103747628
num_examples: 34886
- name: fr_4
num_bytes: 179954204
num_examples: 33724
- name: fr_5
num_bytes: 142682805
num_examples: 34681
- name: fr_6
num_bytes: 103669700
num_examples: 34887
- name: ar_0
num_bytes: 76392970
num_examples: 21341
- name: ar_1
num_bytes: 99682724
num_examples: 20211
- name: ar_2
num_bytes: 104828728
num_examples: 20561
- name: ar_3
num_bytes: 120387755
num_examples: 18591
- name: ar_4
num_bytes: 110845444
num_examples: 15239
- name: ar_5
num_bytes: 113333216
num_examples: 19622
- name: ar_6
num_bytes: 97966198
num_examples: 20004
- name: nl_0
num_bytes: 17678650
num_examples: 12289
- name: nl_1
num_bytes: 23522345
num_examples: 12289
- name: nl_2
num_bytes: 24563294
num_examples: 12289
- name: nl_3
num_bytes: 41551823
num_examples: 12274
- name: nl_4
num_bytes: 31583112
num_examples: 12289
- name: nl_5
num_bytes: 29817348
num_examples: 12289
- name: nl_6
num_bytes: 32965583
num_examples: 12287
- name: de_0
num_bytes: 295802185
num_examples: 166848
- name: de_1
num_bytes: 390229614
num_examples: 166845
- name: de_2
num_bytes: 411788885
num_examples: 166844
- name: de_3
num_bytes: 406127223
num_examples: 166845
- name: de_4
num_bytes: 794559733
num_examples: 166061
- name: de_5
num_bytes: 500383319
num_examples: 166830
- name: de_6
num_bytes: 362580545
num_examples: 166846
- name: ru_0
num_bytes: 150571543
num_examples: 89515
- name: ru_1
num_bytes: 195170653
num_examples: 89496
- name: ru_2
num_bytes: 199557398
num_examples: 89494
- name: ru_3
num_bytes: 175089824
num_examples: 89505
- name: ru_4
num_bytes: 385862504
num_examples: 88402
- name: ru_5
num_bytes: 239909307
num_examples: 89442
- name: ru_6
num_bytes: 254396827
num_examples: 89380
- name: pt_0
num_bytes: 33205205
num_examples: 30720
- name: pt_1
num_bytes: 43209797
num_examples: 30720
- name: pt_2
num_bytes: 45343903
num_examples: 30720
- name: pt_3
num_bytes: 44359504
num_examples: 30720
- name: pt_4
num_bytes: 63212871
num_examples: 30720
- name: pt_5
num_bytes: 53727187
num_examples: 30720
- name: pt_6
num_bytes: 39674213
num_examples: 30720
- name: ro_0
num_bytes: 17993349
num_examples: 8041
- name: ro_1
num_bytes: 23770442
num_examples: 8035
- name: ro_2
num_bytes: 24600913
num_examples: 8032
- name: ro_3
num_bytes: 27929669
num_examples: 8023
- name: ro_4
num_bytes: 48677219
num_examples: 7799
- name: ro_5
num_bytes: 29549023
num_examples: 8015
- name: ro_6
num_bytes: 21594484
num_examples: 8038
- name: hy_0
num_bytes: 12162343
num_examples: 3129
- name: hy_1
num_bytes: 13197354
num_examples: 3096
- name: hy_2
num_bytes: 11443297
num_examples: 3149
- name: hy_3
num_bytes: 10501791
num_examples: 3161
- name: hy_4
num_bytes: 16496323
num_examples: 2884
- name: hy_5
num_bytes: 12602551
num_examples: 3107
- name: hy_6
num_bytes: 10501791
num_examples: 3161
- name: en_0
num_bytes: 39190941
num_examples: 28685
- name: en_1
num_bytes: 54446758
num_examples: 28682
- name: en_2
num_bytes: 60866411
num_examples: 28681
- name: en_3
num_bytes: 57413241
num_examples: 28682
- name: en_4
num_bytes: 84543655
num_examples: 28628
- name: en_5
num_bytes: 73953982
num_examples: 28648
- name: en_6
num_bytes: 73215142
num_examples: 28626
- name: hu_0
num_bytes: 2242786
num_examples: 910
- name: hu_1
num_bytes: 2840123
num_examples: 910
- name: hu_2
num_bytes: 2835274
num_examples: 910
- name: hu_3
num_bytes: 2500576
num_examples: 910
- name: hu_4
num_bytes: 4799115
num_examples: 889
- name: hu_5
num_bytes: 3547088
num_examples: 908
- name: hu_6
num_bytes: 2500576
num_examples: 910
- name: tr_0
num_bytes: 75249383
num_examples: 60088
- name: tr_1
num_bytes: 83604892
num_examples: 60087
- name: tr_2
num_bytes: 83243895
num_examples: 60087
- name: tr_3
num_bytes: 74806746
num_examples: 60088
- name: tr_4
num_bytes: 148074211
num_examples: 60006
- name: tr_5
num_bytes: 98925962
num_examples: 60083
- name: tr_6
num_bytes: 74242806
num_examples: 60088
- name: it_0
num_bytes: 46804518
num_examples: 21711
- name: it_1
num_bytes: 66265256
num_examples: 21655
- name: it_2
num_bytes: 70151753
num_examples: 21637
- name: it_3
num_bytes: 63960323
num_examples: 21667
- name: it_4
num_bytes: 100412869
num_examples: 20900
- name: it_5
num_bytes: 82319403
num_examples: 21483
- name: it_6
num_bytes: 77655835
num_examples: 21535
- name: fi_0
num_bytes: 38406525
num_examples: 27185
- name: fi_1
num_bytes: 45852915
num_examples: 27178
- name: fi_2
num_bytes: 43964919
num_examples: 27179
- name: fi_3
num_bytes: 48780830
num_examples: 27184
- name: fi_4
num_bytes: 76447425
num_examples: 27109
- name: fi_5
num_bytes: 51991381
num_examples: 27170
- name: fi_6
num_bytes: 48559262
num_examples: 27153
- name: fa_0
num_bytes: 96243585
num_examples: 30906
- name: fa_1
num_bytes: 113502571
num_examples: 30784
- name: fa_2
num_bytes: 97058237
num_examples: 30894
- name: fa_3
num_bytes: 107038686
num_examples: 30851
- name: fa_4
num_bytes: 112125942
num_examples: 30822
- name: fa_5
num_bytes: 113077898
num_examples: 30767
- name: fa_6
num_bytes: 88091064
num_examples: 30932
- name: gd_0
num_bytes: 7335465
num_examples: 3537
- name: gd_1
num_bytes: 9467949
num_examples: 3530
- name: gd_2
num_bytes: 9689767
num_examples: 3528
- name: gd_3
num_bytes: 9926268
num_examples: 3525
- name: gd_4
num_bytes: 12713464
num_examples: 3465
- name: gd_5
num_bytes: 11546562
num_examples: 3499
- name: gd_6
num_bytes: 8709089
num_examples: 3534
- name: cy_0
num_bytes: 2373101
num_examples: 1111
- name: cy_1
num_bytes: 3082550
num_examples: 1111
- name: cy_2
num_bytes: 3112931
num_examples: 1111
- name: cy_3
num_bytes: 2934467
num_examples: 1111
- name: cy_4
num_bytes: 4784263
num_examples: 1111
- name: cy_5
num_bytes: 3757146
num_examples: 1111
- name: cy_6
num_bytes: 2757134
num_examples: 1111
- name: cs_0
num_bytes: 193204789
num_examples: 102111
- name: cs_1
num_bytes: 248532815
num_examples: 102085
- name: cs_2
num_bytes: 248265366
num_examples: 102085
- name: cs_3
num_bytes: 332530755
num_examples: 101916
- name: cs_4
num_bytes: 537663964
num_examples: 97317
- name: cs_5
num_bytes: 299610164
num_examples: 101990
- name: cs_6
num_bytes: 339589731
num_examples: 101777
- name: es_0
num_bytes: 71968866
num_examples: 28473
- name: es_1
num_bytes: 102260411
num_examples: 28443
- name: es_2
num_bytes: 109651662
num_examples: 28424
- name: es_3
num_bytes: 112979119
num_examples: 28404
- name: es_4
num_bytes: 163186080
num_examples: 27271
- name: es_5
num_bytes: 130959590
num_examples: 28317
- name: es_6
num_bytes: 119790214
num_examples: 28310
- name: zh_0
num_bytes: 23617606
num_examples: 7993
- name: zh_1
num_bytes: 32483372
num_examples: 7980
- name: zh_2
num_bytes: 29697463
num_examples: 7988
- name: zh_3
num_bytes: 28332743
num_examples: 7989
- name: zh_4
num_bytes: 27491845
num_examples: 7990
- name: zh_5
num_bytes: 35551944
num_examples: 7954
- name: zh_6
num_bytes: 26490384
num_examples: 7991
- name: no_0
num_bytes: 51325808
num_examples: 33282
- name: no_1
num_bytes: 67531367
num_examples: 33281
- name: no_2
num_bytes: 70471135
num_examples: 33281
- name: no_3
num_bytes: 61386787
num_examples: 33281
- name: no_4
num_bytes: 113337815
num_examples: 33227
- name: no_5
num_bytes: 84988095
num_examples: 33274
- name: no_6
num_bytes: 61386787
num_examples: 33281
- name: ga_0
num_bytes: 10164126
num_examples: 4000
- name: ga_1
num_bytes: 12904387
num_examples: 3995
- name: ga_2
num_bytes: 13000600
num_examples: 3995
- name: ga_3
num_bytes: 12458429
num_examples: 3996
- name: ga_4
num_bytes: 22263032
num_examples: 3924
- name: ga_5
num_bytes: 15711892
num_examples: 3980
- name: ga_6
num_bytes: 11531217
num_examples: 3996
- name: da_0
num_bytes: 7757634
num_examples: 4383
- name: da_1
num_bytes: 10310743
num_examples: 4383
- name: da_2
num_bytes: 10754121
num_examples: 4383
- name: da_3
num_bytes: 9369972
num_examples: 4383
- name: da_4
num_bytes: 17982417
num_examples: 4351
- name: da_5
num_bytes: 12936123
num_examples: 4378
- name: da_6
num_bytes: 9369972
num_examples: 4383
- name: cop_0
num_bytes: 7622435
num_examples: 1122
- name: cop_1
num_bytes: 7185677
num_examples: 972
- name: cop_2
num_bytes: 7618669
num_examples: 1143
- name: cop_3
num_bytes: 7622440
num_examples: 1145
- name: cop_4
num_bytes: 7298153
num_examples: 1011
- name: cop_5
num_bytes: 7482224
num_examples: 1084
- name: cop_6
num_bytes: 7630235
num_examples: 1174
- name: gv_0
num_bytes: 1200473
num_examples: 1172
- name: gv_1
num_bytes: 1567515
num_examples: 1172
- name: gv_2
num_bytes: 1599001
num_examples: 1172
- name: gv_3
num_bytes: 1424762
num_examples: 1172
- name: gv_4
num_bytes: 2042489
num_examples: 1171
- name: gv_5
num_bytes: 1881763
num_examples: 1170
- name: gv_6
num_bytes: 1424762
num_examples: 1172
download_size: 1339506450
dataset_size: 13867176061
---
# Dataset Card for "tokenized_udtrees_trunc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Chinese-Vicuna/guanaco_belle_merge_v1.0 | Chinese-Vicuna | 2023-03-30T07:49:30Z | 70 | 97 | [
"language:zh",
"language:en",
"language:ja",
"license:gpl-3.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-03-30T07:29:07Z | null | ---
license: gpl-3.0
language:
- zh
- en
- ja
---
Thanks to the [Guanaco Dataset](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset) and the [Belle Dataset](https://huggingface.co/datasets/BelleGroup/generated_train_0.5M_CN).
This dataset was created by merging the two datasets above into a single unified format so that they can be used for training with our code, [Chinese-Vicuna](https://github.com/Facico/Chinese-Vicuna). |
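A minimal sketch of loading the merged data with the `datasets` library (the `train` split name is an assumption based on the standard single-split layout):

```python
from datasets import load_dataset

# Load the merged Guanaco + Belle data.
ds = load_dataset("Chinese-Vicuna/guanaco_belle_merge_v1.0", split="train")
print(ds[0])
```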
HuggingFaceGECLM/REDDIT_submissions | HuggingFaceGECLM | 2023-03-17T07:44:37Z | 13,095 | 10 | [
"task_categories:text-generation",
"task_ids:dialogue-modeling",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"multilinguality:monolingual",
"language:en",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2001.08435",
"region:us",
"reddit",
"social-media"
] | [
"text-generation"
] | 2023-03-15T14:13:43Z | null | ---
dataset_info:
features:
- name: allow_live_comments
dtype: string
- name: archived
dtype: string
- name: author
dtype: string
- name: author_fullname
dtype: string
- name: banned_by
dtype: string
- name: category
dtype: string
- name: content_categories
dtype: string
- name: contest_mode
dtype: string
- name: created_utc
dtype: string
- name: discussion_type
dtype: string
- name: distinguished
dtype: string
- name: domain
dtype: string
- name: edited
dtype: string
- name: gilded
dtype: string
- name: hidden
dtype: string
- name: hide_score
dtype: string
- name: id
dtype: string
- name: is_created_from_ads_ui
dtype: string
- name: is_crosspostable
dtype: string
- name: is_meta
dtype: string
- name: is_original_content
dtype: string
- name: is_reddit_media_domain
dtype: string
- name: is_robot_indexable
dtype: string
- name: is_self
dtype: string
- name: is_video
dtype: string
- name: locked
dtype: string
- name: media
dtype: string
- name: media_embed
dtype: string
- name: media_only
dtype: string
- name: name
dtype: string
- name: no_follow
dtype: string
- name: num_comments
dtype: string
- name: num_crossposts
dtype: string
- name: over_18
dtype: string
- name: parent_whitelist_status
dtype: string
- name: permalink
dtype: string
- name: pinned
dtype: string
- name: post_hint
dtype: string
- name: pwls
dtype: string
- name: quarantine
dtype: string
- name: removed_by
dtype: string
- name: removed_by_category
dtype: string
- name: retrieved_on
dtype: string
- name: score
dtype: string
- name: secure_media
dtype: string
- name: secure_media_embed
dtype: string
- name: selftext
dtype: string
- name: send_replies
dtype: string
- name: spoiler
dtype: string
- name: stickied
dtype: string
- name: subreddit_id
dtype: string
- name: subreddit_name_prefixed
dtype: string
- name: subreddit_subscribers
dtype: string
- name: subreddit_type
dtype: string
- name: suggested_sort
dtype: string
- name: title
dtype: string
- name: top_awarded_type
dtype: string
- name: total_awards_received
dtype: string
- name: treatment_tags
dtype: string
- name: upvote_ratio
dtype: string
- name: url
dtype: string
- name: url_overridden_by_dest
dtype: string
- name: view_count
dtype: string
- name: whitelist_status
dtype: string
- name: wls
dtype: string
splits:
- name: tifu
num_bytes: 711926746
num_examples: 526283
- name: explainlikeimfive
num_bytes: 1407570925
num_examples: 1811324
- name: WritingPrompts
num_bytes: 883683696
num_examples: 1001358
- name: changemyview
num_bytes: 366049867
num_examples: 257332
- name: LifeProTips
num_bytes: 596724168
num_examples: 715494
- name: todayilearned
num_bytes: 1882122179
num_examples: 2153849
- name: science
num_bytes: 675817380
num_examples: 872768
- name: askscience
num_bytes: 1180347707
num_examples: 1562708
- name: ifyoulikeblank
num_bytes: 248876237
num_examples: 221368
- name: Foodforthought
num_bytes: 56817554
num_examples: 70647
- name: IWantToLearn
num_bytes: 97666128
num_examples: 103347
- name: bestof
num_bytes: 230879506
num_examples: 341029
- name: IAmA
num_bytes: 375534116
num_examples: 436003
- name: socialskills
num_bytes: 327412682
num_examples: 260354
- name: relationship_advice
num_bytes: 5050087947
num_examples: 3284961
- name: philosophy
num_bytes: 230221165
num_examples: 212792
- name: YouShouldKnow
num_bytes: 87706881
num_examples: 94635
- name: history
num_bytes: 295389153
num_examples: 284318
- name: books
num_bytes: 635450859
num_examples: 692807
- name: Showerthoughts
num_bytes: 4859309870
num_examples: 6358205
- name: personalfinance
num_bytes: 1813984142
num_examples: 1347837
- name: buildapc
num_bytes: 4754190700
num_examples: 3030207
- name: EatCheapAndHealthy
num_bytes: 95544413
num_examples: 79694
- name: boardgames
num_bytes: 379980593
num_examples: 287493
- name: malefashionadvice
num_bytes: 523741819
num_examples: 548587
- name: femalefashionadvice
num_bytes: 131338068
num_examples: 131110
- name: scifi
num_bytes: 148283250
num_examples: 134568
- name: Fantasy
num_bytes: 265612464
num_examples: 175866
- name: Games
num_bytes: 1112497898
num_examples: 830997
- name: bodyweightfitness
num_bytes: 154845910
num_examples: 144829
- name: SkincareAddiction
num_bytes: 908265410
num_examples: 890421
- name: podcasts
num_bytes: 114495922
num_examples: 113707
- name: suggestmeabook
num_bytes: 307022597
num_examples: 300601
- name: AskHistorians
num_bytes: 586939915
num_examples: 592242
- name: gaming
num_bytes: 7306865977
num_examples: 6418305
- name: DIY
num_bytes: 612049815
num_examples: 505769
- name: mildlyinteresting
num_bytes: 1497282377
num_examples: 1971187
- name: sports
num_bytes: 866461524
num_examples: 783890
- name: space
num_bytes: 413125181
num_examples: 415629
- name: gadgets
num_bytes: 242359652
num_examples: 284487
- name: Documentaries
num_bytes: 658519015
num_examples: 300935
- name: GetMotivated
num_bytes: 458864553
num_examples: 395894
- name: UpliftingNews
num_bytes: 294091853
num_examples: 285339
- name: technology
num_bytes: 1562501874
num_examples: 2112572
- name: Fitness
num_bytes: 939461866
num_examples: 1035109
- name: travel
num_bytes: 988622317
num_examples: 1012452
- name: lifehacks
num_bytes: 124628404
num_examples: 116871
- name: Damnthatsinteresting
num_bytes: 536680874
num_examples: 397143
- name: gardening
num_bytes: 652169745
num_examples: 723267
- name: programming
num_bytes: 455470198
num_examples: 571221
download_size: 15928530968
dataset_size: 49105493092
annotations_creators:
- no-annotation
language:
- en
language_creators:
- machine-generated
license: []
multilinguality:
- monolingual
pretty_name: Reddit submissions
size_categories:
- 1B<n<10B
source_datasets: []
tags:
- reddit
- social-media
task_categories:
- text-generation
task_ids:
- dialogue-modeling
- language-modeling
---
# Dataset Card for "REDDIT_submissions"
## Dataset Description
- **Homepage:**
- **Paper:** https://arxiv.org/abs/2001.08435
### Dataset Summary
Submissions from 50 high-quality subreddits, extracted from the Reddit PushShift data dumps (from 2006 to Jan 2023).
### Supported Tasks
These submissions can be used for text generation and language modeling, as well as dialogue modeling.
## Dataset Structure
### Data Splits
Each split corresponds to a specific subreddit in the following list: "tifu", "explainlikeimfive", "WritingPrompts", "changemyview", "LifeProTips", "todayilearned", "science", "askscience", "ifyoulikeblank", "Foodforthought", "IWantToLearn", "bestof", "IAmA", "socialskills", "relationship_advice", "philosophy", "YouShouldKnow", "history", "books", "Showerthoughts", "personalfinance", "buildapc", "EatCheapAndHealthy", "boardgames", "malefashionadvice", "femalefashionadvice", "scifi", "Fantasy", "Games", "bodyweightfitness", "SkincareAddiction", "podcasts", "suggestmeabook", "AskHistorians", "gaming", "DIY", "mildlyinteresting", "sports", "space", "gadgets", "Documentaries", "GetMotivated", "UpliftingNews", "technology", "Fitness", "travel", "lifehacks", "Damnthatsinteresting", "gardening", "programming"
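A minimal sketch of loading a single subreddit split by name (streaming avoids downloading the full ~16 GB of data; any split name from the list above works):

```python
from datasets import load_dataset

# Each subreddit is its own split; stream it to avoid a full download.
tifu = load_dataset("HuggingFaceGECLM/REDDIT_submissions", split="tifu", streaming=True)
first = next(iter(tifu))
print(first["title"], first["score"])
```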
## Dataset Creation
### Curation Rationale
All the information fields have been cast to string, as their formats change over time from one dump to the next. A reduced set of keys has been kept: "allow_live_comments", "archived", "author", "author_fullname", "banned_by", "category", "content_categories", "contest_mode", "created_utc", "discussion_type", "distinguished", "domain", "edited", "gilded", "hidden", "hide_score", "id", "is_created_from_ads_ui", "is_crosspostable", "is_meta", "is_original_content", "is_reddit_media_domain", "is_robot_indexable", "is_self", "is_video", "locked", "media", "media_embed", "media_only", "name", "no_follow", "num_comments", "num_crossposts", "over_18", "parent_whitelist_status", "permalink", "pinned", "post_hint", "pwls", "quarantine", "removed_by", "removed_by_category", "retrieved_on", "score", "secure_media", "secure_media_embed", "selftext", "send_replies", "spoiler", "stickied", "subreddit", "subreddit_id", "subreddit_name_prefixed", "subreddit_subscribers", "subreddit_type", "suggested_sort", "title", "top_awarded_type", "total_awards_received", "treatment_tags", "upvote_ratio", "url", "url_overridden_by_dest", "view_count", "whitelist_status", "wls".
### Source Data
The [Reddit PushShift data dumps](https://files.pushshift.io/reddit/) are part of a data collection effort which crawls Reddit at regular intervals, to extract and keep all its data.
#### Initial Data Collection and Normalization
See the paper.
#### Who are the source language producers?
Redditors are mostly young (65% below 30), male (70%), and American (50% of the site).
### Personal and Sensitive Information
The data contains Redditors' usernames associated with their content.
## Considerations for Using the Data
This dataset should be anonymized before any processing.
Though the selected subreddits are considered to be of higher quality, they can still reflect what can be found on the internet in terms of expressions of bias and toxicity.
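As one possible anonymization step, a minimal sketch that replaces the `author` field with an opaque identifier (the hashing scheme is an assumption, not part of the dataset):

```python
import hashlib

from datasets import load_dataset

submissions = load_dataset(
    "HuggingFaceGECLM/REDDIT_submissions", split="askscience", streaming=True
)

def anonymize(example):
    # Map the username to a stable, non-reversible identifier.
    example["author"] = hashlib.sha256(example["author"].encode()).hexdigest()[:12]
    return example

anonymized = submissions.map(anonymize)
print(next(iter(anonymized))["author"])
```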
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset. |
Dahoas/rm-static | Dahoas | 2023-03-06T00:13:07Z | 3,435 | 120 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2022-12-22T16:50:14Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 113850006
num_examples: 76256
- name: test
num_bytes: 7649255
num_examples: 5103
download_size: 73006535
dataset_size: 121499261
---
# Dataset Card for "rm-static"
Split of [hh-static](https://huggingface.co/datasets/Dahoas/static-hh) used for training reward models after supervised fine-tuning. |
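The `chosen`/`rejected` columns fit the standard pairwise reward-model objective; a minimal sketch follows (the scalar rewards below are dummies standing in for a reward model scoring `prompt + chosen` vs. `prompt + rejected`):

```python
import torch
import torch.nn.functional as F
from datasets import load_dataset

ds = load_dataset("Dahoas/rm-static", split="train")
pair = ds[0]  # fields: prompt, response, chosen, rejected

def pairwise_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    # Push the reward of the chosen continuation above the rejected one.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

loss = pairwise_loss(torch.tensor([1.3]), torch.tensor([0.2]))
print(loss.item())
```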
LeoCordoba/CC-NEWS-ES | LeoCordoba | 2023-02-23T21:53:55Z | 703 | 12 | [
"task_categories:summarization",
"task_categories:text-generation",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:cc-news",
"language:es",
"license:mit",
"size_categories:1M<n<10M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us",
"conditional-text-generation"
] | [
"summarization",
"text-generation"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- es
license:
- mit
multilinguality:
- monolingual
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
- 1M<n<10M
source_datasets:
- cc-news
task_categories:
- summarization
- text-generation
task_ids: []
tags:
- conditional-text-generation
---
# Dataset Card for CC-NEWS-ES
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [CC-NEWS-ES dataset repository](https://huggingface.co/datasets/LeoCordoba/CC-NEWS-ES)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Leonardo Ignacio Córdoba](https://www.linkedin.com/in/leonardo-ignacio-c%C3%B3rdoba/)
### Dataset Summary
CC-NEWS-ES is a Spanish-language news dataset. The corpus was generated by extracting the Spanish articles from CC-NEWS (the news index of Common Crawl) for 2019. To do so, a FastText model was used for language identification (a sketch of this step is given after the table below).
It contains a total of 7,473,286 texts and 1,812,009,283 words distributed as follows:
|domain | texts | words |
|:----|-----------------:|-----------------:|
| ar | 532703 | 1.45127e+08 |
| bo | 29557 | 7.28996e+06 |
| br | 107 | 14207 |
| cl | 116661 | 3.34633e+07 |
| co | 78662 | 1.92649e+07 |
| com | 3650950 | 8.44094e+08 |
| cr | 16542 | 3.82075e+06 |
| es | 1838790 | 4.82943e+08 |
| gt | 4833 | 838121 |
| hn | 36559 | 5.49933e+06 |
| mx | 724908 | 1.62198e+08 |
| ni | 40643 | 1.08501e+07 |
| pa | 18447 | 4.34724e+06 |
| pe | 230962 | 3.52123e+07 |
| pr | 7756 | 1.6633e+06 |
| py | 30651 | 2.08077e+07 |
| sv | 454 | 353145 |
| uy | 80948 | 2.72562e+07 |
| ve | 33148 | 6.96578e+06 |
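As an illustration of the language-identification step mentioned above, a minimal sketch using fastText's public `lid.176.bin` model (the exact pipeline and confidence threshold used for this corpus are assumptions):

```python
import fasttext

# Pre-trained language-identification model from
# https://fasttext.cc/docs/en/language-identification.html
model = fasttext.load_model("lid.176.bin")

def is_spanish(text: str, threshold: float = 0.8) -> bool:
    # fastText expects a single line of text.
    labels, probs = model.predict(text.replace("\n", " "), k=1)
    return labels[0] == "__label__es" and probs[0] >= threshold

print(is_spanish("La corte suprema emitió un fallo histórico."))
```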
### Supported Tasks and Leaderboards
TODO
-
### Languages
The text is in Spanish. The BCP-47 code for Spanish is es.
## Dataset Structure
### Data Instances
Each data instance contains the following features: ...
- country: top level domain, usually refers to a country (except in the case of .com).
- text: body of the news
- id: internal id
An example from CC-NEWS-ES looks like the following:
```
{'country': 'py',
'text': '“La que asumió es una mujer que está en línea de sucesión. La policía, ni los militares están en el Palacio, lo que ella dijo fue que no se podía seguir reprimiendo al pueblo", manifestó este jueves el senador colorado, Enrique Riera, sobre la asunción presidencial en Bolivia de la senadora opositora, Jeanine Áñez,Riera agregó que Evo Morales el que "escapó y abandonó" a su pueblo al ir como asilado a México. En ese sentido, dijo que irónicamente, el expresidente boliviano no eligió como destino a Venezuela, Nicaragua ni a Cuba.Sostuvo que nos de debe utilizar a las instituciones democráticas y republicanas para llegar al poder, cambiando Constituciones y prorrogando mandatos una y otra vez. “El amigo Morales no respetó absolutamente nada”, subrayó.Por otra parte, el senador colorado mencionó que los fiscales y jueces bolivianos deberían tener el "coraje" de investigar el origen de la riqueza de Morales.Habló también sobre la situación en Venezuela y mencionó que Nicolás Maduro no cae, porque "toda la FFAA está contaminada de narcotráfico". El hombre cuenta con orden de prisión en su país por los ilícitos de Tráfico de Drogas y Asociación Criminal, según el Consejo Nacional de Justicia del Brasil.La agente fiscal Liliana Denice Duarte, titular de la Unidad Fiscal Nº 1 de Presidente Franco, requirió la expulsión del extranjero y la jueza Carina Frutos Recalde, mediante Auto Interlocutorio (A.I.) N° 2.153, dio curso favorable al pedido del Ministerio Público. Esto considerando la alta expectativa de pena que tiene el supuesto delincuente en su país.La detención ...',
'id': 7328086}
```

Note: the text is shortened for simplicity.
### Data Fields
- ...
- ...
### Data Splits
...
## Dataset Creation
### Curation Rationale
[N/A]
### Source Data
#### Initial Data Collection and Normalization
TODO
#### Who are the source language producers?
Common Crawl: https://commoncrawl.org/
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
...
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
This dataset is maintained by [Leonardo Ignacio Córdoba](https://www.linkedin.com/in/leonardo-ignacio-c%C3%B3rdoba/) and was built with the help of [María Gaska](https://www.linkedin.com/in/mfgaska/).
### Licensing Information
[N/A]
### Citation Information
TODO
### Contributions
[N/A] |
GEM/xwikis | GEM | 2023-02-22T13:05:19Z | 7,718 | 4 | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:unknown",
"multilinguality:unknown",
"source_datasets:original",
"language:de",
"language:en",
"language:fr",
"language:cs",
"license:cc-by-sa-4.0",
"arxiv:2202.09583",
"region:us"
] | [
"summarization"
] | 2022-03-14T15:31:48Z | 1 | ---
annotations_creators:
- found
language_creators:
- unknown
language:
- de
- en
- fr
- cs
license:
- cc-by-sa-4.0
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- summarization
task_ids: []
pretty_name: xwikis
---
# Dataset Card for GEM/xwikis
## Dataset Description
- **Homepage:** https://github.com/lauhaide/clads
- **Repository:** [Needs More Information]
- **Paper:** https://arxiv.org/abs/2202.09583
- **Leaderboard:** N/A
- **Point of Contact:** Laura Perez-Beltrachini
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/xwikis).
### Dataset Summary
The XWikis Corpus provides datasets with different language pairs and directions for cross-lingual and multi-lingual abstractive document summarisation.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/xwikis')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/xwikis).
#### website
[Github](https://github.com/lauhaide/clads)
#### paper
https://arxiv.org/abs/2202.09583
#### authors
Laura Perez-Beltrachini (University of Edinburgh)
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
[Github](https://github.com/lauhaide/clads)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
https://arxiv.org/abs/2202.09583
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@InProceedings{clads-emnlp,
author = "Laura Perez-Beltrachini and Mirella Lapata",
title = "Models and Datasets for Cross-Lingual Summarisation",
booktitle = "Proceedings of The 2021 Conference on Empirical Methods in Natural Language Processing ",
year = "2021",
address = "Punta Cana, Dominican Republic",
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Laura Perez-Beltrachini
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
[email protected]
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
yes
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`German`, `English`, `French`, `Czech`, `Chinese`
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
Cross-lingual and Multi-lingual single long input document abstractive summarisation.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Summarization
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
Entity descriptive summarisation, that is, generate a summary that conveys the most salient facts of a document related to a given entity.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Laura Perez-Beltrachini (University of Edinburgh)
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Laura Perez-Beltrachini (University of Edinburgh) and Ronald Cardenas (University of Edinburgh)
### Dataset Structure
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
For each language pair and direction there exists a train/valid/test split.
The test split is a sample of size 7k from the intersection of titles existing in the four languages (cs,fr,en,de).
Train/valid are randomly split.
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
no
### GEM-Specific Curation
#### Modificatied for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
no
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
- identification of entity salient information
- translation
- multi-linguality
- cross-lingual transfer, zero-shot, few-shot
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`ROUGE`
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
yes
#### Other Evaluation Approaches
<!-- info: What evaluation approaches have others used? -->
<!-- scope: periscope -->
ROUGE-1/2/L
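For reference, a minimal sketch of computing these scores with the `rouge_score` package (the reference and prediction strings are placeholders):

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
# score(target, prediction) returns precision/recall/F1 per ROUGE variant.
scores = scorer.score(
    "the reference summary of the article",
    "a generated summary of the article",
)
print(scores["rougeL"].fmeasure)
```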
## Dataset Curation
### Original Curation
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`
#### Where was it found?
<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Single website`
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
other
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
found
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
#### Annotation Values
<!-- info: Purpose and values for each annotation -->
<!-- scope: microscope -->
The input documents have section structure information.
#### Any Quality Control?
<!-- info: Quality control measures? -->
<!-- scope: telescope -->
validated by another rater
#### Quality Control Details
<!-- info: Describe the quality control measures that were taken. -->
<!-- scope: microscope -->
Bilingual annotators assessed the content overlap of source document and target summaries.
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
no
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset ore related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for exemple because their language, language variety, or social or geographical context is underepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
no
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`public domain`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`public domain`
### Known Technical Limitations
|
Cohere/miracl-en-corpus-22-12 | Cohere | 2023-02-06T11:54:52Z | 12,044 | 2 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:en",
"license:apache-2.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-retrieval"
] | 2023-02-02T23:21:21Z | null | ---
annotations_creators:
- expert-generated
language:
- en
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (en) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-en-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-en-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-en-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-en-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-en-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-en-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-en-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-en-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, you must use **dot-product** similarity.
Compare the query embedding with the document embeddings, either via a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-en-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-en-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor(queries['emb'])
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it gives the share of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
allenai/prosocial-dialog | allenai | 2023-02-03T07:58:29Z | 265 | 112 | [
"task_categories:text-classification",
"task_ids:dialogue-generation",
"task_ids:multi-class-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"source_datasets:original",
"source_datasets:extended|social_bias_frames",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2205.12688",
"region:us",
"dialogue",
"dialogue safety",
"social norm",
"rules-of-thumb"
] | [
"conversational",
"text-classification"
] | 2022-10-30T04:24:12Z | null | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- crowdsourced
- machine-generated
license: cc-by-4.0
multilinguality:
- monolingual
pretty_name: ProsocialDialog
size_categories:
- 10K<n<100K
- 100K<n<1M
source_datasets:
- original
- extended|social_bias_frames
tags:
- dialogue
- dialogue safety
- social norm
- rules-of-thumb
task_categories:
- conversational
- text-classification
task_ids:
- dialogue-generation
- multi-class-classification
---
# Dataset Card for ProsocialDialog Dataset
## Dataset Description
- **Repository:** [Dataset and Model](https://github.com/skywalker023/prosocial-dialog)
- **Paper:** [ProsocialDialog: A Prosocial Backbone for Conversational Agents](https://aclanthology.org/2022.emnlp-main.267/)
- **Point of Contact:** [Hyunwoo Kim](mailto:[email protected])
## Dataset Summary
ProsocialDialog is the first large-scale multi-turn English dialogue dataset to teach conversational agents to respond to problematic content following social norms. Covering diverse unethical, problematic, biased, and toxic situations, ProsocialDialog contains responses that encourage prosocial behavior, grounded in commonsense social rules (i.e., rules-of-thumb, RoTs). Created via a human-AI collaborative framework, ProsocialDialog consists of 58K dialogues, with 331K utterances, 160K unique RoTs, and 497K dialogue safety labels accompanied by free-form rationales.
## Supported Tasks
* Dialogue response generation
* Dialogue safety prediction
* Rules-of-thumb generation
## Languages
English
## Dataset Structure
### Data Attributes
attribute | type | description
--- | --- | ---
`context` | str | the potentially unsafe utterance
`response` | str | the guiding utterance grounded on rules-of-thumb (`rots`)
`rots` | list of str\|null | the relevant rules-of-thumb for `context` when it is *not* labeled as \_\_casual\_\_
`safety_label` | str | the final verdict of the context according to `safety_annotations`: {\_\_casual\_\_, \_\_possibly\_needs\_caution\_\_, \_\_probably\_needs\_caution\_\_, \_\_needs\_caution\_\_, \_\_needs\_intervention\_\_}
`safety_annotations` | list of str | raw annotations from three workers: {casual, needs caution, needs intervention}
`safety_annotation_reasons` | list of str | the reasons behind the safety annotations in free-form text from each worker
`source` | str | the source of the seed text that was used to craft the first utterance of the dialogue: {socialchemistry, sbic, ethics_amt, ethics_reddit}
`etc` | str\|null | other information
`dialogue_id` | int | the dialogue index
`response_id` | int | the response index
`episode_done` | bool | an indicator of whether it is the end of the dialogue
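A minimal sketch of loading the dataset and reading these attributes (the `train` split name is an assumption based on the standard layout):

```python
from datasets import load_dataset

ds = load_dataset("allenai/prosocial-dialog", split="train")
ex = ds[0]
print(ex["context"])       # potentially unsafe utterance
print(ex["safety_label"])  # e.g. "__needs_caution__"
print(ex["rots"])          # grounding rules-of-thumb
print(ex["response"])      # prosocial guiding response
```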
## Dataset Creation
To create ProsocialDialog, we set up a human-AI collaborative data creation framework, where GPT-3 generates the potentially unsafe utterances, and crowdworkers provide prosocial responses to them. This approach allows us to circumvent two substantial challenges: (1) there are no available large-scale corpora of multiturn prosocial conversations between humans, and (2) asking humans to write unethical, toxic, or problematic utterances could result in psychological harms (Roberts, 2017; Steiger et al., 2021).
### Further Details, Social Impacts, and Limitations
Please refer to our [paper](https://arxiv.org/abs/2205.12688).
## Additional Information
### Citation
Please cite our work if you found the resources in this repository useful:
```
@inproceedings{kim2022prosocialdialog,
title={ProsocialDialog: A Prosocial Backbone for Conversational Agents},
author={Hyunwoo Kim and Youngjae Yu and Liwei Jiang and Ximing Lu and Daniel Khashabi and Gunhee Kim and Yejin Choi and Maarten Sap},
booktitle={EMNLP},
year=2022
}
``` |
indonlp/indonlu | indonlp | 2023-02-03T05:49:02Z | 373 | 34 | [
"task_categories:question-answering",
"task_categories:text-classification",
"task_categories:token-classification",
"task_ids:closed-domain-qa",
"task_ids:multi-class-classification",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"task_ids:semantic-similarity-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:id",
"license:mit",
"size_categories:10K<n<100K",
"arxiv:1809.03391",
"region:us",
"keyphrase-extraction",
"span-extraction",
"aspect-based-sentiment-analysis"
] | [
"question-answering",
"text-classification",
"token-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- id
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
- 1K<n<10K
- n<1K
source_datasets:
- original
task_categories:
- question-answering
- text-classification
- token-classification
task_ids:
- closed-domain-qa
- multi-class-classification
- named-entity-recognition
- part-of-speech
- semantic-similarity-classification
- sentiment-classification
paperswithcode_id: indonlu-benchmark
pretty_name: IndoNLU
configs:
- bapos
- casa
- emot
- facqa
- hoasa
- keps
- nergrit
- nerp
- posp
- smsa
- terma
- wrete
tags:
- keyphrase-extraction
- span-extraction
- aspect-based-sentiment-analysis
dataset_info:
- config_name: emot
features:
- name: tweet
dtype: string
- name: label
dtype:
class_label:
names:
0: sadness
1: anger
2: love
3: fear
4: happy
splits:
- name: train
num_bytes: 686418
num_examples: 3521
- name: validation
num_bytes: 84082
num_examples: 440
- name: test
num_bytes: 84856
num_examples: 440
download_size: 840917
dataset_size: 855356
- config_name: smsa
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
0: positive
1: neutral
2: negative
splits:
- name: train
num_bytes: 2209874
num_examples: 11000
- name: validation
num_bytes: 249629
num_examples: 1260
- name: test
num_bytes: 77041
num_examples: 500
download_size: 2509229
dataset_size: 2536544
- config_name: casa
features:
- name: sentence
dtype: string
- name: fuel
dtype:
class_label:
names:
0: negative
1: neutral
2: positive
- name: machine
dtype:
class_label:
names:
0: negative
1: neutral
2: positive
- name: others
dtype:
class_label:
names:
0: negative
1: neutral
2: positive
- name: part
dtype:
class_label:
names:
0: negative
1: neutral
2: positive
- name: price
dtype:
class_label:
names:
0: negative
1: neutral
2: positive
- name: service
dtype:
class_label:
names:
0: negative
1: neutral
2: positive
splits:
- name: train
num_bytes: 110415
num_examples: 810
- name: validation
num_bytes: 11993
num_examples: 90
- name: test
num_bytes: 23553
num_examples: 180
download_size: 144903
dataset_size: 145961
- config_name: hoasa
features:
- name: sentence
dtype: string
- name: ac
dtype:
class_label:
names:
0: neg
1: neut
2: pos
3: neg_pos
- name: air_panas
dtype:
class_label:
names:
0: neg
1: neut
2: pos
3: neg_pos
- name: bau
dtype:
class_label:
names:
0: neg
1: neut
2: pos
3: neg_pos
- name: general
dtype:
class_label:
names:
0: neg
1: neut
2: pos
3: neg_pos
- name: kebersihan
dtype:
class_label:
names:
0: neg
1: neut
2: pos
3: neg_pos
- name: linen
dtype:
class_label:
names:
0: neg
1: neut
2: pos
3: neg_pos
- name: service
dtype:
class_label:
names:
0: neg
1: neut
2: pos
3: neg_pos
- name: sunrise_meal
dtype:
class_label:
names:
0: neg
1: neut
2: pos
3: neg_pos
- name: tv
dtype:
class_label:
names:
0: neg
1: neut
2: pos
3: neg_pos
- name: wifi
dtype:
class_label:
names:
0: neg
1: neut
2: pos
3: neg_pos
splits:
- name: train
num_bytes: 458177
num_examples: 2283
- name: validation
num_bytes: 58248
num_examples: 285
- name: test
num_bytes: 56399
num_examples: 286
download_size: 477314
dataset_size: 572824
- config_name: wrete
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: category
dtype: string
- name: label
dtype:
class_label:
names:
0: NotEntail
1: Entail_or_Paraphrase
splits:
- name: train
num_bytes: 99999
num_examples: 300
- name: validation
num_bytes: 18049
num_examples: 50
- name: test
num_bytes: 32617
num_examples: 100
download_size: 151018
dataset_size: 150665
- config_name: posp
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
0: B-PPO
1: B-KUA
2: B-ADV
3: B-PRN
4: B-VBI
5: B-PAR
6: B-VBP
7: B-NNP
8: B-UNS
9: B-VBT
10: B-VBL
11: B-NNO
12: B-ADJ
13: B-PRR
14: B-PRK
15: B-CCN
16: B-$$$
17: B-ADK
18: B-ART
19: B-CSN
20: B-NUM
21: B-SYM
22: B-INT
23: B-NEG
24: B-PRI
25: B-VBE
splits:
- name: train
num_bytes: 2751348
num_examples: 6720
- name: validation
num_bytes: 343924
num_examples: 840
- name: test
num_bytes: 350720
num_examples: 840
download_size: 2407206
dataset_size: 3445992
- config_name: bapos
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
0: B-PR
1: B-CD
2: I-PR
3: B-SYM
4: B-JJ
5: B-DT
6: I-UH
7: I-NND
8: B-SC
9: I-WH
10: I-IN
11: I-NNP
12: I-VB
13: B-IN
14: B-NND
15: I-CD
16: I-JJ
17: I-X
18: B-OD
19: B-RP
20: B-RB
21: B-NNP
22: I-RB
23: I-Z
24: B-CC
25: B-NEG
26: B-VB
27: B-NN
28: B-MD
29: B-UH
30: I-NN
31: B-PRP
32: I-SC
33: B-Z
34: I-PRP
35: I-OD
36: I-SYM
37: B-WH
38: B-FW
39: I-CC
40: B-X
splits:
- name: train
num_bytes: 3772459
num_examples: 8000
- name: validation
num_bytes: 460058
num_examples: 1000
- name: test
num_bytes: 474368
num_examples: 1029
download_size: 3084021
dataset_size: 4706885
- config_name: terma
features:
- name: tokens
sequence: string
- name: seq_label
sequence:
class_label:
names:
0: I-SENTIMENT
1: O
2: I-ASPECT
3: B-SENTIMENT
4: B-ASPECT
splits:
- name: train
num_bytes: 817983
num_examples: 3000
- name: validation
num_bytes: 276335
num_examples: 1000
- name: test
num_bytes: 265922
num_examples: 1000
download_size: 816822
dataset_size: 1360240
- config_name: keps
features:
- name: tokens
sequence: string
- name: seq_label
sequence:
class_label:
names:
0: O
1: B
2: I
splits:
- name: train
num_bytes: 173961
num_examples: 800
- name: validation
num_bytes: 42961
num_examples: 200
- name: test
num_bytes: 66762
num_examples: 247
download_size: 134042
dataset_size: 283684
- config_name: nergrit
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
0: I-PERSON
1: B-ORGANISATION
2: I-ORGANISATION
3: B-PLACE
4: I-PLACE
5: O
6: B-PERSON
splits:
- name: train
num_bytes: 960710
num_examples: 1672
- name: validation
num_bytes: 119567
num_examples: 209
- name: test
num_bytes: 117274
num_examples: 209
download_size: 641265
dataset_size: 1197551
- config_name: nerp
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
0: I-PPL
1: B-EVT
2: B-PLC
3: I-IND
4: B-IND
5: B-FNB
6: I-EVT
7: B-PPL
8: I-PLC
9: O
10: I-FNB
splits:
- name: train
num_bytes: 2751348
num_examples: 6720
- name: validation
num_bytes: 343924
num_examples: 840
- name: test
num_bytes: 350720
num_examples: 840
download_size: 1725986
dataset_size: 3445992
- config_name: facqa
features:
- name: question
sequence: string
- name: passage
sequence: string
- name: seq_label
sequence:
class_label:
names:
0: O
1: B
2: I
splits:
- name: train
num_bytes: 2454368
num_examples: 2495
- name: validation
num_bytes: 306249
num_examples: 311
- name: test
num_bytes: 306831
num_examples: 311
download_size: 2591968
dataset_size: 3067448
---
# Dataset Card for IndoNLU
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [IndoNLU Website](https://www.indobenchmark.com/)
- **Repository:** [IndoNLU GitHub](https://github.com/indobenchmark/indonlu)
- **Paper:** [IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding](https://www.aclweb.org/anthology/2020.aacl-main.85.pdf)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for Bahasa Indonesia (Indonesian language).
There are 12 datasets in IndoNLU benchmark for Indonesian natural language understanding.
1. `EmoT`: An emotion classification dataset collected from the social media platform Twitter. The dataset consists of around 4000 Indonesian colloquial language tweets, covering five different emotion labels: anger, fear, happy, love, and sadness
2. `SmSA`: This sentence-level sentiment analysis dataset is a collection of comments and reviews in Indonesian obtained from multiple online platforms. The text was crawled and then annotated by several Indonesian linguists to construct this dataset. There are three possible sentiments on the `SmSA` dataset: positive, negative, and neutral
3. `CASA`: An aspect-based sentiment analysis dataset consisting of around a thousand car reviews collected from multiple Indonesian online automobile platforms. The dataset covers six aspects of car quality. We define the task to be a multi-label classification task, where each label represents a sentiment for a single aspect with three possible values: positive, negative, and neutral.
4. `HoASA`: An aspect-based sentiment analysis dataset consisting of hotel reviews collected from the hotel aggregator platform, [AiryRooms](https://github.com/annisanurulazhar/absa-playground). The dataset covers ten different aspects of hotel quality. Similar to the `CASA` dataset, each review is labeled with a single sentiment label for each aspect. There are four possible sentiment classes for each sentiment label: positive, negative, neutral, and positive-negative. The positive-negative label is given to a review that contains multiple sentiments of the same aspect but for different objects (e.g., cleanliness of bed and toilet).
5. `WReTE`: The Wiki Revision Edits Textual Entailment dataset consists of 450 sentence pairs constructed from Wikipedia revision history. The dataset contains pairs of sentences and binary semantic relations between the pairs. The data are labeled as entailed when the meaning of the second sentence can be derived from the first one, and not entailed otherwise.
6. `POSP`: This Indonesian part-of-speech tagging (POS) dataset is collected from Indonesian news websites. The dataset consists of around 8000 sentences with 26 POS tags. The POS tag labels follow the [Indonesian Association of Computational Linguistics (INACL) POS Tagging Convention](http://inacl.id/inacl/wp-content/uploads/2017/06/INACL-POS-Tagging-Convention-26-Mei.pdf).
7. `BaPOS`: This POS tagging dataset contains about 1000 sentences, collected from the [PAN Localization Project](http://www.panl10n.net/). In this dataset, each word is tagged by one of [23 POS tag classes](https://bahasa.cs.ui.ac.id/postag/downloads/Tagset.pdf). Data splitting used in this benchmark follows the experimental setting used by [Kurniawan and Aji (2018)](https://arxiv.org/abs/1809.03391).
8. `TermA`: This span-extraction dataset is collected from the hotel aggregator platform, [AiryRooms](https://github.com/jordhy97/final_project). The dataset consists of thousands of hotel reviews, which each contain a span label for aspect and sentiment words representing the opinion of the reviewer on the corresponding aspect. The labels use Inside-Outside-Beginning (IOB) tagging representation with two kinds of tags, aspect and sentiment.
9. `KEPS`: This keyphrase extraction dataset consists of text from Twitter discussing banking products and services and is written in the Indonesian language. A phrase containing important information is considered a keyphrase. Text may contain one or more keyphrases since important phrases can be located at different positions. The dataset follows the IOB chunking format, which represents the position of the keyphrase.
10. `NERGrit`: This NER dataset is taken from the [Grit-ID repository](https://github.com/grit-id/nergrit-corpus), and the labels are spans in IOB chunking representation. The dataset consists of three kinds of named entity tags, PERSON (name of person), PLACE (name of location), and ORGANIZATION (name of organization).
11. `NERP`: This NER dataset (Hoesen and Purwarianti, 2018) contains texts collected from several Indonesian news websites. There are five labels available in this dataset, PER (name of person), LOC (name of location), IND (name of product or brand), EVT (name of the event), and FNB (name of food and beverage). Similar to the `TermA` dataset, the `NERP` dataset uses the IOB chunking format.
12. `FacQA`: The goal of the FacQA dataset is to find the answer to a question from a provided short passage from a news article. Each row in the FacQA dataset consists of a question, a short passage, and a label phrase, which can be found inside the corresponding short passage. There are six categories of questions: date, location, name, organization, person, and quantitative.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Indonesian
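### Example Usage

All twelve tasks are distributed through the Hugging Face Hub. The minimal sketch below assumes each task is exposed as a configuration of the `indonlu` dataset under its lowercased name (e.g. `emot`, `smsa`):

```python
from datasets import load_dataset

# Assumed config naming: each IndoNLU task under its lowercased name.
# Script-based datasets may additionally require trust_remote_code=True
# on recent versions of the `datasets` library.
emot = load_dataset("indonlu", "emot")

print(emot)              # DatasetDict with train/validation/test splits
print(emot["train"][0])  # e.g. {'tweet': ..., 'label': ...}
```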
## Dataset Structure
### Data Instances
1. `EmoT` dataset
A data point consists of `tweet` and `label`. An example from the train set looks as follows:
```
{
'tweet': 'Ini adalah hal yang paling membahagiakan saat biasku foto bersama ELF #ReturnOfTheLittlePrince #HappyHeeChulDay',
'label': 4,
}
```
2. `SmSA` dataset
A data point consists of `text` and `label`. An example from the train set looks as follows:
```
{
'text': 'warung ini dimiliki oleh pengusaha pabrik tahu yang sudah puluhan tahun terkenal membuat tahu putih di bandung . tahu berkualitas , dipadu keahlian memasak , dipadu kretivitas , jadilah warung yang menyajikan menu utama berbahan tahu , ditambah menu umum lain seperti ayam . semuanya selera indonesia . harga cukup terjangkau . jangan lewatkan tahu bletoka nya , tidak kalah dengan yang asli dari tegal !',
'label': 0,
}
```
3. `CASA` dataset
A data point consists of `sentence` and multi-label `fuel`, `machine`, `others`, `part`, `price`, and `service`. An example from the train set looks as follows:
```
{
'sentence': 'Saya memakai Honda Jazz GK5 tahun 2014 ( pertama meluncur ) . Mobil nya bagus dan enak sesuai moto nya menyenangkan untuk dikendarai',
'fuel': 1,
'machine': 1,
'others': 2,
'part': 1,
'price': 1,
'service': 1
}
```
4. `HoASA` dataset
A data point consists of `sentence` and multi-label `ac`, `air_panas`, `bau`, `general`, `kebersihan`, `linen`, `service`, `sunrise_meal`, `tv`, and `wifi`. An example from the train set looks as follows:
```
{
'sentence': 'kebersihan kurang...',
'ac': 1,
'air_panas': 1,
'bau': 1,
'general': 1,
'kebersihan': 0,
'linen': 1,
'service': 1,
'sunrise_meal': 1,
'tv': 1,
'wifi': 1
}
```
5. `WReTE` dataset
A data point consists of `premise`, `hypothesis`, `category`, and `label`. An example from the train set looks as follows:
```
{
'premise': 'Pada awalnya bangsa Israel hanya terdiri dari satu kelompok keluarga di antara banyak kelompok keluarga yang hidup di tanah Kanan pada abad 18 SM .',
'hypothesis': 'Pada awalnya bangsa Yahudi hanya terdiri dari satu kelompok keluarga di antara banyak kelompok keluarga yang hidup di tanah Kanan pada abad 18 SM .',
'category': 'menolak perubahan teks terakhir oleh istimewa kontribusi pengguna 141 109 98 87 141 109 98 87 dan mengembalikan revisi 6958053 oleh johnthorne',
'label': 0,
}
```
6. `POSP` dataset
A data point consists of `tokens` and `pos_tags`. An example from the train set looks as follows:
```
{
'tokens': ['kepala', 'dinas', 'tata', 'kota', 'manado', 'amos', 'kenda', 'menyatakan', 'tidak', 'tahu', '-', 'menahu', 'soal', 'pencabutan', 'baliho', '.', 'ia', 'enggan', 'berkomentar', 'banyak', 'karena', 'merasa', 'bukan', 'kewenangannya', '.'],
'pos_tags': [11, 6, 11, 11, 7, 7, 7, 9, 23, 4, 21, 9, 11, 11, 11, 21, 3, 2, 4, 1, 19, 9, 23, 11, 21]
}
```
7. `BaPOS` dataset
A data point consists of `tokens` and `pos_tags`. An example from the train set looks as follows:
```
{
'tokens': ['Kera', 'untuk', 'amankan', 'pesta', 'olahraga'],
'pos_tags': [27, 8, 26, 27, 30]
}
```
8. `TermA` dataset
A data point consists of `tokens` and `seq_label`. An example from the train set looks as follows:
```
{
'tokens': ['kamar', 'saya', 'ada', 'kendala', 'di', 'ac', 'tidak', 'berfungsi', 'optimal', '.', 'dan', 'juga', 'wifi', 'koneksi', 'kurang', 'stabil', '.'],
'seq_label': [1, 1, 1, 1, 1, 4, 3, 0, 0, 1, 1, 1, 4, 2, 3, 0, 1]
}
```
9. `KEPS` dataset
A data point consists of `tokens` and `seq_label`. An example from the train set looks as follows:
```
{
'tokens': ['Setelah', 'melalui', 'proses', 'telepon', 'yang', 'panjang', 'tutup', 'sudah', 'kartu', 'kredit', 'bca', 'Ribet'],
'seq_label': [0, 1, 1, 2, 0, 0, 1, 0, 1, 2, 2, 1]
}
```
10. `NERGrit` dataset
A data point consists of `tokens` and `ner_tags`. An example from the train set looks as follows:
```
{
'tokens': ['Kontribusinya', 'terhadap', 'industri', 'musik', 'telah', 'mengumpulkan', 'banyak', 'prestasi', 'termasuk', 'lima', 'Grammy', 'Awards', ',', 'serta', 'dua', 'belas', 'nominasi', ';', 'dua', 'Guinness', 'World', 'Records', ';', 'dan', 'penjualannya', 'diperkirakan', 'sekitar', '64', 'juta', 'rekaman', '.'],
'ner_tags': [5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5]}
```
11. `NERP` dataset
A data point consists of `tokens` and `ner_tags`. An example from the train set looks as follows:
```
{
'tokens': ['kepala', 'dinas', 'tata', 'kota', 'manado', 'amos', 'kenda', 'menyatakan', 'tidak', 'tahu', '-', 'menahu', 'soal', 'pencabutan', 'baliho', '.', 'ia', 'enggan', 'berkomentar', 'banyak', 'karena', 'merasa', 'bukan', 'kewenangannya', '.'],
'ner_tags': [9, 9, 9, 9, 2, 7, 0, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9]
}
```
12. `FacQA` dataset
A data point consists of `question`, `passage`, and `seq_label`. An example from the train set looks as follows:
```
{
'passage': ['Lewat', 'telepon', 'ke', 'kantor', 'berita', 'lokal', 'Current', 'News', 'Service', ',', 'Hezb-ul', 'Mujahedeen', ',', 'kelompok', 'militan', 'Kashmir', 'yang', 'terbesar', ',', 'menyatakan', 'bertanggung', 'jawab', 'atas', 'ledakan', 'di', 'Srinagar', '.'],
'question': ['Kelompok', 'apakah', 'yang', 'menyatakan', 'bertanggung', 'jawab', 'atas', 'ledakan', 'di', 'Srinagar', '?'],
'seq_label': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
}
```
### Data Fields
1. `EmoT` dataset
- `tweet`: a `string` feature.
- `label`: an emotion label, with possible values including `sadness`, `anger`, `love`, `fear`, `happy`.
2. `SmSA` dataset
- `text`: a `string` feature.
- `label`: a sentiment label, with possible values including `positive`, `neutral`, `negative`.
3. `CASA` dataset
- `sentence`: a `string` feature.
- `fuel`: a sentiment label, with possible values including `negative`, `neutral`, `positive`.
- `machine`: a sentiment label, with possible values including `negative`, `neutral`, `positive`.
- `others`: a sentiment label, with possible values including `negative`, `neutral`, `positive`.
- `part`: a sentiment label, with possible values including `negative`, `neutral`, `positive`.
- `price`: a sentiment label, with possible values including `negative`, `neutral`, `positive`.
- `service`: a sentiment label, with possible values including `negative`, `neutral`, `positive`.
4. `HoASA` dataset
- `sentence`: a `string` feature.
- `ac`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`.
- `air_panas`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`.
- `bau`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`.
- `general`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`.
- `kebersihan`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`.
- `linen`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`.
- `service`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`.
- `sunrise_meal`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`.
- `tv`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`.
- `wifi`: a sentiment label, with possible values including `neg`, `neut`, `pos`, `neg_pos`.
5. `WReTE` dataset
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `category`: a `string` feature.
- `label`: a classification label, with possible values including `NotEntail`, `Entail_or_Paraphrase`.
6. `POSP` dataset
- `tokens`: a `list` of `string` features.
- `pos_tags`: a `list` of POS tag labels, with possible values including `B-PPO`, `B-KUA`, `B-ADV`, `B-PRN`, `B-VBI`.
The POS tag labels follow the [Indonesian Association of Computational Linguistics (INACL) POS Tagging Convention](http://inacl.id/inacl/wp-content/uploads/2017/06/INACL-POS-Tagging-Convention-26-Mei.pdf).
7. `BaPOS` dataset
- `tokens`: a `list` of `string` features.
- `pos_tags`: a `list` of POS tag labels, with possible values including `B-PR`, `B-CD`, `I-PR`, `B-SYM`, `B-JJ`.
The POS tag labels are taken from [Tagset UI](https://bahasa.cs.ui.ac.id/postag/downloads/Tagset.pdf).
8. `TermA` dataset
- `tokens`: a `list` of `string` features.
- `seq_label`: a `list` of classification labels, with possible values including `I-SENTIMENT`, `O`, `I-ASPECT`, `B-SENTIMENT`, `B-ASPECT`.
9. `KEPS` dataset
- `tokens`: a `list` of `string` features.
- `seq_label`: a `list` of classification labels, with possible values including `O`, `B`, `I`.
The labels use Inside-Outside-Beginning (IOB) tagging.
10. `NERGrit` dataset
- `tokens`: a `list` of `string` features.
- `ner_tags`: a `list` of NER tag labels, with possible values including `I-PERSON`, `B-ORGANISATION`, `I-ORGANISATION`, `B-PLACE`, `I-PLACE`.
The labels use Inside-Outside-Beginning (IOB) tagging.
11. `NERP` dataset
- `tokens`: a `list` of `string` features.
- `ner_tags`: a `list` of NER tag labels, with possible values including `I-PPL`, `B-EVT`, `B-PLC`, `I-IND`, `B-IND`.
12. `FacQA` dataset
- `question`: a `list` of `string` features.
- `passage`: a `list` of `string` features.
- `seq_label`: a `list` of classification labels, with possible values including `O`, `B`, `I`.
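The integer labels shown in the data instances above map back to the label names listed here through the dataset's `ClassLabel` features. A small sketch, again assuming the `indonlu` Hub name and lowercased config names:

```python
from datasets import load_dataset

emot = load_dataset("indonlu", "emot", split="train")

# ClassLabel features carry the int -> name mapping.
label_names = emot.features["label"].names
print(label_names)                    # e.g. ['sadness', 'anger', 'love', 'fear', 'happy']
print(label_names[emot[0]["label"]])  # name of the first example's label
```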
### Data Splits
The data is split into a training, validation and test set.
| | dataset | Train | Valid | Test |
|----|---------|-------|-------|------|
| 1 | EmoT | 3521 | 440 | 440 |
| 2 | SmSA | 11000 | 1260 | 500 |
| 3 | CASA | 810 | 90 | 180 |
| 4 | HoASA | 2283 | 285 | 286 |
| 5 | WReTE | 300 | 50 | 100 |
| 6 | POSP | 6720 | 840 | 840 |
| 7 | BaPOS | 8000 | 1000 | 1029 |
| 8 | TermA | 3000 | 1000 | 1000 |
| 9 | KEPS | 800 | 200 | 247 |
| 10 | NERGrit | 1672 | 209 | 209 |
| 11 | NERP | 6720 | 840 | 840 |
| 12 | FacQA | 2495 | 311 | 311 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
The IndoNLU benchmark datasets are released under the MIT License.
### Citation Information
IndoNLU citation
```
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
year={2020}
}
```
`EmoT` dataset citation
```
@inproceedings{saputri2018emotion,
title={Emotion Classification on Indonesian Twitter Dataset},
  author={Mei Silviana Saputri and Rahmad Mahendra and Mirna Adriani},
  booktitle={Proceedings of the 2018 International Conference on Asian Language Processing (IALP)},
pages={90--95},
year={2018},
organization={IEEE}
}
```
`SmSA` dataset citation
```
@inproceedings{purwarianti2019improving,
title={Improving Bi-LSTM Performance for Indonesian Sentiment Analysis Using Paragraph Vector},
author={Ayu Purwarianti and Ida Ayu Putu Ari Crisdayanti},
booktitle={Proceedings of the 2019 International Conference of Advanced Informatics: Concepts, Theory and Applications (ICAICTA)},
pages={1--5},
year={2019},
organization={IEEE}
}
```
`CASA` dataset citation
```
@inproceedings{ilmania2018aspect,
title={Aspect Detection and Sentiment Classification Using Deep Neural Network for Indonesian Aspect-based Sentiment Analysis},
  author={Arfinda Ilmania and Abdurrahman and Samuel Cahyawijaya and Ayu Purwarianti},
  booktitle={Proceedings of the 2018 International Conference on Asian Language Processing (IALP)},
pages={62--67},
year={2018},
organization={IEEE}
}
```
`HoASA` dataset citation
```
@inproceedings{azhar2019multi,
title={Multi-label Aspect Categorization with Convolutional Neural Networks and Extreme Gradient Boosting},
  author={A. N. Azhar and M. L. Khodra and A. P. Sutiono},
booktitle={Proceedings of the 2019 International Conference on Electrical Engineering and Informatics (ICEEI)},
pages={35--40},
year={2019}
}
```
`WReTE` dataset citation
```
@inproceedings{setya2018semi,
title={Semi-supervised Textual Entailment on Indonesian Wikipedia Data},
author={Ken Nabila Setya and Rahmad Mahendra},
booktitle={Proceedings of the 2018 International Conference on Computational Linguistics and Intelligent Text Processing (CICLing)},
year={2018}
}
```
`POSP` dataset citation
```
@inproceedings{hoesen2018investigating,
title={Investigating Bi-LSTM and CRF with POS Tag Embedding for Indonesian Named Entity Tagger},
author={Devin Hoesen and Ayu Purwarianti},
booktitle={Proceedings of the 2018 International Conference on Asian Language Processing (IALP)},
pages={35--38},
year={2018},
organization={IEEE}
}
```
`BaPOS` dataset citation
```
@inproceedings{dinakaramani2014designing,
title={Designing an Indonesian Part of Speech Tagset and Manually Tagged Indonesian Corpus},
  author={Arawinda Dinakaramani and Fam Rashel and Andry Luthfi and Ruli Manurung},
booktitle={Proceedings of the 2014 International Conference on Asian Language Processing (IALP)},
pages={66--69},
year={2014},
organization={IEEE}
}
@inproceedings{kurniawan2018toward,
title={Toward a Standardized and More Accurate Indonesian Part-of-Speech Tagging},
author={Kemal Kurniawan and Alham Fikri Aji},
booktitle={Proceedings of the 2018 International Conference on Asian Language Processing (IALP)},
pages={303--307},
year={2018},
organization={IEEE}
}
```
`TermA` dataset citation
```
@article{winatmoko2019aspect,
title={Aspect and Opinion Term Extraction for Hotel Reviews Using Transfer Learning and Auxiliary Labels},
  author={Yosef Ardhito Winatmoko and Ali Akbar Septiandri and Arie Pratama Sutiono},
journal={arXiv preprint arXiv:1909.11879},
year={2019}
}
@article{fernando2019aspect,
title={Aspect and Opinion Terms Extraction Using Double Embeddings and Attention Mechanism for Indonesian Hotel Reviews},
  author={Jordhy Fernando and Masayu Leylia Khodra and Ali Akbar Septiandri},
journal={arXiv preprint arXiv:1908.04899},
year={2019}
}
```
`KEPS` dataset citation
```
@inproceedings{mahfuzh2019improving,
title={Improving Joint Layer RNN based Keyphrase Extraction by Using Syntactical Features},
  author={Miftahul Mahfuzh and Sidik Soleman and Ayu Purwarianti},
booktitle={Proceedings of the 2019 International Conference of Advanced Informatics: Concepts, Theory and Applications (ICAICTA)},
pages={1--6},
year={2019},
organization={IEEE}
}
```
`NERGrit` dataset citation
```
@online{nergrit2019,
title={NERGrit Corpus},
author={NERGrit Developers},
year={2019},
url={https://github.com/grit-id/nergrit-corpus}
}
```
`NERP` dataset citation
```
@inproceedings{hoesen2018investigating,
title={Investigating Bi-LSTM and CRF with POS Tag Embedding for Indonesian Named Entity Tagger},
author={Devin Hoesen and Ayu Purwarianti},
booktitle={Proceedings of the 2018 International Conference on Asian Language Processing (IALP)},
pages={35--38},
year={2018},
organization={IEEE}
}
```
`FacQA` dataset citation
```
@inproceedings{purwarianti2007machine,
title={A Machine Learning Approach for Indonesian Question Answering System},
  author={Ayu Purwarianti and Masatoshi Tsuchiya and Seiichi Nakagawa},
  booktitle={Proceedings of Artificial Intelligence and Applications},
pages={573--578},
year={2007}
}
```
### Contributions
Thanks to [@yasirabd](https://github.com/yasirabd) for adding this dataset. |
GBaker/MedQA-USMLE-4-options | GBaker | 2023-01-24T19:18:09Z | 2,121 | 57 | [
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-01-24T19:08:56Z | 2 | ---
license: cc-by-4.0
language:
- en
---
Original dataset introduced by Jin et al. in [What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams](https://paperswithcode.com/paper/what-disease-does-this-patient-have-a-large)
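A minimal loading sketch; the exact split names and fields are assumptions best verified by inspecting the returned object:

```python
from datasets import load_dataset

# Load directly from the Hub and inspect the available splits and fields.
ds = load_dataset("GBaker/MedQA-USMLE-4-options")
print(ds)
```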
#### Citation information:
@article{jin2020disease,
title={What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams},
author={Jin, Di and Pan, Eileen and Oufattole, Nassim and Weng, Wei-Hung and Fang, Hanyi and Szolovits, Peter},
journal={arXiv preprint arXiv:2009.13081},
year={2020}
}
|
ola13/c4-clusters | ola13 | 2023-01-20T13:22:45Z | 10,474 | 0 | [
"size_categories:100M<n<1B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-01-18T17:17:57Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: string
- name: url
dtype: string
- name: meta
struct:
- name: perplexity_score
dtype: float64
- name: text_length
dtype: int64
- name: domain
dtype: 'null'
- name: perplexity
dtype: float64
- name: dup_ratio
dtype: float64
- name: pairs
sequence:
sequence: int64
- name: repetitions
sequence: binary
- name: cluster
sequence: int64
splits:
- name: train
num_bytes: 1061375955254
num_examples: 364868892
download_size: 137201241092
dataset_size: 1061375955254
---
# Dataset Card for "c4-clusters"
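Given the size of this dataset (~365M rows, ~137 GB of Parquet files), streaming is a reasonable way to inspect it without a full download. A minimal sketch, assuming the schema in the `dataset_info` block above:

```python
from datasets import load_dataset

# Stream instead of downloading all Parquet shards up front.
ds = load_dataset("ola13/c4-clusters", split="train", streaming=True)

sample = next(iter(ds))
print(sample["url"])
print(sample["meta"]["perplexity_score"])
```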
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
edinburghcstr/ami | edinburghcstr | 2023-01-16T18:11:05Z | 2,254 | 49 | [
"task_categories:automatic-speech-recognition",
"multilinguality:monolingual",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"modality:audio",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:1906.11047",
"region:us"
] | [
"automatic-speech-recognition"
] | 2022-08-17T22:02:08Z | 2 | ---
annotations_creators: []
language:
- en
language_creators: []
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: AMI
size_categories: []
source_datasets: []
tags: []
task_categories:
- automatic-speech-recognition
---
# Dataset Card for AMI
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
- [Terms of Usage](#terms-of-usage)
## Dataset Description
- **Homepage:** https://groups.inf.ed.ac.uk/ami/corpus/
- **Repository:** https://github.com/kaldi-asr/kaldi/tree/master/egs/ami/s5
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [[email protected]](mailto:[email protected])
## Dataset Description
The AMI Meeting Corpus consists of 100 hours of meeting recordings. The recordings use a range of signals
synchronized to a common timeline. These include close-talking and far-field microphones, individual and
room-view video cameras, and output from a slide projector and an electronic whiteboard. During the meetings,
the participants also have unsynchronized pens available to them that record what is written. The meetings
were recorded in English using three different rooms with different acoustic properties, and include mostly
non-native speakers.
**Note**: This dataset corresponds to the data-processing of [KALDI's AMI S5 recipe](https://github.com/kaldi-asr/kaldi/tree/master/egs/ami/s5).
This means text is normalized and the audio data is chunked according to the scripts above!
To keep the user experience as simple as possible, we provide the already-chunked data here so that it can be used as follows:
### Example Usage
```python
from datasets import load_dataset
ds = load_dataset("edinburghcstr/ami", "ihm")
print(ds)
```
gives:
```
DatasetDict({
train: Dataset({
features: ['meeting_id', 'audio_id', 'text', 'audio', 'begin_time', 'end_time', 'microphone_id', 'speaker_id'],
num_rows: 108502
})
validation: Dataset({
features: ['meeting_id', 'audio_id', 'text', 'audio', 'begin_time', 'end_time', 'microphone_id', 'speaker_id'],
num_rows: 13098
})
test: Dataset({
features: ['meeting_id', 'audio_id', 'text', 'audio', 'begin_time', 'end_time', 'microphone_id', 'speaker_id'],
num_rows: 12643
})
})
```
```py
ds["train"][0]
```
automatically loads the audio into memory:
```
{'meeting_id': 'EN2001a',
'audio_id': 'AMI_EN2001a_H00_MEE068_0000557_0000594',
'text': 'OKAY',
'audio': {'path': '/cache/dir/path/downloads/extracted/2d75d5b3e8a91f44692e2973f08b4cac53698f92c2567bd43b41d19c313a5280/EN2001a/train_ami_en2001a_h00_mee068_0000557_0000594.wav',
'array': array([0. , 0. , 0. , ..., 0.00033569, 0.00030518,
0.00030518], dtype=float32),
'sampling_rate': 16000},
'begin_time': 5.570000171661377,
'end_time': 5.940000057220459,
'microphone_id': 'H00',
'speaker_id': 'MEE068'}
```
The dataset was tested for correctness by fine-tuning a Wav2Vec2-Large model on it, specifically [the `wav2vec2-large-lv60` checkpoint](https://huggingface.co/facebook/wav2vec2-large-lv60).
As can be seen from these experiments, training the model for fewer than two epochs already gives the following result.
*Result (WER)*:
| "dev" | "eval" |
|---|---|
| 25.27 | 25.21 |
as can be seen [here](https://huggingface.co/patrickvonplaten/ami-wav2vec2-large-lv60).
The results are in line with the results of published papers:
- [*Hybrid acoustic models for distant and multichannel large vocabulary speech recognition*](https://www.researchgate.net/publication/258075865_Hybrid_acoustic_models_for_distant_and_multichannel_large_vocabulary_speech_recognition)
- [Multi-Span Acoustic Modelling using Raw Waveform Signals](https://arxiv.org/abs/1906.11047)
You can run [run.sh](https://huggingface.co/patrickvonplaten/ami-wav2vec2-large-lv60/blob/main/run.sh) to reproduce the result.
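Since each chunk carries `begin_time` and `end_time`, aggregate statistics can be computed without decoding any audio. A small sketch, dropping the `audio` column to avoid decoding while iterating:

```python
from datasets import load_dataset

ami = load_dataset("edinburghcstr/ami", "ihm", split="validation")
meta = ami.remove_columns("audio")  # avoid decoding audio on row access

# Sum segment durations (in seconds) and report hours of speech.
hours = sum(ex["end_time"] - ex["begin_time"] for ex in meta) / 3600
print(f"{hours:.1f} hours of validation speech")
```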
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
#### Transcribed Subsets Size
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Citation Information
### Contributions
Thanks to [@sanchit-gandhi](https://github.com/sanchit-gandhi), [@patrickvonplaten](https://github.com/patrickvonplaten),
and [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.
## Terms of Usage
|
allenai/soda | allenai | 2023-01-04T09:24:32Z | 568 | 142 | [
"task_ids:dialogue-generation",
"language_creators:machine-generated",
"multilinguality:monolingual",
"source_datasets:original",
"source_datasets:extended|Atomic10x",
"language:en",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2212.10465",
"region:us",
"dialogue",
"narrative",
"commonsense"
] | [
"conversational"
] | 2023-01-04T08:51:53Z | null | ---
language:
- en
language_creators:
- machine-generated
annotation_creators:
- machine-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: SODA
size_categories:
- 1M<n<10M
splits:
- name: train
num_examples: 1191582
- name: valid
num_examples: 146346
- name: test
num_examples: 148968
dataset_size: 1486896
source_datasets:
- original
- extended|Atomic10x
tags:
- dialogue
- narrative
- commonsense
task_categories:
- conversational
task_ids:
- dialogue-generation
---
# Dataset Card for 🥤SODA
## Dataset Description
- **Repository:** [Code](https://github.com/skywalker023/sodaverse)
- **Paper:** [SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization](https://arxiv.org/abs/2212.10465)
- **Point of Contact:** [Hyunwoo Kim](mailto:[email protected])
## Dataset Summary
🥤SODA is the first publicly available, million-scale, high-quality dialogue dataset covering a wide range of social interactions. Dialogues are distilled from a PLM (InstructGPT; Ouyang et al., 2022) by contextualizing social commonsense knowledge from a knowledge graph (Atomic10x; West et al., 2022). Human evaluation shows that dialogues in SODA are more consistent, specific, and (surprisingly) natural than prior human-authored datasets – e.g., DailyDialog (Li et al., 2017), BlendedSkillTalk (Smith et al., 2020). Also, since social commonsense knowledge encompasses emotional reactions (i.e., the xReact `relation`), SODA includes 385K conversations labeled with 1.7K unique emotions along with information about the experiencer and the cause – i.e., `PersonX` and the `head` event in the symbolic commonsense knowledge triple.
## Languages
English
## Dataset Structure
field | type | description
--- | --- | ---
`head` | str | the head event in the symbolic commonsense knowledge triple
`relation` | str | the relationship between `head` and `tail` events
`tail` | str | the tail event in the symbolic commonsense knowledge triple
`literal` | str | the symbolic commonsense knowledge in sentence-form
`narrative` | str | narrative based on the `literal`
`dialogue` | list of str | dialogue grounded in the `narrative`
`speakers` | list of str | the speakers for each turn in the `dialogue`
`PersonX` | str | the assigned name for PersonX in the commonsense knowledge triple
`PersonY` | str\|null | the assigned name for PersonY in the commonsense knowledge triple
`PersonZ` | str\|null | the assigned name for PersonZ in the commonsense knowledge triple
`original_index` | int | the original index from Atomic10x
`split` | str | the split information: {train, valid, test}
`head_answer` | str | the answer for whether the `head` is included in the `narrative`: {Yes, Unknown}
`pmi_head_answer` | str | the answer for whether the `head` is included in the `narrative` with point-wise mutual information applied: {Yes, No, Unknown}
`relation_tail_answer` | str | the answer for whether the `relation`-`tail` is included in the `dialogue`: {Yes, No, Unknown}
`pmi_relation_tail_answer` | str | the answer for whether the `relation`-`tail` is included in the `dialogue` with point-wise mutual information applied: {Yes, No, Unknown}
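A minimal sketch for iterating over dialogues; the split names ("train", "valid", "test") follow the metadata above:

```python
from datasets import load_dataset

soda = load_dataset("allenai/soda", split="train")

# Speakers and turns are parallel lists, one entry per dialogue turn.
example = soda[0]
print(example["narrative"])
for speaker, turn in zip(example["speakers"], example["dialogue"]):
    print(f"{speaker}: {turn}")
```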
## Dataset Creation
To create 🥤SODA, we distill dialogues from InstructGPT by contextualizing social commonsense knowledge – i.e., adding context information in multiple steps: (1) Retrieve social commonsense from the symbolic commonsense knowledge graph, (2) convert it into sentence form, (3) generate a narrative from the sentence, (4) infer the speakers from the narrative, and finally (5) derive contentful conversation grounded in the narrative and speakers. Anchoring the PLM in commonsense knowledge for deriving conversations offers two key advantages: (1) minimizing nonsensical conversations and (2) maximizing diversity. For more details, please refer to our [paper](https://arxiv.org/abs/2212.10465).
### Further Details, Social Impacts, and Limitations
Please refer to our [paper](https://arxiv.org/abs/2212.10465).
## Trained Model
Using 🥤SODA, we train 🧑🏻🚀COSMO: a generalizable conversation agent outperforming previous best-performing agents on both in- and out-of-domain datasets. COSMO-3B is available [here](https://huggingface.co/allenai/cosmo-xl)!
## Additional Information
For a brief summary of our paper, please see this [tweet](https://twitter.com/hyunw__kim/status/1605400305126248448).
### Citation
Please cite our work if you find the resources in this repository useful:
```
@article{kim2022soda,
title={SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization},
author={Hyunwoo Kim and Jack Hessel and Liwei Jiang and Peter West and Ximing Lu and Youngjae Yu and Pei Zhou and Ronan Le Bras and Malihe Alikhani and Gunhee Kim and Maarten Sap and Yejin Choi},
journal={ArXiv},
year={2022},
volume={abs/2212.10465}
}
``` |
qanastek/MASSIVE | qanastek | 2022-12-23T21:28:08Z | 211 | 24 | [
"task_categories:text-classification",
"task_ids:intent-classification",
"task_ids:multi-class-classification",
"task_ids:named-entity-recognition",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:af",
"language:am",
"language:ar",
"language:az",
"language:bn",
"language:cy",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:fa",
"language:fi",
"language:fr",
"language:he",
"language:hi",
"language:hu",
"language:hy",
"language:id",
"language:is",
"language:it",
"language:ja",
"language:jv",
"language:ka",
"language:km",
"language:kn",
"language:ko",
"language:lv",
"language:ml",
"language:mn",
"language:ms",
"language:my",
"language:nb",
"language:nl",
"language:pl",
"language:pt",
"language:ro",
"language:ru",
"language:sl",
"language:sq",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:th",
"language:tl",
"language:tr",
"language:ur",
"language:vi",
"language:zh",
"size_categories:100K<n<1M",
"arxiv:2204.08582",
"region:us"
] | [
"text-classification"
] | 2022-04-23T16:23:09Z | 1 | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- found
language:
- af
- am
- ar
- az
- bn
- cy
- da
- de
- el
- en
- es
- fa
- fi
- fr
- he
- hi
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- km
- kn
- ko
- lv
- ml
- mn
- ms
- my
- nb
- nl
- pl
- pt
- ro
- ru
- sl
- sq
- sv
- sw
- ta
- te
- th
- tl
- tr
- ur
- vi
- zh
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- intent-classification
- multi-class-classification
- named-entity-recognition
pretty_name: MASSIVE
language_bcp47:
- af-ZA
- am-ET
- ar-SA
- az-AZ
- bn-BD
- cy-GB
- da-DK
- de-DE
- el-GR
- en-US
- es-ES
- fa-IR
- fi-FI
- fr-FR
- he-IL
- hi-IN
- hu-HU
- hy-AM
- id-ID
- is-IS
- it-IT
- ja-JP
- jv-ID
- ka-GE
- km-KH
- kn-IN
- ko-KR
- lv-LV
- ml-IN
- mn-MN
- ms-MY
- my-MM
- nb-NO
- nl-NL
- pl-PL
- pt-PT
- ro-RO
- ru-RU
- sl-SL
- sq-AL
- sv-SE
- sw-KE
- ta-IN
- te-IN
- th-TH
- tl-PH
- tr-TR
- ur-PK
- vi-VN
- zh-CN
- zh-TW
---
# MASSIVE: A 1M-Example Multilingual Natural Language Understanding Dataset with 51 Typologically-Diverse Languages
## Table of Contents
- [Dataset Card for MASSIVE](#massive-a-1m-example-multilingual-natural-language-understanding-dataset-with-51-typologically-diverse-languages)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [No Warranty](#no-warranty)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/alexa/massive
- **Repository:** https://github.com/alexa/massive
- **Paper:** https://arxiv.org/abs/2204.08582
- **Leaderboard:** https://eval.ai/web/challenges/challenge-page/1697/overview
- **Point of Contact:** [GitHub](https://github.com/alexa/massive/issues)
### Dataset Summary
MASSIVE is a parallel dataset of > 1M utterances across 51 languages with annotations for the Natural Language Understanding tasks of intent prediction and slot annotation. Utterances span 60 intents and include 55 slot types. MASSIVE was created by localizing the SLURP dataset, composed of general Intelligent Voice Assistant single-shot interactions.
| Name | Lang | Utt/Lang | Domains | Intents | Slots |
|:-------------------------------------------------------------------------------:|:-------:|:--------------:|:-------:|:--------:|:------:|
| MASSIVE | 51 | 19,521 | 18 | 60 | 55 |
| SLURP (Bastianelli et al., 2020) | 1 | 16,521 | 18 | 60 | 55 |
| NLU Evaluation Data (Liu et al., 2019) | 1 | 25,716 | 18 | 54 | 56 |
| Airline Travel Information System (ATIS) (Price, 1990) | 1 | 5,871 | 1 | 26 | 129 |
| ATIS with Hindi and Turkish (Upadhyay et al., 2018) | 3 | 1,315-5,871 | 1 | 26 | 129 |
| MultiATIS++ (Xu et al., 2020) | 9 | 1,422-5,897 | 1 | 21-26 | 99-140 |
| Snips (Coucke et al., 2018) | 1 | 14,484 | - | 7 | 53 |
| Snips with French (Saade et al., 2019) | 2 | 4,818 | 2 | 14-15 | 11-12 |
| Task Oriented Parsing (TOP) (Gupta et al., 2018) | 1 | 44,873 | 2 | 25 | 36 |
| Multilingual Task-Oriented Semantic Parsing (MTOP) (Li et al., 2021) | 6 | 15,195-22,288 | 11 | 104-113 | 72-75 |
| Cross-Lingual Multilingual Task Oriented Dialog (Schuster et al., 2019) | 3 | 5,083-43,323 | 3 | 12 | 11 |
| Microsoft Dialog Challenge (Li et al., 2018) | 1 | 38,276 | 3 | 11 | 29 |
| Fluent Speech Commands (FSC) (Lugosch et al., 2019) | 1 | 30,043 | - | 31 | - |
| Chinese Audio-Textual Spoken Language Understanding (CATSLU) (Zhu et al., 2019) | 1 | 16,258 | 4 | - | 94 |
### Supported Tasks and Leaderboards
The dataset can be used to train a model for `natural-language-understanding` (NLU):
- `intent-classification`
- `multi-class-classification`
- `natural-language-understanding`
### Languages
The corpus consists of parallel sentences across 51 languages:
- `Afrikaans - South Africa (af-ZA)`
- `Amharic - Ethiopia (am-ET)`
- `Arabic - Saudi Arabia (ar-SA)`
- `Azeri - Azerbaijan (az-AZ)`
- `Bengali - Bangladesh (bn-BD)`
- `Chinese - China (zh-CN)`
- `Chinese - Taiwan (zh-TW)`
- `Danish - Denmark (da-DK)`
- `German - Germany (de-DE)`
- `Greek - Greece (el-GR)`
- `English - United States (en-US)`
- `Spanish - Spain (es-ES)`
- `Farsi - Iran (fa-IR)`
- `Finnish - Finland (fi-FI)`
- `French - France (fr-FR)`
- `Hebrew - Israel (he-IL)`
- `Hungarian - Hungary (hu-HU)`
- `Armenian - Armenia (hy-AM)`
- `Indonesian - Indonesia (id-ID)`
- `Icelandic - Iceland (is-IS)`
- `Italian - Italy (it-IT)`
- `Japanese - Japan (ja-JP)`
- `Javanese - Indonesia (jv-ID)`
- `Georgian - Georgia (ka-GE)`
- `Khmer - Cambodia (km-KH)`
- `Korean - Korea (ko-KR)`
- `Latvian - Latvia (lv-LV)`
- `Mongolian - Mongolia (mn-MN)`
- `Malay - Malaysia (ms-MY)`
- `Burmese - Myanmar (my-MM)`
- `Norwegian - Norway (nb-NO)`
- `Dutch - Netherlands (nl-NL)`
- `Polish - Poland (pl-PL)`
- `Portuguese - Portugal (pt-PT)`
- `Romanian - Romania (ro-RO)`
- `Russian - Russia (ru-RU)`
- `Slovenian - Slovenia (sl-SL)`
- `Albanian - Albania (sq-AL)`
- `Swedish - Sweden (sv-SE)`
- `Swahili - Kenya (sw-KE)`
- `Hindi - India (hi-IN)`
- `Kannada - India (kn-IN)`
- `Malayalam - India (ml-IN)`
- `Tamil - India (ta-IN)`
- `Telugu - India (te-IN)`
- `Thai - Thailand (th-TH)`
- `Tagalog - Philippines (tl-PH)`
- `Turkish - Turkey (tr-TR)`
- `Urdu - Pakistan (ur-PK)`
- `Vietnamese - Vietnam (vi-VN)`
- `Welsh - United Kingdom (cy-GB)`
## Load the dataset with HuggingFace
```python
from datasets import load_dataset
dataset = load_dataset("qanastek/MASSIVE", "en-US", split='train')
print(dataset)
print(dataset[0])
```
## Dataset Structure
### Data Instances
```json
{
"id": "1",
"locale": "fr-FR",
"partition": "train",
"scenario": 16,
"intent": 48,
"utt": "réveille-moi à neuf heures du matin le vendredi",
"annot_utt": "réveille-moi à [time : neuf heures du matin] le [date : vendredi]",
"tokens": [
"réveille-moi",
"à",
"neuf",
"heures",
"du",
"matin",
"le",
"vendredi"
],
"ner_tags": [0, 0, 71, 6, 6, 6, 0, 14],
"worker_id": "22",
"slot_method": {
"slot": ["time", "date"],
"method": ["translation", "translation"]
},
"judgments": {
"worker_id": ["11", "22", "0"],
"intent_score": [2, 1, 1],
"slots_score": [1, 1, 1],
"grammar_score": [3, 4, 4],
"spelling_score": [2, 2, 2],
"language_identification": ["target", "target", "target"]
}
}
```
### Data Fields (taken from the Alexa GitHub)
`id`: maps to the original ID in the [SLURP](https://github.com/pswietojanski/slurp) collection. Mapping back to the SLURP en-US utterance, this utterance served as the basis for this localization.
`locale`: is the language and country code according to ISO-639-1 and ISO-3166.
`partition`: is either `train`, `dev`, or `test`, according to the original split in [SLURP](https://github.com/pswietojanski/slurp).
`scenario`: is the general domain, aka "scenario" in SLURP terminology, of an utterance.
`intent`: is the specific intent of an utterance within a domain, formatted as `{scenario}_{intent}`.
`utt`: the raw utterance text without annotations.
`annot_utt`: the text from `utt` with slot annotations formatted as `[{label} : {entity}]`.
`worker_id`: The obfuscated worker ID from MTurk of the worker completing the localization of the utterance. Worker IDs are specific to a locale and do *not* map across locales.
`slot_method`: for each slot in the utterance, whether that slot was a `translation` (i.e., same expression just in the target language), `localization` (i.e., not the same expression but a different expression was chosen more suitable to the phrase in that locale), or `unchanged` (i.e., the original en-US slot value was copied over without modification).
`judgments`: Each judgment collected for the localized utterance has 6 keys. `worker_id` is the obfuscated worker ID from MTurk of the worker completing the judgment. Worker IDs are specific to a locale and do *not* map across locales, but *are* consistent across the localization tasks and the judgment tasks, e.g., judgment worker ID 32 in the example above may appear as the localization worker ID for the localization of a different de-DE utterance, in which case it would be the same worker.
```plain
intent_score : "Does the sentence match the intent?"
0: No
1: Yes
2: It is a reasonable interpretation of the goal
slots_score : "Do all these terms match the categories in square brackets?"
0: No
1: Yes
2: There are no words in square brackets (utterance without a slot)
grammar_score : "Read the sentence out loud. Ignore any spelling, punctuation, or capitalization errors. Does it sound natural?"
0: Completely unnatural (nonsensical, cannot be understood at all)
1: Severe errors (the meaning cannot be understood and doesn't sound natural in your language)
2: Some errors (the meaning can be understood but it doesn't sound natural in your language)
3: Good enough (easily understood and sounds almost natural in your language)
4: Perfect (sounds natural in your language)
spelling_score : "Are all words spelled correctly? Ignore any spelling variances that may be due to differences in dialect. Missing spaces should be marked as a spelling error."
0: There are more than 2 spelling errors
1: There are 1-2 spelling errors
2: All words are spelled correctly
language_identification : "The following sentence contains words in the following languages (check all that apply)"
1: target
2: english
3: other
4: target & english
5: target & other
6: english & other
7: target & english & other
```
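Since `annot_utt` encodes slots inline as `[label : entity]`, they can be recovered with a small regular expression. A sketch (the helper below is illustrative, not part of the dataset tooling):

```python
import re

# Matches "[label : entity]" spans as produced in `annot_utt`.
SLOT_PATTERN = re.compile(r"\[\s*([^:\]]+?)\s*:\s*([^\]]+?)\s*\]")

def extract_slots(annot_utt: str):
    """Return (label, entity) pairs from an annotated utterance."""
    return SLOT_PATTERN.findall(annot_utt)

print(extract_slots(
    "réveille-moi à [time : neuf heures du matin] le [date : vendredi]"
))
# -> [('time', 'neuf heures du matin'), ('date', 'vendredi')]
```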
### Data Splits
|Language|Train|Dev|Test|
|:---:|:---:|:---:|:---:|
|af-ZA|11514|2033|2974|
|am-ET|11514|2033|2974|
|ar-SA|11514|2033|2974|
|az-AZ|11514|2033|2974|
|bn-BD|11514|2033|2974|
|cy-GB|11514|2033|2974|
|da-DK|11514|2033|2974|
|de-DE|11514|2033|2974|
|el-GR|11514|2033|2974|
|en-US|11514|2033|2974|
|es-ES|11514|2033|2974|
|fa-IR|11514|2033|2974|
|fi-FI|11514|2033|2974|
|fr-FR|11514|2033|2974|
|he-IL|11514|2033|2974|
|hi-IN|11514|2033|2974|
|hu-HU|11514|2033|2974|
|hy-AM|11514|2033|2974|
|id-ID|11514|2033|2974|
|is-IS|11514|2033|2974|
|it-IT|11514|2033|2974|
|ja-JP|11514|2033|2974|
|jv-ID|11514|2033|2974|
|ka-GE|11514|2033|2974|
|km-KH|11514|2033|2974|
|kn-IN|11514|2033|2974|
|ko-KR|11514|2033|2974|
|lv-LV|11514|2033|2974|
|ml-IN|11514|2033|2974|
|mn-MN|11514|2033|2974|
|ms-MY|11514|2033|2974|
|my-MM|11514|2033|2974|
|nb-NO|11514|2033|2974|
|nl-NL|11514|2033|2974|
|pl-PL|11514|2033|2974|
|pt-PT|11514|2033|2974|
|ro-RO|11514|2033|2974|
|ru-RU|11514|2033|2974|
|sl-SL|11514|2033|2974|
|sq-AL|11514|2033|2974|
|sv-SE|11514|2033|2974|
|sw-KE|11514|2033|2974|
|ta-IN|11514|2033|2974|
|te-IN|11514|2033|2974|
|th-TH|11514|2033|2974|
|tl-PH|11514|2033|2974|
|tr-TR|11514|2033|2974|
|ur-PK|11514|2033|2974|
|vi-VN|11514|2033|2974|
|zh-CN|11514|2033|2974|
|zh-TW|11514|2033|2974|
## Dataset Creation
### Source Data
#### Who are the source language producers?
The corpus has been produced and uploaded by Amazon Alexa.
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Additional Information
### Dataset Curators
__MASSIVE__: Jack FitzGerald and Christopher Hench and Charith Peris and Scott Mackie and Kay Rottmann and Ana Sanchez and Aaron Nash and Liam Urbach and Vishesh Kakarala and Richa Singh and Swetha Ranganath and Laurie Crist and Misha Britan and Wouter Leeuwis and Gokhan Tur and Prem Natarajan.
__SLURP__: Bastianelli, Emanuele and Vanzo, Andrea and Swietojanski, Pawel and Rieser, Verena.
__Hugging Face__: Labrak Yanis (Not affiliated with the original corpus)
### Licensing Information
```plain
Copyright Amazon.com Inc. or its affiliates.
Attribution 4.0 International
=======================================================================
Creative Commons Corporation ("Creative Commons") is not a law firm and
does not provide legal services or legal advice. Distribution of
Creative Commons public licenses does not create a lawyer-client or
other relationship. Creative Commons makes its licenses and related
information available on an "as-is" basis. Creative Commons gives no
warranties regarding its licenses, any material licensed under their
terms and conditions, or any related information. Creative Commons
disclaims all liability for damages resulting from their use to the
fullest extent possible.
Using Creative Commons Public Licenses
Creative Commons public licenses provide a standard set of terms and
conditions that creators and other rights holders may use to share
original works of authorship and other material subject to copyright
and certain other rights specified in the public license below. The
following considerations are for informational purposes only, are not
exhaustive, and do not form part of our licenses.
Considerations for licensors: Our public licenses are
intended for use by those authorized to give the public
permission to use material in ways otherwise restricted by
copyright and certain other rights. Our licenses are
irrevocable. Licensors should read and understand the terms
and conditions of the license they choose before applying it.
Licensors should also secure all rights necessary before
applying our licenses so that the public can reuse the
material as expected. Licensors should clearly mark any
material not subject to the license. This includes other CC-
licensed material, or material used under an exception or
limitation to copyright. More considerations for licensors:
wiki.creativecommons.org/Considerations_for_licensors
Considerations for the public: By using one of our public
licenses, a licensor grants the public permission to use the
licensed material under specified terms and conditions. If
the licensor's permission is not necessary for any reason--for
example, because of any applicable exception or limitation to
copyright--then that use is not regulated by the license. Our
licenses grant only permissions under copyright and certain
other rights that a licensor has authority to grant. Use of
the licensed material may still be restricted for other
reasons, including because others have copyright or other
rights in the material. A licensor may make special requests,
such as asking that all changes be marked or described.
Although not required by our licenses, you are encouraged to
respect those requests where reasonable. More considerations
for the public:
wiki.creativecommons.org/Considerations_for_licensees
=======================================================================
Creative Commons Attribution 4.0 International Public License
By exercising the Licensed Rights (defined below), You accept and agree
to be bound by the terms and conditions of this Creative Commons
Attribution 4.0 International Public License ("Public License"). To the
extent this Public License may be interpreted as a contract, You are
granted the Licensed Rights in consideration of Your acceptance of
these terms and conditions, and the Licensor grants You such rights in
consideration of benefits the Licensor receives from making the
Licensed Material available under these terms and conditions.
Section 1 -- Definitions.
a. Adapted Material means material subject to Copyright and Similar
Rights that is derived from or based upon the Licensed Material
and in which the Licensed Material is translated, altered,
arranged, transformed, or otherwise modified in a manner requiring
permission under the Copyright and Similar Rights held by the
Licensor. For purposes of this Public License, where the Licensed
Material is a musical work, performance, or sound recording,
Adapted Material is always produced where the Licensed Material is
synched in timed relation with a moving image.
b. Adapter's License means the license You apply to Your Copyright
and Similar Rights in Your contributions to Adapted Material in
accordance with the terms and conditions of this Public License.
c. Copyright and Similar Rights means copyright and/or similar rights
closely related to copyright including, without limitation,
performance, broadcast, sound recording, and Sui Generis Database
Rights, without regard to how the rights are labeled or
categorized. For purposes of this Public License, the rights
specified in Section 2(b)(1)-(2) are not Copyright and Similar
Rights.
d. Effective Technological Measures means those measures that, in the
absence of proper authority, may not be circumvented under laws
fulfilling obligations under Article 11 of the WIPO Copyright
Treaty adopted on December 20, 1996, and/or similar international
agreements.
e. Exceptions and Limitations means fair use, fair dealing, and/or
any other exception or limitation to Copyright and Similar Rights
that applies to Your use of the Licensed Material.
f. Licensed Material means the artistic or literary work, database,
or other material to which the Licensor applied this Public
License.
g. Licensed Rights means the rights granted to You subject to the
terms and conditions of this Public License, which are limited to
all Copyright and Similar Rights that apply to Your use of the
Licensed Material and that the Licensor has authority to license.
h. Licensor means the individual(s) or entity(ies) granting rights
under this Public License.
i. Share means to provide material to the public by any means or
process that requires permission under the Licensed Rights, such
as reproduction, public display, public performance, distribution,
dissemination, communication, or importation, and to make material
available to the public including in ways that members of the
public may access the material from a place and at a time
individually chosen by them.
j. Sui Generis Database Rights means rights other than copyright
resulting from Directive 96/9/EC of the European Parliament and of
the Council of 11 March 1996 on the legal protection of databases,
as amended and/or succeeded, as well as other essentially
equivalent rights anywhere in the world.
k. You means the individual or entity exercising the Licensed Rights
under this Public License. Your has a corresponding meaning.
Section 2 -- Scope.
a. License grant.
1. Subject to the terms and conditions of this Public License,
the Licensor hereby grants You a worldwide, royalty-free,
non-sublicensable, non-exclusive, irrevocable license to
exercise the Licensed Rights in the Licensed Material to:
a. reproduce and Share the Licensed Material, in whole or
in part; and
b. produce, reproduce, and Share Adapted Material.
2. Exceptions and Limitations. For the avoidance of doubt, where
Exceptions and Limitations apply to Your use, this Public
License does not apply, and You do not need to comply with
its terms and conditions.
3. Term. The term of this Public License is specified in Section
6(a).
4. Media and formats; technical modifications allowed. The
Licensor authorizes You to exercise the Licensed Rights in
all media and formats whether now known or hereafter created,
and to make technical modifications necessary to do so. The
Licensor waives and/or agrees not to assert any right or
authority to forbid You from making technical modifications
necessary to exercise the Licensed Rights, including
technical modifications necessary to circumvent Effective
Technological Measures. For purposes of this Public License,
simply making modifications authorized by this Section 2(a)
(4) never produces Adapted Material.
5. Downstream recipients.
a. Offer from the Licensor -- Licensed Material. Every
recipient of the Licensed Material automatically
receives an offer from the Licensor to exercise the
Licensed Rights under the terms and conditions of this
Public License.
b. No downstream restrictions. You may not offer or impose
any additional or different terms or conditions on, or
apply any Effective Technological Measures to, the
Licensed Material if doing so restricts exercise of the
Licensed Rights by any recipient of the Licensed
Material.
6. No endorsement. Nothing in this Public License constitutes or
may be construed as permission to assert or imply that You
are, or that Your use of the Licensed Material is, connected
with, or sponsored, endorsed, or granted official status by,
the Licensor or others designated to receive attribution as
provided in Section 3(a)(1)(A)(i).
b. Other rights.
1. Moral rights, such as the right of integrity, are not
licensed under this Public License, nor are publicity,
privacy, and/or other similar personality rights; however, to
the extent possible, the Licensor waives and/or agrees not to
assert any such rights held by the Licensor to the limited
extent necessary to allow You to exercise the Licensed
Rights, but not otherwise.
2. Patent and trademark rights are not licensed under this
Public License.
3. To the extent possible, the Licensor waives any right to
collect royalties from You for the exercise of the Licensed
Rights, whether directly or through a collecting society
under any voluntary or waivable statutory or compulsory
licensing scheme. In all other cases the Licensor expressly
reserves any right to collect such royalties.
Section 3 -- License Conditions.
Your exercise of the Licensed Rights is expressly made subject to the
following conditions.
a. Attribution.
1. If You Share the Licensed Material (including in modified
form), You must:
a. retain the following if it is supplied by the Licensor
with the Licensed Material:
i. identification of the creator(s) of the Licensed
Material and any others designated to receive
attribution, in any reasonable manner requested by
the Licensor (including by pseudonym if
designated);
ii. a copyright notice;
iii. a notice that refers to this Public License;
iv. a notice that refers to the disclaimer of
warranties;
v. a URI or hyperlink to the Licensed Material to the
extent reasonably practicable;
b. indicate if You modified the Licensed Material and
retain an indication of any previous modifications; and
c. indicate the Licensed Material is licensed under this
Public License, and include the text of, or the URI or
hyperlink to, this Public License.
2. You may satisfy the conditions in Section 3(a)(1) in any
reasonable manner based on the medium, means, and context in
which You Share the Licensed Material. For example, it may be
reasonable to satisfy the conditions by providing a URI or
hyperlink to a resource that includes the required
information.
3. If requested by the Licensor, You must remove any of the
information required by Section 3(a)(1)(A) to the extent
reasonably practicable.
4. If You Share Adapted Material You produce, the Adapter's
License You apply must not prevent recipients of the Adapted
Material from complying with this Public License.
Section 4 -- Sui Generis Database Rights.
Where the Licensed Rights include Sui Generis Database Rights that
apply to Your use of the Licensed Material:
a. for the avoidance of doubt, Section 2(a)(1) grants You the right
to extract, reuse, reproduce, and Share all or a substantial
portion of the contents of the database;
b. if You include all or a substantial portion of the database
contents in a database in which You have Sui Generis Database
Rights, then the database in which You have Sui Generis Database
Rights (but not its individual contents) is Adapted Material; and
c. You must comply with the conditions in Section 3(a) if You Share
all or a substantial portion of the contents of the database.
For the avoidance of doubt, this Section 4 supplements and does not
replace Your obligations under this Public License where the Licensed
Rights include other Copyright and Similar Rights.
Section 5 -- Disclaimer of Warranties and Limitation of Liability.
a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
c. The disclaimer of warranties and limitation of liability provided
above shall be interpreted in a manner that, to the extent
possible, most closely approximates an absolute disclaimer and
waiver of all liability.
Section 6 -- Term and Termination.
a. This Public License applies for the term of the Copyright and
Similar Rights licensed here. However, if You fail to comply with
this Public License, then Your rights under this Public License
terminate automatically.
b. Where Your right to use the Licensed Material has terminated under
Section 6(a), it reinstates:
1. automatically as of the date the violation is cured, provided
it is cured within 30 days of Your discovery of the
violation; or
2. upon express reinstatement by the Licensor.
For the avoidance of doubt, this Section 6(b) does not affect any
right the Licensor may have to seek remedies for Your violations
of this Public License.
c. For the avoidance of doubt, the Licensor may also offer the
Licensed Material under separate terms or conditions or stop
distributing the Licensed Material at any time; however, doing so
will not terminate this Public License.
d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
License.
Section 7 -- Other Terms and Conditions.
a. The Licensor shall not be bound by any additional or different
terms or conditions communicated by You unless expressly agreed.
b. Any arrangements, understandings, or agreements regarding the
Licensed Material not stated herein are separate from and
independent of the terms and conditions of this Public License.
Section 8 -- Interpretation.
a. For the avoidance of doubt, this Public License does not, and
shall not be interpreted to, reduce, limit, restrict, or impose
conditions on any use of the Licensed Material that could lawfully
be made without permission under this Public License.
b. To the extent possible, if any provision of this Public License is
deemed unenforceable, it shall be automatically reformed to the
minimum extent necessary to make it enforceable. If the provision
cannot be reformed, it shall be severed from this Public License
without affecting the enforceability of the remaining terms and
conditions.
c. No term or condition of this Public License will be waived and no
failure to comply consented to unless expressly agreed to by the
Licensor.
d. Nothing in this Public License constitutes or may be interpreted
as a limitation upon, or waiver of, any privileges and immunities
that apply to the Licensor or You, including from the legal
processes of any jurisdiction or authority.
=======================================================================
Creative Commons is not a party to its public licenses.
Notwithstanding, Creative Commons may elect to apply one of its public
licenses to material it publishes and in those instances will be
considered the “Licensor.” The text of the Creative Commons public
licenses is dedicated to the public domain under the CC0 Public Domain
Dedication. Except for the limited purpose of indicating that material
is shared under a Creative Commons public license or as otherwise
permitted by the Creative Commons policies published at
creativecommons.org/policies, Creative Commons does not authorize the
use of the trademark "Creative Commons" or any other trademark or logo
of Creative Commons without its prior written consent including,
without limitation, in connection with any unauthorized modifications
to any of its public licenses or any other arrangements,
understandings, or agreements concerning use of licensed material. For
the avoidance of doubt, this paragraph does not form part of the public
licenses.
Creative Commons may be contacted at creativecommons.org.
```
### Citation Information
Please cite the following papers when using this dataset.
```latex
@misc{fitzgerald2022massive,
title={MASSIVE: A 1M-Example Multilingual Natural Language Understanding Dataset with 51 Typologically-Diverse Languages},
author={Jack FitzGerald and Christopher Hench and Charith Peris and Scott Mackie and Kay Rottmann and Ana Sanchez and Aaron Nash and Liam Urbach and Vishesh Kakarala and Richa Singh and Swetha Ranganath and Laurie Crist and Misha Britan and Wouter Leeuwis and Gokhan Tur and Prem Natarajan},
year={2022},
eprint={2204.08582},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{bastianelli-etal-2020-slurp,
title = "{SLURP}: A Spoken Language Understanding Resource Package",
author = "Bastianelli, Emanuele and
Vanzo, Andrea and
Swietojanski, Pawel and
Rieser, Verena",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.emnlp-main.588",
doi = "10.18653/v1/2020.emnlp-main.588",
pages = "7252--7262",
abstract = "Spoken Language Understanding infers semantic meaning directly from audio data, and thus promises to reduce error propagation and misunderstandings in end-user applications. However, publicly available SLU resources are limited. In this paper, we release SLURP, a new SLU package containing the following: (1) A new challenging dataset in English spanning 18 domains, which is substantially bigger and linguistically more diverse than existing datasets; (2) Competitive baselines based on state-of-the-art NLU and ASR systems; (3) A new transparent metric for entity labelling which enables a detailed error analysis for identifying potential areas of improvement. SLURP is available at https://github.com/pswietojanski/slurp."
}
```
|
gamino/wiki_medical_terms | gamino | 2022-12-20T16:23:58Z | 804 | 90 | [
"task_categories:text-classification",
"annotations_creators:other",
"language_creators:other",
"language:en",
"license:gpl-3.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"medical",
"conditions"
] | [
"text-classification"
] | 2022-12-20T15:25:02Z | null | ---
annotations_creators:
- other
language:
- en
language_creators:
- other
license:
- gpl-3.0
multilinguality: []
pretty_name: Medical terms and their wikipedia text
size_categories:
- 1K<n<10K
source_datasets: []
tags:
- medical
- conditions
task_categories:
- text-classification
task_ids: []
---
# Dataset Card for wiki_medical_terms
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
### Dataset Summary
This dataset contains over 6,000 medical terms and their Wikipedia text. It is intended to be used on downstream tasks that require medical terms and their Wikipedia explanations.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
### Citation Information
[More Information Needed]
|
openai/webgpt_comparisons | openai | 2022-12-19T17:55:29Z | 590 | 231 | [
"size_categories:10K<n<100K",
"modality:tabular",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2112.09332",
"region:us"
] | [] | 2022-12-18T19:56:41Z | null | ---
pretty_name: WebGPT Comparisons
---
# Dataset Card for WebGPT Comparisons
## Dataset Description
In the [WebGPT paper](https://arxiv.org/abs/2112.09332), the authors trained a reward model from human feedback.
They used the reward model to train a long-form question answering model to align with human preferences.
This is the dataset of all comparisons that were marked as suitable for reward modeling by the end of the WebGPT project.
There are 19,578 comparisons in total.
Each example in the dataset contains a pair of model answers for a question, and the associated metadata.
Each answer has a preference score from humans that can be used to determine which of the two answers are better.
Overall, an example has the following fields:
* `question`: The text of the question, together with the name of the dataset from which it was taken and a unique ID.
* `quotes_0`: The extracts that the model found while browsing for `answer_0`, together with the title of the page on which the extract was found, constructed from the HTML title and domain name of the page.
* `answer_0`: The final answer that the model composed using `quotes_0`.
* `tokens_0`: The prefix that would have been given to the model in the final step of the episode to create `answer_0`, and the completion given by the model or human. The prefix is made up of the question and the quotes, with some truncation, and the completion is simply the answer. Both are tokenized using the GPT-2 tokenizer. The concatenation of the prefix and completion is the input used for reward modeling.
* `score_0`: The strength of the preference for `answer_0` over `answer_1` as a number from −1 to 1. It sums to 0 with `score_1`, and an answer is preferred if and only if its score is positive. For reward modeling, we treat scores of 0 as soft 50% labels, and all other scores as hard labels (using only their sign).
* `quotes_1`: The counterpart to `quotes_0`.
* `answer_1`: The counterpart to `answer_0`.
* `tokens_1`: The counterpart to `tokens_0`.
* `score_1`: The counterpart to `score_0`.
This information was found in Appendix K of the WebGPT paper.
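As an illustration (not part of the original card), here is a minimal sketch of loading the comparisons with the 🤗 `datasets` library and deriving a hard preference label from the score fields, following the rule above that an answer is preferred if and only if its score is positive; the split name `train` is the usual Hub convention and worth verifying:
```python
from datasets import load_dataset

# The dataset ships as a single split; "train" is the conventional name on the Hub.
ds = load_dataset("openai/webgpt_comparisons", split="train")

def preferred(example):
    """Return 0 or 1 for the preferred answer, or None for a tie (soft 50% label)."""
    if example["score_0"] > 0:
        return 0
    if example["score_1"] > 0:
        return 1
    return None  # score_0 == score_1 == 0: treated as a soft label for reward modeling

row = ds[0]
print(row["question"])   # question text plus source-dataset name and unique ID
print(preferred(row))
```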
## Citation Information
[https://arxiv.org/abs/2112.09332](https://arxiv.org/abs/2112.09332)
```
@inproceedings{nakano2021webgpt,
author = {Reiichiro Nakano and Jacob Hilton and Suchir Balaji and Jeff Wu and Long Ouyang and Christina Kim and Christopher Hesse and Shantanu Jain and Vineet Kosaraju and William Saunders and Xu Jiang and Karl Cobbe and Tyna Eloundou and Gretchen Krueger and Kevin Button and Matthew Knight and Benjamin Chess and John Schulman},
title = {WebGPT: Browser-assisted question-answering with human feedback},
booktitle = {arXiv},
year = 2021,
}
```
Dataset added to the Hugging Face Hub by [@Tristan](https://huggingface.co/Tristan) and [@natolambert](https://huggingface.co/natolambert) |
EleutherAI/lambada_openai | EleutherAI | 2022-12-16T19:53:23Z | 239,892 | 42 | [
"task_ids:language-modeling",
"language_creators:machine-generated",
"multilinguality:translation",
"source_datasets:lambada",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:it",
"license:mit",
"size_categories:10K<n<100K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2022-12-16T16:35:07Z | null | ---
pretty_name: LAMBADA OpenAI
language_creators:
- machine-generated
license: mit
multilinguality:
- translation
task_ids:
- language-modeling
source_datasets:
- lambada
size_categories:
- 1K<n<10K
language:
- de
- en
- es
- fr
- it
dataset_info:
- config_name: default
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1709449
num_examples: 5153
download_size: 1819752
dataset_size: 1709449
- config_name: de
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1904576
num_examples: 5153
download_size: 1985231
dataset_size: 1904576
- config_name: en
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1709449
num_examples: 5153
download_size: 1819752
dataset_size: 1709449
- config_name: es
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1821735
num_examples: 5153
download_size: 1902349
dataset_size: 1821735
- config_name: fr
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1948795
num_examples: 5153
download_size: 2028703
dataset_size: 1948795
- config_name: it
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1813420
num_examples: 5153
download_size: 1894613
dataset_size: 1813420
---
## Dataset Description
- **Repository:** [openai/gpt2](https://github.com/openai/gpt-2)
- **Paper:** Radford et al. [Language Models are Unsupervised Multitask Learners](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf)
### Dataset Summary
This dataset comprises the LAMBADA test split as pre-processed by OpenAI (see the relevant discussions [here](https://github.com/openai/gpt-2/issues/131#issuecomment-497136199) and [here](https://github.com/huggingface/transformers/issues/491)). It also contains machine-translated versions of the split in German, Spanish, French, and Italian.
LAMBADA is used to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative texts sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole text, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse.
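For example, a minimal sketch (an illustration, not prescribed by this card) of loading the English split and separating each passage into its context and final target word; it assumes the target is simply the last whitespace-delimited token:
```python
from datasets import load_dataset

ds = load_dataset("EleutherAI/lambada_openai", "en", split="test")
passage = ds[0]["text"]
context, _, target = passage.rpartition(" ")  # final word is the prediction target
print(repr(context[-60:]), "->", repr(target))
```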
### Languages
English, German, Spanish, French, and Italian.
### Source Data
For non-English languages, the data splits were produced by Google Translate. See the [`translation_script.py`](translation_script.py) for more details.
## Additional Information
### Hash Checksums
For data integrity checks, we provide the following checksums for the files in this dataset:
| File Name | Checksum (SHA-256) |
|--------------------------------------------------------------------------|------------------------------------------------------------------|
| lambada_test_de.jsonl | 51c6c1795894c46e88e4c104b5667f488efe79081fb34d746b82b8caa663865e |
| [openai/lambada_test.jsonl](https://openaipublic.blob.core.windows.net/gpt-2/data/lambada_test.jsonl) | 4aa8d02cd17c719165fc8a7887fddd641f43fcafa4b1c806ca8abc31fabdb226 |
| lambada_test_en.jsonl | 4aa8d02cd17c719165fc8a7887fddd641f43fcafa4b1c806ca8abc31fabdb226 |
| lambada_test_es.jsonl | ffd760026c647fb43c67ce1bc56fd527937304b348712dce33190ea6caba6f9c |
| lambada_test_fr.jsonl | 941ec6a73dba7dc91c860bf493eb66a527cd430148827a4753a4535a046bf362 |
| lambada_test_it.jsonl | 86654237716702ab74f42855ae5a78455c1b0e50054a4593fb9c6fcf7fad0850 |
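As a convenience, a small verification sketch (not part of the original card) using Python's standard library; it hashes a downloaded file in chunks and compares it against the table above (only two entries are shown in the mapping):
```python
import hashlib

EXPECTED = {
    "lambada_test_en.jsonl": "4aa8d02cd17c719165fc8a7887fddd641f43fcafa4b1c806ca8abc31fabdb226",
    "lambada_test_de.jsonl": "51c6c1795894c46e88e4c104b5667f488efe79081fb34d746b82b8caa663865e",
}

def sha256_of(path: str) -> str:
    """Stream the file in 1 MiB chunks to avoid loading it fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

name = "lambada_test_en.jsonl"
assert sha256_of(name) == EXPECTED[name], f"checksum mismatch for {name}"
```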
### Licensing
License: [Modified MIT](https://github.com/openai/gpt-2/blob/master/LICENSE)
### Citation
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
```bibtex
@misc{
author={Paperno, Denis and Kruszewski, Germán and Lazaridou, Angeliki and Pham, Quan Ngoc and Bernardi, Raffaella and Pezzelle, Sandro and Baroni, Marco and Boleda, Gemma and Fernández, Raquel},
title={The LAMBADA dataset},
DOI={10.5281/zenodo.2630551},
publisher={Zenodo},
year={2016},
month={Aug}
}
```
### Contributions
Thanks to Sid Black ([@sdtblck](https://github.com/sdtblck)) for translating the `lambada_openai` dataset into the non-English languages.
Thanks to Jonathan Tow ([@jon-tow](https://github.com/jon-tow)) for adding this dataset.
|
nthngdy/oscar-mini | nthngdy | 2022-12-06T11:05:51Z | 224 | 6 | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:oscar",
"language:af",
"language:am",
"language:ar",
"language:arz",
"language:as",
"language:az",
"language:azb",
"language:ba",
"language:be",
"language:bg",
"language:bn",
"language:bo",
"language:br",
"language:ca",
"language:ce",
"language:ceb",
"language:ckb",
"language:cs",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:dv",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:fy",
"language:ga",
"language:gl",
"language:gu",
"language:he",
"language:hi",
"language:hr",
"language:hu",
"language:hy",
"language:id",
"language:is",
"language:it",
"language:ja",
"language:ka",
"language:kk",
"language:km",
"language:kn",
"language:ko",
"language:ku",
"language:ky",
"language:la",
"language:lb",
"language:lo",
"language:lt",
"language:lv",
"language:mg",
"language:mhr",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:ms",
"language:mt",
"language:my",
"language:nds",
"language:ne",
"language:nl",
"language:nn",
"language:no",
"language:or",
"language:os",
"language:pa",
"language:pl",
"language:pnb",
"language:ps",
"language:pt",
"language:ro",
"language:ru",
"language:sa",
"language:sah",
"language:sd",
"language:sh",
"language:si",
"language:sk",
"language:sl",
"language:sq",
"language:sr",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:tg",
"language:th",
"language:tk",
"language:tl",
"language:tr",
"language:tt",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:yi",
"language:zh",
"license:cc0-1.0",
"arxiv:2010.14571",
"region:us"
] | [
"text-generation"
] | 2022-03-09T14:18:51Z | 1 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- af
- am
- ar
- arz
- as
- az
- azb
- ba
- be
- bg
- bn
- bo
- br
- ca
- ce
- ceb
- ckb
- cs
- cv
- cy
- da
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gl
- gu
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mhr
- mk
- ml
- mn
- mr
- ms
- mt
- my
- nds
- ne
- nl
- nn
- 'no'
- or
- os
- pa
- pl
- pnb
- ps
- pt
- ro
- ru
- sa
- sah
- sd
- sh
- si
- sk
- sl
- sq
- sr
- sv
- sw
- ta
- te
- tg
- th
- tk
- tl
- tr
- tt
- ug
- uk
- ur
- uz
- vi
- yi
- zh
license:
- cc0-1.0
multilinguality:
- multilingual
source_datasets:
- oscar
task_categories:
- text-generation
task_ids:
- language-modeling
paperswithcode_id: oscar
pretty_name: OSCAR
---
## WARNING: this dataset is an extract of the OSCAR dataset published here to simulate the use of the full dataset in low-resource contexts and debug codebases that would eventually use the original OSCAR dataset.
Legally speaking, using this dataset is equivalent to using a processed version of OSCAR. I take no credit for the gathering of the original data and hence refer entirely to the original dataset in the card below.
# Dataset Card for "oscar"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://oscar-corpus.com](https://oscar-corpus.com)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
OSCAR or **O**pen **S**uper-large **C**rawled [**A**LMAnaCH](https://team.inria.fr/almanach/) co**R**pus is a huge multilingual corpus obtained by language classification and filtering of the [Common Crawl](https://commoncrawl.org/) corpus using the [goclassy](https://github.com/pjox/goclassy) architecture. Data is distributed by language in both original and deduplicated form.
### Supported Tasks and Leaderboards
OSCAR is mainly intended to pretrain language models and word representations.
### Languages
All the data is distributed by language, both the original and the deduplicated versions of the data are available. 166 different languages are available. The table in subsection [Data Splits Sample Size](#data-splits-sample-size) provides the language code for each subcorpus as well as the number of words (space separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.
## Dataset Structure
We show detailed information for all the configurations of the dataset.
## Dataset Creation
### Curation Rationale
OSCAR was constructed with a new pipeline derived from [fastText's](https://github.com/facebookresearch/fastText), called [_goclassy_](https://github.com/pjox/goclassy). Goclassy reuses the [fastText linear classifier](https://fasttext.cc) and the pre-trained fastText model for language recognition, but it completely rewrites and parallelises their pipeline in an asynchronous manner.
The order of operations is more or less the same as in the fastText pre-processing pipeline, but instead of clustering multiple operations into a single blocking process, a worker is launched for each operation, with the number of parallel operations at a given time bounded by the number of available threads rather than the number of CPUs. Goclassy is implemented in the [Go programming language](https://golang.org/), so it lets the [Go runtime](https://golang.org/src/runtime/mprof.go) handle the scheduling of the processes. Thus, with the goclassy pipeline, one does not have to wait for a whole WET file to download, decompress and classify before starting to download and process the next one: a new file starts downloading and processing as soon as the scheduler is able to allocate a new process.
Filtering and cleaning processes at line level are done before feeding each line to the classifier. Lines shorter than 100 UTF-8 characters and lines containing invalid UTF-8 characters are discarded and are not classified. After all files are processed, the deduplicated versions are constructed and everything is then split into shards and compressed.
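The actual pipeline is written in Go, but the line-level filter is simple enough to sketch. The following illustrative Python snippet (not taken from goclassy; the input file name is hypothetical) applies the same two rules before a line would reach the classifier:
```python
def keep_line(raw: bytes) -> bool:
    """Line-level filter: drop lines with invalid UTF-8 and lines under 100 characters."""
    try:
        text = raw.decode("utf-8")
    except UnicodeDecodeError:
        return False           # invalid UTF-8: discarded, never classified
    return len(text) >= 100    # fewer than 100 characters: discarded

# Hypothetical usage over one plain-text shard:
with open("shard.txt", "rb") as f:
    kept = [ln for ln in f if keep_line(ln.rstrip(b"\r\n"))]
```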
### Source Data
#### Initial Data Collection and Normalization
[Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.
Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.
To construct OSCAR, the WET files of Common Crawl were used. These contain the extracted plain texts from the websites, mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR, the **November 2018** snapshot was used. It surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files, where each file consists of the plain text from multiple websites along with its metadata header.
#### Who are the source language producers?
The data comes from multiple web pages in a large variety of languages.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Since OSCAR is constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with OSCAR, especially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower-resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.
### Discussion of Biases
OSCAR is not thoroughly filtered yet, and this can be reflected in models trained on it. Care is advised, especially concerning biases of the resulting models.
### Other Known Limitations
The [fastText linear classifier](https://fasttext.cc) is limited both in performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by [third parties](https://arxiv.org/abs/2010.14571).
## Additional Information
### Dataset Curators
The corpus was put together by [Pedro J. Ortiz](https://pjortiz.eu/), [Benoît Sagot](http://pauillac.inria.fr/~sagot/), and [Laurent Romary](https://cv.archives-ouvertes.fr/laurentromary), during work done at [Inria](https://www.inria.fr/en), particularly at the [ALMAnaCH team](https://team.inria.fr/almanach/).
### Licensing Information
These data are released under this licensing scheme
We do not own any of the text from which these data has been extracted.
We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/
To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR
This work is published from: France.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply to legitimate requests by removing the affected sources from the next release of the corpus.
### Citation Information
```
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
author = "Ortiz Su{'a}rez, Pedro Javier and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
author = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
}
```
### Contributions
Thanks to [@pjox](https://github.com/pjox) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
|
kmfoda/booksum | kmfoda | 2022-11-30T12:03:43Z | 1,407 | 59 | [
"license:bsd-3-clause",
"size_categories:10K<n<100K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2105.08209",
"region:us"
] | [] | 2022-03-02T23:29:22Z | 1 | ---
license:
- bsd-3-clause
train-eval-index:
- config: kmfoda--booksum
task: summarization
task_id: summarization
splits:
eval_split: test
col_mapping:
chapter: text
summary_text: target
---
# BOOKSUM: A Collection of Datasets for Long-form Narrative Summarization
Authors: [Wojciech Kryściński](https://twitter.com/iam_wkr), [Nazneen Rajani](https://twitter.com/nazneenrajani), [Divyansh Agarwal](https://twitter.com/jigsaw2212), [Caiming Xiong](https://twitter.com/caimingxiong), [Dragomir Radev](http://www.cs.yale.edu/homes/radev/)
## Introduction
The majority of available text summarization datasets include short-form source documents that lack long-range causal and temporal dependencies, and often contain strong layout and stylistic biases.
While relevant, such datasets will offer limited challenges for future generations of text summarization systems.
We address these issues by introducing BookSum, a collection of datasets for long-form narrative summarization.
Our dataset covers source documents from the literature domain, such as novels, plays and stories, and includes highly abstractive, human written summaries on three levels of granularity of increasing difficulty: paragraph-, chapter-, and book-level.
The domain and structure of our dataset poses a unique set of challenges for summarization systems, which include: processing very long documents, non-trivial causal and temporal dependencies, and rich discourse structures.
To facilitate future work, we trained and evaluated multiple extractive and abstractive summarization models as baselines for our dataset.
## Links
- [paper](https://arxiv.org/abs/2105.08209) by SalesForce Research
- [GitHub repo](https://github.com/salesforce/booksum)
<p align="center"><img src="misc/book_sumv4.png"></p>
## Table of Contents
1. [Citation](#citation)
2. [Legal Note](#legal-note)
3. [License](#license)
## Citation
```
@article{kryscinski2021booksum,
title={BookSum: A Collection of Datasets for Long-form Narrative Summarization},
author={Wojciech Kry{\'s}ci{\'n}ski and Nazneen Rajani and Divyansh Agarwal and Caiming Xiong and Dragomir Radev},
year={2021},
eprint={2105.08209},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Legal Note
By downloading or using the resources, including any code or scripts, shared in this code
repository, you hereby agree to the following terms, and your use of the resources is conditioned
on and subject to these terms.
1. You may only use the scripts shared in this code repository for research purposes. You
may not use or allow others to use the scripts for any other purposes and other uses are
expressly prohibited.
2. You will comply with all terms and conditions, and are responsible for obtaining all
rights, related to the services you access and the data you collect.
3. We do not make any representations or warranties whatsoever regarding the sources from
which data is collected. Furthermore, we are not liable for any damage, loss or expense of
any kind arising from or relating to your use of the resources shared in this code
repository or the data collected, regardless of whether such liability is based in tort,
contract or otherwise.
## License
The code is released under the **BSD-3 License** (see `LICENSE.txt` for details). |
Yale-LILY/dart | Yale-LILY | 2022-11-18T19:57:00Z | 543 | 6 | [
"task_categories:tabular-to-text",
"task_ids:rdf-to-text",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"source_datasets:extended|wikitable_questions",
"source_datasets:extended|wikisql",
"source_datasets:extended|web_nlg",
"source_datasets:extended|cleaned_e2e",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"arxiv:2007.02871",
"region:us"
] | [
"tabular-to-text"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- machine-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|wikitable_questions
- extended|wikisql
- extended|web_nlg
- extended|cleaned_e2e
task_categories:
- tabular-to-text
task_ids:
- rdf-to-text
paperswithcode_id: dart
pretty_name: DART
dataset_info:
features:
- name: tripleset
sequence:
sequence: string
- name: subtree_was_extended
dtype: bool
- name: annotations
sequence:
- name: source
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 12966443
num_examples: 30526
- name: validation
num_bytes: 1458106
num_examples: 2768
- name: test
num_bytes: 2657644
num_examples: 5097
download_size: 29939366
dataset_size: 17082193
---
# Dataset Card for DART
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [homepage](https://github.com/Yale-LILY/dart)
- **Repository:** [github](https://github.com/Yale-LILY/dart)
- **Paper:** [paper](https://arxiv.org/abs/2007.02871)
- **Leaderboard:** [leaderboard](https://github.com/Yale-LILY/dart#leaderboard)
### Dataset Summary
DART is a large dataset for open-domain structured data record to text generation. We consider the structured data record input as a set of RDF entity-relation triples, a format widely used for knowledge representation and semantics description. DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set. This hierarchical, structured format with its open-domain nature differentiates DART from other existing table-to-text corpora.
### Supported Tasks and Leaderboards
The task associated to DART is text generation from data records that are RDF triplets:
- `rdf-to-text`: The dataset can be used to train a model for text generation from RDF triplets, which consists of generating textual descriptions of structured data. Success on this task is typically measured by achieving a *high* [BLEU](https://huggingface.co/metrics/bleu), [METEOR](https://huggingface.co/metrics/meteor), [BLEURT](https://huggingface.co/metrics/bleurt), [TER](https://huggingface.co/metrics/ter), [MoverScore](https://huggingface.co/metrics/mover_score), and [BERTScore](https://huggingface.co/metrics/bert_score). The [BART-large model](https://huggingface.co/facebook/bart-large) (from [BART](https://huggingface.co/transformers/model_doc/bart.html)) currently achieves the following scores:
| | BLEU | METEOR | TER | MoverScore | BERTScore | BLEURT |
| ----- | ----- | ------ | ---- | ----------- | ---------- | ------ |
| BART | 37.06 | 0.36 | 0.57 | 0.44 | 0.92 | 0.22 |
This task has an active leaderboard which can be found [here](https://github.com/Yale-LILY/dart#leaderboard) and ranks models based on the above metrics.
### Languages
The dataset is in english (en).
## Dataset Structure
### Data Instances
Here is an example from the dataset:
```
{'annotations': {'source': ['WikiTableQuestions_mturk'],
'text': ['First Clearing\tbased on Callicoon, New York and location at On NYS 52 1 Mi. Youngsville']},
'subtree_was_extended': False,
'tripleset': [['First Clearing', 'LOCATION', 'On NYS 52 1 Mi. Youngsville'],
['On NYS 52 1 Mi. Youngsville', 'CITY_OR_TOWN', 'Callicoon, New York']]}
```
It contains one annotation where the textual description is 'First Clearing\tbased on Callicoon, New York and location at On NYS 52 1 Mi. Youngsville'. The RDF triplets considered to generate this description are in tripleset and are formatted as subject, predicate, object.
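To feed such an instance to a sequence-to-sequence model, the triple set is usually linearized into a single string. One possible scheme (an illustration, not the canonical DART preprocessing) is shown below, applied to the example instance:
```python
def linearize(tripleset):
    """Flatten a list of (subject, predicate, object) triples into one string."""
    return " | ".join(" : ".join(triple) for triple in tripleset)

tripleset = [
    ["First Clearing", "LOCATION", "On NYS 52 1 Mi. Youngsville"],
    ["On NYS 52 1 Mi. Youngsville", "CITY_OR_TOWN", "Callicoon, New York"],
]
print(linearize(tripleset))
# First Clearing : LOCATION : On NYS 52 1 Mi. Youngsville | On NYS 52 1 Mi. Youngsville : CITY_OR_TOWN : Callicoon, New York
```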
### Data Fields
The different fields are:
- `annotations`:
- `text`: list of text descriptions of the triplets
- `source`: list of sources of the RDF triplets (WikiTable, e2e, etc.)
- `subtree_was_extended`: boolean indicating whether the subtree considered during the dataset construction was extended. Sometimes this field is missing, and it is therefore set to `None`
- `tripleset`: RDF triplets as a list of triplets of strings (subject, predicate, object)
### Data Splits
There are three splits, train, validation and test:
| | train | validation | test |
| ----- |------:|-----------:|-----:|
| N. Examples | 30526 | 2768 | 6959 |
## Dataset Creation
### Curation Rationale
Automatically generating textual descriptions from structured data inputs is crucial to improving the accessibility of knowledge bases to lay users.
### Source Data
DART comes from existing datasets that cover a variety of different domains while allowing the construction of a tree ontology and the formation of RDF triple sets as semantic representations. The datasets used are WikiTableQuestions, WikiSQL, WebNLG and Cleaned E2E.
#### Initial Data Collection and Normalization
DART is constructed using multiple complementary methods: (1) human annotation on open-domain Wikipedia tables
from WikiTableQuestions (Pasupat and Liang, 2015) and WikiSQL (Zhong et al., 2017), (2) automatic conversion of questions in WikiSQL to declarative sentences, and (3) incorporation of existing datasets including WebNLG 2017 (Gardent et al., 2017a,b; Shimorina and Gardent, 2018) and Cleaned E2E (Novikova et al., 2017b; Dušek et al., 2018, 2019)
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
The two-stage annotation process for constructing tripleset-sentence pairs is based on a tree-structured ontology of each table.
First, internal skilled annotators denote the parent column for each column header.
Then, a larger number of annotators provide a sentential description of an automatically chosen subset of table cells in a row.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Under MIT license (see [here](https://github.com/Yale-LILY/dart/blob/master/LICENSE))
### Citation Information
```
@article{radev2020dart,
title={DART: Open-Domain Structured Data Record to Text Generation},
author={Dragomir Radev and Rui Zhang and Amrit Rau and Abhinand Sivaprasad and Chiachun Hsieh and Nazneen Fatema Rajani and Xiangru Tang and Aadit Vyas and Neha Verma and Pranav Krishna and Yangxiaokang Liu and Nadia Irwanto and Jessica Pan and Faiaz Rahman and Ahmad Zaidi and Murori Mutuma and Yasin Tarabar and Ankit Gupta and Tao Yu and Yi Chern Tan and Xi Victoria Lin and Caiming Xiong and Richard Socher},
journal={arXiv preprint arXiv:2007.02871},
  year={2020}
}
```
### Contributions
Thanks to [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
PlanTL-GOB-ES/cantemist-ner | PlanTL-GOB-ES | 2022-11-18T12:08:17Z | 105 | 8 | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us",
"biomedical",
"clinical",
"spanish"
] | [
"token-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language:
- es
tags:
- biomedical
- clinical
- spanish
multilinguality:
- monolingual
task_categories:
- token-classification
task_ids:
- named-entity-recognition
license:
- cc-by-4.0
---
# CANTEMIST
## Dataset Description
Manually classified collection of Spanish oncological clinical case reports.
- **Homepage:** [zenodo](https://zenodo.org/record/3978041)
- **Paper:** [Named Entity Recognition, Concept Normalization and Clinical Coding: Overview of the Cantemist Track for Cancer Text Mining in Spanish, Corpus, Guidelines, Methods and Results](https://www.researchgate.net/profile/Antonio-Miranda-Escalada-2/publication/352786464_Named_Entity_Recognition_Concept_Normalization_and_Clinical_Coding_Overview_of_the_Cantemist_Track_for_Cancer_Text_Mining_in_Spanish_Corpus_Guidelines_Methods_and_Results/links/60d98a3b458515d6fbe382d8/Named-Entity-Recognition-Concept-Normalization-and-Clinical-Coding-Overview-of-the-Cantemist-Track-for-Cancer-Text-Mining-in-Spanish-Corpus-Guidelines-Methods-and-Results.pdf)
- **Point of Contact:** [email protected]
### Dataset Summary
Collection of 1301 oncological clinical case reports written in Spanish, with tumor morphology mentions manually annotated and mapped by clinical experts to a controlled terminology. Every tumor morphology mention is linked to an eCIE-O code (the Spanish equivalent of ICD-O).
The training subset contains 501 documents, the development subsets 500, and the test subset 300. The original dataset is distributed in [Brat](https://brat.nlplab.org/standoff.html) format.
This dataset was designed for the CANcer TExt Mining Shared Task, sponsored by [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
For further information, please visit [the official website](https://temu.bsc.es/cantemist/).
### Supported Tasks
Named Entity Recognition (NER)
### Languages
- Spanish (es)
### Directory Structure
* README.md
* cantemist.py
* train.conll
* dev.conll
* test.conll
## Dataset Structure
### Data Instances
Three four-column files, one for each split.
### Data Fields
Every file has 4 columns:
* 1st column: Word form or punctuation symbol
* 2nd column: Original BRAT file name
* 3rd column: Spans
* 4th column: IOB tag
#### Example
<pre>
El cc_onco101 662_664 O
informe cc_onco101 665_672 O
HP cc_onco101 673_675 O
es cc_onco101 676_678 O
compatible cc_onco101 679_689 O
con cc_onco101 690_693 O
adenocarcinoma cc_onco101 694_708 B-MORFOLOGIA_NEOPLASIA
moderadamente cc_onco101 709_722 I-MORFOLOGIA_NEOPLASIA
diferenciado cc_onco101 723_735 I-MORFOLOGIA_NEOPLASIA
que cc_onco101 736_739 O
afecta cc_onco101 740_746 O
a cc_onco101 747_748 O
grasa cc_onco101 749_754 O
peripancreática cc_onco101 755_770 O
sobrepasando cc_onco101 771_783 O
la cc_onco101 784_786 O
serosa cc_onco101 787_793 O
, cc_onco101 793_794 O
infiltración cc_onco101 795_807 O
perineural cc_onco101 808_818 O
. cc_onco101 818_819 O
</pre>
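A minimal reader for these four-column files might look as follows (an illustrative sketch, not part of the official distribution; it assumes whitespace-separated columns as in the example above and tolerates blank separator lines):
```python
def read_conll(path):
    """Parse a four-column CANTEMIST file into (form, doc, span, tag) tuples."""
    rows = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:          # skip blank separator lines, if any
                continue
            form, doc, span, tag = line.split()
            rows.append((form, doc, span, tag))
    return rows

for form, doc, span, tag in read_conll("train.conll")[:5]:
    print(form, doc, span, tag)
```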
### Data Splits
| Split | Size |
| ------------- | ------------- |
| `train` | 19,397 |
| `dev` | 18,165 |
| `test` | 11,168 |
## Dataset Creation
### Curation Rationale
For compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible.
### Source Data
#### Initial Data Collection and Normalization
The selected clinical case reports are fairly similar to hospital health records. To increase the usefulness and practical relevance of the CANTEMIST corpus, we selected clinical cases affecting all genders and that comprised most ages (from children to the elderly) and of various complexity levels (solid tumors, hemato-oncological malignancies, neuroendocrine cancer...).
The CANTEMIST cases include clinical signs and symptoms, personal and family history, current illness, physical examination, complementary tests (blood tests, imaging, pathology), diagnosis, treatment (including adverse effects of chemotherapy), evolution and outcome.
#### Who are the source language producers?
Humans, there is no machine generated data.
### Annotations
#### Annotation process
The manual annotation of the Cantemist corpus was performed by clinical experts following the Cantemist guidelines (for more detail refer to this [paper](http://ceur-ws.org/Vol-2664/cantemist_overview.pdf)). These guidelines contain rules for annotating morphology neoplasms in Spanish oncology clinical cases, as well as for mapping these annotations to eCIE-O.
A medical doctor was regularly consulted by annotators (scientists with PhDs on cancer-related subjects) for the most difficult pathology expressions. This same doctor periodically checked a random selection of annotated clinical records and these annotations were compared and discussed with the annotators. To normalize a selection of very complex cases, MD specialists in pathology from one of the largest university hospitals in Spain were consulted.
#### Who are the annotators?
Clinical experts.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This corpus contributes to the development of medical language models in Spanish.
### Discussion of Biases
Not applicable.
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]).
For further information, send an email to ([email protected]).
This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://avancedigital.mineco.gob.es/en-us/Paginas/index.aspx) within the framework of the [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
### Licensing information
This work is licensed under [CC Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) License.
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Citation Information
```bibtex
@article{cantemist,
title={Named Entity Recognition, Concept Normalization and Clinical Coding: Overview of the Cantemist Track for Cancer Text Mining in Spanish, Corpus, Guidelines, Methods and Results.},
author={Miranda-Escalada, Antonio and Farr{\'e}, Eul{\`a}lia and Krallinger, Martin},
journal={IberLEF@ SEPLN},
pages={303--323},
year={2020}
}
```
### Contributions
[N/A]
|
SoLID/shellcode_i_a32 | SoLID | 2022-11-17T19:53:43Z | 104 | 10 | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language_creators:found",
"multilinguality:translation",
"source_datasets:original",
"language:code",
"language:en",
"license:gpl-3.0",
"size_categories:1K<n<10K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2104.13100",
"region:us"
] | [
"text-generation"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
- found
language:
- code
- en
license:
- gpl-3.0
multilinguality:
- translation
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
paperswithcode_id: shellcode-ia32
---
# Shellcode_IA32
___Shellcode_IA32___ is a dataset containing _20_ years of shellcodes from a variety of sources; it is the largest collection of shellcodes in assembly available to date.
This dataset consists of 3,200 examples of instructions in assembly language for _IA-32_ (the 32-bit version of the x86 Intel Architecture) from publicly available security exploits. We collected assembly programs used to generate shellcode from [exploit-db](https://www.exploit-db.com/shellcodes?platform=linux_x86) and from [shell-storm](http://shell-storm.org/shellcode/).
We enriched the dataset by adding examples of assembly programs for the _IA-32_ architecture from popular tutorials and books. This allowed us to understand how different authors and assembly experts comment and, thus, how to deal with the ambiguity of natural language in this specific context. Our dataset consists of 10% of instructions collected from books and guidelines, and the rest from real shellcodes.
Our focus is on Linux, the most common OS for security-critical network services. Accordingly, we added assembly instructions written with _Netwide Assembler_ (NASM) for Linux.
Each line of the _Shellcode\_IA32_ dataset represents a snippet-intent pair. The _snippet_ is a line or a combination of multiple lines of assembly code, built by following the NASM syntax. The _intent_ is a comment in the English language.
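For a quick look at these snippet-intent pairs, a short sketch with the 🤗 `datasets` library is given below; the split name is an assumption, so inspect the loaded object before relying on specific column names:
```python
from datasets import load_dataset

# Split and column names are assumptions; inspect ds.features before relying on them.
ds = load_dataset("SoLID/shellcode_i_a32", split="train")
print(ds.features)

for row in ds.select(range(3)):
    print(row)  # each row pairs an assembly snippet with its English-language intent
```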
Further statistics on the dataset and a set of preliminary experiments performed with a neural machine translation (NMT) model are described in the following paper: [Shellcode_IA32: A Dataset for Automatic Shellcode Generation](https://arxiv.org/abs/2104.13100).
**Note**: This work was done in collaboration with the [DESSERT Lab](http://www.dessert.unina.it/).
The dataset is also hosted on the [DESSERT Lab GitHub](https://github.com/dessertlab/Shellcode_IA32).
Please consider citing our work:
```
@inproceedings{liguori-etal-2021-shellcode,
title = "{S}hellcode{\_}{IA}32: A Dataset for Automatic Shellcode Generation",
author = "Liguori, Pietro and
Al-Hossami, Erfan and
Cotroneo, Domenico and
Natella, Roberto and
Cukic, Bojan and
Shaikh, Samira",
booktitle = "Proceedings of the 1st Workshop on Natural Language Processing for Programming (NLP4Prog 2021)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.nlp4prog-1.7",
doi = "10.18653/v1/2021.nlp4prog-1.7",
pages = "58--64",
abstract = "We take the first step to address the task of automatically generating shellcodes, i.e., small pieces of code used as a payload in the exploitation of a software vulnerability, starting from natural language comments. We assemble and release a novel dataset (Shellcode{\_}IA32), consisting of challenging but common assembly instructions with their natural language descriptions. We experiment with standard methods in neural machine translation (NMT) to establish baseline performance levels on this task.",
}
```
|
copenlu/fever_gold_evidence | copenlu | 2022-11-17T11:42:54Z | 226 | 11 | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:extended|fever",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | 2022-04-02T14:52:35Z | 1 | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- machine-generated
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
paperswithcode_id: fever
pretty_name: ''
size_categories:
- 100K<n<1M
source_datasets:
- extended|fever
task_categories:
- text-classification
task_ids:
- fact-checking
---
# Dataset Card for fever_gold_evidence
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/copenlu/fever-adversarial-attacks
- **Repository:** https://github.com/copenlu/fever-adversarial-attacks
- **Paper:** https://aclanthology.org/2020.emnlp-main.256/
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Dataset for training classification-only fact checking with claims from the FEVER dataset.
This dataset is used in the paper "Generating Label Cohesive and Well-Formed Adversarial Claims", EMNLP 2020
The evidence is the gold evidence from the FEVER dataset for *REFUTE* and *SUPPORT* claims.
For *NEI* claims, we extract evidence sentences with the system in "Christopher Malon. 2018. Team Papelo: Transformer Networks at FEVER. In Proceedings of the
First Workshop on Fact Extraction and VERification (FEVER), pages 109-113."
More details can be found in https://github.com/copenlu/fever-adversarial-attacks
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@inproceedings{atanasova-etal-2020-generating,
title = "Generating Label Cohesive and Well-Formed Adversarial Claims",
author = "Atanasova, Pepa and
Wright, Dustin and
Augenstein, Isabelle",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.emnlp-main.256",
doi = "10.18653/v1/2020.emnlp-main.256",
pages = "3168--3177",
abstract = "Adversarial attacks reveal important vulnerabilities and flaws of trained models. One potent type of attack are universal adversarial triggers, which are individual n-grams that, when appended to instances of a class under attack, can trick a model into predicting a target class. However, for inference tasks such as fact checking, these triggers often inadvertently invert the meaning of instances they are inserted in. In addition, such attacks produce semantically nonsensical inputs, as they simply concatenate triggers to existing samples. Here, we investigate how to generate adversarial attacks against fact checking systems that preserve the ground truth meaning and are semantically valid. We extend the HotFlip attack algorithm used for universal trigger generation by jointly minimizing the target class loss of a fact checking model and the entailment class loss of an auxiliary natural language inference model. We then train a conditional language model to generate semantically valid statements, which include the found universal triggers. We find that the generated attacks maintain the directionality and semantic validity of the claim better than previous work.",
}
``` |
ucberkeley-dlab/measuring-hate-speech | ucberkeley-dlab | 2022-11-15T15:44:31Z | 1,190 | 35 | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"task_ids:sentiment-classification",
"task_ids:multi-label-classification",
"annotations_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2009.10277",
"doi:10.57967/hf/2710",
"region:us",
"arxiv:2009.10277",
"counterspeech",
"hate-speech",
"text-regression",
"irt"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
- sentiment-classification
- multi-label-classification
pretty_name: measuring-hate-speech
tags:
- arxiv:2009.10277
- counterspeech
- hate-speech
- text-regression
- irt
---
## Dataset Description
- **Homepage:** http://hatespeech.berkeley.edu
- **Paper:** https://arxiv.org/abs/2009.10277
# Dataset card for _Measuring Hate Speech_
This is a public release of the dataset described in Kennedy et al. (2020) and Sachdeva et al. (2022), consisting of 39,565 comments annotated by 7,912 annotators, for 135,556 combined rows. The primary outcome variable is the "hate speech score" but the 10 constituent ordinal labels (sentiment, (dis)respect, insult, humiliation, inferior status, violence, dehumanization, genocide, attack/defense, hate speech benchmark) can also be treated as outcomes. Includes 8 target identity groups (race/ethnicity, religion, national origin/citizenship, gender, sexual orientation, age, disability, political ideology) and 42 target identity subgroups, as well as 6 annotator demographics and 40 subgroups. The hate speech score incorporates an IRT adjustment by estimating variation in annotator interpretation of the labeling guidelines.
This dataset card is a work in progress and will be improved over time.
## Key dataset columns
* hate_speech_score - continuous hate speech measure, where higher = more hateful and lower = less hateful. > 0.5 is approximately hate speech, < -1 is counter or supportive speech, and -1 to +0.5 is neutral or ambiguous.
* text - lightly processed text of a social media post
* comment\_id - unique ID for each comment
* annotator\_id - unique ID for each annotator
* sentiment - ordinal label that is combined into the continuous score
* respect - ordinal label that is combined into the continuous score
* insult - ordinal label that is combined into the continuous score
* humiliate - ordinal label that is combined into the continuous score
* status - ordinal label that is combined into the continuous score
* dehumanize - ordinal label that is combined into the continuous score
* violence - ordinal label that is combined into the continuous score
* genocide - ordinal label that is combined into the continuous score
* attack\_defend - ordinal label that is combined into the continuous score
* hatespeech - ordinal label that is combined into the continuous score
* annotator_severity - annotator's estimated survey interpretation bias
## Code to download
The dataset can be downloaded using the following python code:
```python
import datasets
dataset = datasets.load_dataset('ucberkeley-dlab/measuring-hate-speech', 'binary')
df = dataset['train'].to_pandas()
df.describe()
```
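As a usage sketch, the approximate cut points quoted above can turn `hate_speech_score` into a coarse three-way label (the thresholds and column name come from this card; the exact bin boundaries are an illustrative choice):
```python
import pandas as pd

# Sketch: bucket the continuous score with the approximate cut points above
# (> 0.5 hate speech, < -1 counter/supportive, in between neutral/ambiguous).
df["coarse_label"] = pd.cut(
    df["hate_speech_score"],
    bins=[float("-inf"), -1, 0.5, float("inf")],
    labels=["counter/supportive", "neutral/ambiguous", "hate speech"],
)
print(df["coarse_label"].value_counts())
```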
## Citation
```
@article{kennedy2020constructing,
title={Constructing interval variables via faceted Rasch measurement and multitask deep learning: a hate speech application},
author={Kennedy, Chris J and Bacon, Geoff and Sahn, Alexander and von Vacano, Claudia},
journal={arXiv preprint arXiv:2009.10277},
year={2020}
}
```
## Contributions
Dataset curated by [@ck37](https://github.com/ck37), [@pssachdeva](https://github.com/pssachdeva), et al.
## References
Kennedy, C. J., Bacon, G., Sahn, A., & von Vacano, C. (2020). [Constructing interval variables via faceted Rasch measurement and multitask deep learning: a hate speech application](https://arxiv.org/abs/2009.10277). arXiv preprint arXiv:2009.10277.
Pratik Sachdeva, Renata Barreto, Geoff Bacon, Alexander Sahn, Claudia von Vacano, and Chris Kennedy. 2022. [The Measuring Hate Speech Corpus: Leveraging Rasch Measurement Theory for Data Perspectivism](https://aclanthology.org/2022.nlperspectives-1.11/). In *Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022*, pages 83–94, Marseille, France. European Language Resources Association. |
TheGreatRambler/mm2_level | TheGreatRambler | 2022-11-11T08:07:34Z | 11,831 | 9 | [
"task_categories:other",
"task_categories:object-detection",
"task_categories:text-retrieval",
"task_categories:token-classification",
"task_categories:text-generation",
"multilinguality:multilingual",
"source_datasets:original",
"language:multilingual",
"license:cc-by-nc-sa-4.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"text-mining"
] | [
"other",
"object-detection",
"text-retrieval",
"token-classification",
"text-generation"
] | 2022-09-18T20:15:00Z | null | ---
language:
- multilingual
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- other
- object-detection
- text-retrieval
- token-classification
- text-generation
task_ids: []
pretty_name: Mario Maker 2 levels
tags:
- text-mining
---
# Mario Maker 2 levels
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)
## Dataset Description
The Mario Maker 2 levels dataset consists of 26.6 million levels from Nintendo's online service totaling around 100GB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.
### How to use it
The Mario Maker 2 levels dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code:
```python
from datasets import load_dataset
ds = load_dataset("TheGreatRambler/mm2_level", streaming=True, split="train")
print(next(iter(ds)))
#OUTPUT:
{
'data_id': 3000004,
'name': 'カベキック',
'description': 'カベキックをとにかくするコースです。',
'uploaded': 1561644329,
'created': 1561674240,
'gamestyle': 4,
'theme': 0,
'difficulty': 0,
'tag1': 7,
'tag2': 10,
'game_version': 1,
'world_record': 8049,
'upload_time': 193540,
'upload_attempts': 1,
'num_comments': 60,
'clear_condition': 0,
'clear_condition_magnitude': 0,
'timer': 300,
'autoscroll_speed': 0,
'clears': 1646,
'attempts': 3168,
'clear_rate': 51.957070707070706,
'plays': 1704,
'versus_matches': 80,
'coop_matches': 27,
'likes': 152,
'boos': 118,
'unique_players_and_versus': 1391,
'weekly_likes': 0,
'weekly_plays': 1,
'uploader_pid': '5218390885570355093',
'first_completer_pid': '16824392528839047213',
'record_holder_pid': '5411258160547085075',
'level_data': [some binary data],
'unk2': 0,
'unk3': [some binary data],
'unk9': 3,
'unk10': 4,
'unk11': 1,
'unk12': 1
}
```
Level data is a binary blob describing the actual level and is equivalent to the level format Nintendo uses in-game. It is gzip compressed and needs to be decompressed to be read. To read it you only need to use the provided `level.ksy` kaitai struct file and install the kaitai struct runtime to parse it into an object:
```python
from datasets import load_dataset
from kaitaistruct import KaitaiStream
from io import BytesIO
from level import Level
import zlib
ds = load_dataset("TheGreatRambler/mm2_level", streaming=True, split="train")
level_data = next(iter(ds))["level_data"]
level = Level(KaitaiStream(BytesIO(zlib.decompress(level_data))))
# NOTE level.overworld.objects is a fixed size (limitation of Kaitai struct)
# must iterate by object_count or null objects will be included
for i in range(level.overworld.object_count):
obj = level.overworld.objects[i]
print("X: %d Y: %d ID: %s" % (obj.x, obj.y, obj.id))
#OUTPUT:
X: 1200 Y: 400 ID: ObjId.block
X: 1360 Y: 400 ID: ObjId.block
X: 1360 Y: 240 ID: ObjId.block
X: 1520 Y: 240 ID: ObjId.block
X: 1680 Y: 240 ID: ObjId.block
X: 1680 Y: 400 ID: ObjId.block
X: 1840 Y: 400 ID: ObjId.block
X: 2000 Y: 400 ID: ObjId.block
X: 2160 Y: 400 ID: ObjId.block
X: 2320 Y: 400 ID: ObjId.block
X: 2480 Y: 560 ID: ObjId.block
X: 2480 Y: 720 ID: ObjId.block
X: 2480 Y: 880 ID: ObjId.block
X: 2160 Y: 880 ID: ObjId.block
```
Rendering the level data into an image can be done using [Toost](https://github.com/TheGreatRambler/toost) if desired.
You can also download the full dataset. Note that this will download ~100GB:
```python
ds = load_dataset("TheGreatRambler/mm2_level", split="train")
```
## Data Structure
### Data Instances
```python
{
'data_id': 3000004,
'name': 'カベキック',
'description': 'カベキックをとにかくするコースです。',
'uploaded': 1561644329,
'created': 1561674240,
'gamestyle': 4,
'theme': 0,
'difficulty': 0,
'tag1': 7,
'tag2': 10,
'game_version': 1,
'world_record': 8049,
'upload_time': 193540,
'upload_attempts': 1,
'num_comments': 60,
'clear_condition': 0,
'clear_condition_magnitude': 0,
'timer': 300,
'autoscroll_speed': 0,
'clears': 1646,
'attempts': 3168,
'clear_rate': 51.957070707070706,
'plays': 1704,
'versus_matches': 80,
'coop_matches': 27,
'likes': 152,
'boos': 118,
'unique_players_and_versus': 1391,
'weekly_likes': 0,
'weekly_plays': 1,
'uploader_pid': '5218390885570355093',
'first_completer_pid': '16824392528839047213',
'record_holder_pid': '5411258160547085075',
'level_data': [some binary data],
'unk2': 0,
'unk3': [some binary data],
'unk9': 3,
'unk10': 4,
'unk11': 1,
'unk12': 1
}
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|data_id|int|Data IDs are unique identifiers; gaps in the table are due to levels deleted by users or Nintendo|
|name|string|Course name|
|description|string|Course description|
|uploaded|int|UTC timestamp for when the level was uploaded|
|created|int|Local timestamp for when the level was created|
|gamestyle|int|Gamestyle, enum below|
|theme|int|Theme, enum below|
|difficulty|int|Difficulty, enum below|
|tag1|int|The first tag, if it exists, enum below|
|tag2|int|The second tag, if it exists, enum below|
|game_version|int|The version of the game this level was made on|
|world_record|int|The world record in milliseconds|
|upload_time|int|The upload time in milliseconds|
|upload_attempts|int|The number of attempts it took the uploader to upload|
|num_comments|int|Number of comments, may not reflect the archived comments if there were more than 1000 comments|
|clear_condition|int|Clear condition, enum below|
|clear_condition_magnitude|int|If applicable, the magnitude of the clear condition|
|timer|int|The timer of the level|
|autoscroll_speed|int|A unit of how fast the configured autoscroll speed is for the level|
|clears|int|Course clears|
|attempts|int|Course attempts|
|clear_rate|float|Course clear rate as a percentage between 0 and 100|
|plays|int|Course plays, or "footprints"|
|versus_matches|int|Course versus matches|
|coop_matches|int|Course coop matches|
|likes|int|Course likes|
|boos|int|Course boos|
|unique_players_and_versus|int|All unique players that have ever played this level, including the number of versus matches|
|weekly_likes|int|The weekly likes on this course|
|weekly_plays|int|The weekly plays on this course|
|uploader_pid|string|The player ID of the uploader|
|first_completer_pid|string|The player ID of the user who first cleared this course|
|record_holder_pid|string|The player ID of the user who held the world record at time of archival |
|level_data|bytes|The GZIP compressed decrypted level data, kaitai struct file is provided for reading|
|unk2|int|Unknown|
|unk3|bytes|Unknown|
|unk9|int|Unknown|
|unk10|int|Unknown|
|unk11|int|Unknown|
|unk12|int|Unknown|
### Data Splits
The dataset only contains a train split.
## Enums
The dataset contains some enum integer fields. This can be used to convert back to their string equivalents:
```python
GameStyles = {
0: "SMB1",
1: "SMB3",
2: "SMW",
3: "NSMBU",
4: "SM3DW"
}
Difficulties = {
0: "Easy",
1: "Normal",
2: "Expert",
3: "Super expert"
}
CourseThemes = {
0: "Overworld",
1: "Underground",
2: "Castle",
3: "Airship",
4: "Underwater",
5: "Ghost house",
6: "Snow",
7: "Desert",
8: "Sky",
9: "Forest"
}
TagNames = {
0: "None",
1: "Standard",
2: "Puzzle solving",
3: "Speedrun",
4: "Autoscroll",
5: "Auto mario",
6: "Short and sweet",
7: "Multiplayer versus",
8: "Themed",
9: "Music",
10: "Art",
11: "Technical",
12: "Shooter",
13: "Boss battle",
14: "Single player",
15: "Link"
}
ClearConditions = {
137525990: "Reach the goal without landing after leaving the ground.",
199585683: "Reach the goal after defeating at least/all (n) Mechakoopa(s).",
272349836: "Reach the goal after defeating at least/all (n) Cheep Cheep(s).",
375673178: "Reach the goal without taking damage.",
426197923: "Reach the goal as Boomerang Mario.",
436833616: "Reach the goal while wearing a Shoe.",
713979835: "Reach the goal as Fire Mario.",
744927294: "Reach the goal as Frog Mario.",
751004331: "Reach the goal after defeating at least/all (n) Larry(s).",
900050759: "Reach the goal as Raccoon Mario.",
947659466: "Reach the goal after defeating at least/all (n) Blooper(s).",
976173462: "Reach the goal as Propeller Mario.",
994686866: "Reach the goal while wearing a Propeller Box.",
998904081: "Reach the goal after defeating at least/all (n) Spike(s).",
1008094897: "Reach the goal after defeating at least/all (n) Boom Boom(s).",
1051433633: "Reach the goal while holding a Koopa Shell.",
1061233896: "Reach the goal after defeating at least/all (n) Porcupuffer(s).",
1062253843: "Reach the goal after defeating at least/all (n) Charvaargh(s).",
1079889509: "Reach the goal after defeating at least/all (n) Bullet Bill(s).",
1080535886: "Reach the goal after defeating at least/all (n) Bully/Bullies.",
1151250770: "Reach the goal while wearing a Goomba Mask.",
1182464856: "Reach the goal after defeating at least/all (n) Hop-Chops.",
1219761531: "Reach the goal while holding a Red POW Block. OR Reach the goal after activating at least/all (n) Red POW Block(s).",
1221661152: "Reach the goal after defeating at least/all (n) Bob-omb(s).",
1259427138: "Reach the goal after defeating at least/all (n) Spiny/Spinies.",
1268255615: "Reach the goal after defeating at least/all (n) Bowser(s)/Meowser(s).",
1279580818: "Reach the goal after defeating at least/all (n) Ant Trooper(s).",
1283945123: "Reach the goal on a Lakitu's Cloud.",
1344044032: "Reach the goal after defeating at least/all (n) Boo(s).",
1425973877: "Reach the goal after defeating at least/all (n) Roy(s).",
1429902736: "Reach the goal while holding a Trampoline.",
1431944825: "Reach the goal after defeating at least/all (n) Morton(s).",
1446467058: "Reach the goal after defeating at least/all (n) Fish Bone(s).",
1510495760: "Reach the goal after defeating at least/all (n) Monty Mole(s).",
1656179347: "Reach the goal after picking up at least/all (n) 1-Up Mushroom(s).",
1665820273: "Reach the goal after defeating at least/all (n) Hammer Bro(s.).",
1676924210: "Reach the goal after hitting at least/all (n) P Switch(es). OR Reach the goal while holding a P Switch.",
1715960804: "Reach the goal after activating at least/all (n) POW Block(s). OR Reach the goal while holding a POW Block.",
1724036958: "Reach the goal after defeating at least/all (n) Angry Sun(s).",
1730095541: "Reach the goal after defeating at least/all (n) Pokey(s).",
1780278293: "Reach the goal as Superball Mario.",
1839897151: "Reach the goal after defeating at least/all (n) Pom Pom(s).",
1969299694: "Reach the goal after defeating at least/all (n) Peepa(s).",
2035052211: "Reach the goal after defeating at least/all (n) Lakitu(s).",
2038503215: "Reach the goal after defeating at least/all (n) Lemmy(s).",
2048033177: "Reach the goal after defeating at least/all (n) Lava Bubble(s).",
2076496776: "Reach the goal while wearing a Bullet Bill Mask.",
2089161429: "Reach the goal as Big Mario.",
2111528319: "Reach the goal as Cat Mario.",
2131209407: "Reach the goal after defeating at least/all (n) Goomba(s)/Galoomba(s).",
2139645066: "Reach the goal after defeating at least/all (n) Thwomp(s).",
2259346429: "Reach the goal after defeating at least/all (n) Iggy(s).",
2549654281: "Reach the goal while wearing a Dry Bones Shell.",
2694559007: "Reach the goal after defeating at least/all (n) Sledge Bro(s.).",
2746139466: "Reach the goal after defeating at least/all (n) Rocky Wrench(es).",
2749601092: "Reach the goal after grabbing at least/all (n) 50-Coin(s).",
2855236681: "Reach the goal as Flying Squirrel Mario.",
3036298571: "Reach the goal as Buzzy Mario.",
3074433106: "Reach the goal as Builder Mario.",
3146932243: "Reach the goal as Cape Mario.",
3174413484: "Reach the goal after defeating at least/all (n) Wendy(s).",
3206222275: "Reach the goal while wearing a Cannon Box.",
3314955857: "Reach the goal as Link.",
3342591980: "Reach the goal while you have Super Star invincibility.",
3346433512: "Reach the goal after defeating at least/all (n) Goombrat(s)/Goombud(s).",
3348058176: "Reach the goal after grabbing at least/all (n) 10-Coin(s).",
3353006607: "Reach the goal after defeating at least/all (n) Buzzy Beetle(s).",
3392229961: "Reach the goal after defeating at least/all (n) Bowser Jr.(s).",
3437308486: "Reach the goal after defeating at least/all (n) Koopa Troopa(s).",
3459144213: "Reach the goal after defeating at least/all (n) Chain Chomp(s).",
3466227835: "Reach the goal after defeating at least/all (n) Muncher(s).",
3481362698: "Reach the goal after defeating at least/all (n) Wiggler(s).",
3513732174: "Reach the goal as SMB2 Mario.",
3649647177: "Reach the goal in a Koopa Clown Car/Junior Clown Car.",
3725246406: "Reach the goal as Spiny Mario.",
3730243509: "Reach the goal in a Koopa Troopa Car.",
3748075486: "Reach the goal after defeating at least/all (n) Piranha Plant(s)/Jumping Piranha Plant(s).",
3797704544: "Reach the goal after defeating at least/all (n) Dry Bones.",
3824561269: "Reach the goal after defeating at least/all (n) Stingby/Stingbies.",
3833342952: "Reach the goal after defeating at least/all (n) Piranha Creeper(s).",
3842179831: "Reach the goal after defeating at least/all (n) Fire Piranha Plant(s).",
3874680510: "Reach the goal after breaking at least/all (n) Crates(s).",
3974581191: "Reach the goal after defeating at least/all (n) Ludwig(s).",
3977257962: "Reach the goal as Super Mario.",
4042480826: "Reach the goal after defeating at least/all (n) Skipsqueak(s).",
4116396131: "Reach the goal after grabbing at least/all (n) Coin(s).",
4117878280: "Reach the goal after defeating at least/all (n) Magikoopa(s).",
4122555074: "Reach the goal after grabbing at least/all (n) 30-Coin(s).",
4153835197: "Reach the goal as Balloon Mario.",
4172105156: "Reach the goal while wearing a Red POW Box.",
4209535561: "Reach the Goal while riding Yoshi.",
4269094462: "Reach the goal after defeating at least/all (n) Spike Top(s).",
4293354249: "Reach the goal after defeating at least/all (n) Banzai Bill(s)."
}
```
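For example, these maps can be combined with the streaming iterator from earlier to print human-readable metadata for a level (a small sketch; field names follow the table above):
```python
from datasets import load_dataset

# Sketch: decode the integer metadata of one level using the maps above.
ds = load_dataset("TheGreatRambler/mm2_level", streaming=True, split="train")
level = next(iter(ds))
print(level["name"])
print("Style:", GameStyles[level["gamestyle"]])
print("Theme:", CourseThemes[level["theme"]])
print("Difficulty:", Difficulties[level["difficulty"]])
print("Tags:", TagNames[level["tag1"]], "/", TagNames[level["tag2"]])
```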
<!-- TODO create detailed statistics -->
## Dataset Creation
The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.
## Considerations for Using the Data
The dataset consists of levels from many different Mario Maker 2 players globally and as such their titles and descriptions could contain harmful language. Harmful depictions could also be present in the level data, should you choose to render it.
|
SLPL/naab | SLPL | 2022-11-03T06:33:48Z | 24,839 | 38 | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"multilinguality:monolingual",
"language:fa",
"license:mit",
"size_categories:10M<n<100M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2208.13486",
"region:us"
] | [
"fill-mask",
"text-generation"
] | 2022-08-18T13:47:40Z | null | ---
language:
- fa
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100M<n<1B
task_categories:
- fill-mask
- text-generation
task_ids:
- language-modeling
- masked-language-modeling
pretty_name: naab (A ready-to-use plug-and-play corpus in Farsi)
---
# naab: A ready-to-use plug-and-play corpus in Farsi
_[If you want to join our community to keep up with news, models and datasets from naab, click on [this](https://docs.google.com/forms/d/e/1FAIpQLSe8kevFl_ODCx-zapAuOIAQYr8IvkVVaVHOuhRL9Ha0RVJ6kg/viewform) link.]_
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Sharif Speech and Language Processing Lab](https://huggingface.co/SLPL)
- **Paper:** [naab: A ready-to-use plug-and-play corpus for Farsi](https://arxiv.org/abs/2208.13486)
- **Point of Contact:** [Sadra Sabouri](mailto:[email protected])
### Dataset Summary
naab is the biggest cleaned and ready-to-use open-source textual corpus in Farsi. It contains about 130GB of data, 250 million paragraphs, and 15 billion words. The project name is derived from the Farsi word ناب which means pure and high-grade. We also provide the raw version of the corpus, called naab-raw, and an easy-to-use pre-processor that can be employed by those who want to make a customized corpus.
You can use this corpus by the commands below:
```python
from datasets import load_dataset
dataset = load_dataset("SLPL/naab")
```
You may need to download parts/splits of this corpus too, if so use the command below (You can find more ways to use it [here](https://huggingface.co/docs/datasets/loading#slice-splits)):
```python
from datasets import load_dataset
dataset = load_dataset("SLPL/naab", split="train[:10%]")
```
**Note: make sure your machine has at least 130 GB of free space; the download may also take a while. If you are short on disk space or bandwidth, you can use the code snippet below, which helps you download only custom sections of naab:**
```python
from datasets import load_dataset
# ==========================================================
# Change only this part to select which shards
# of the corpus to download.
indices = {
"train": [5, 1, 2],
"test": [0, 2]
}
# ==========================================================
N_FILES = {
"train": 126,
"test": 3
}
_BASE_URL = "https://huggingface.co/datasets/SLPL/naab/resolve/main/data/"
data_url = {
"train": [_BASE_URL + "train-{:05d}-of-{:05d}.txt".format(x, N_FILES["train"]) for x in range(N_FILES["train"])],
"test": [_BASE_URL + "test-{:05d}-of-{:05d}.txt".format(x, N_FILES["test"]) for x in range(N_FILES["test"])],
}
for index in indices['train']:
assert index < N_FILES['train']
for index in indices['test']:
assert index < N_FILES['test']
data_files = {
"train": [data_url['train'][i] for i in indices['train']],
"test": [data_url['test'][i] for i in indices['test']]
}
print(data_files)
dataset = load_dataset('text', data_files=data_files, use_auth_token=True)
```
### Supported Tasks and Leaderboards
This corpus can be used to train any language model whose objective is Masked Language Modeling (MLM) or another self-supervised task.
- `language-modeling`
- `masked-language-modeling`
## Dataset Structure
Each row of the dataset will look like something like the below:
```json
{
'text': "این یک تست برای نمایش یک پاراگراف در پیکره متنی ناب است.",
}
```
+ `text` : the textual paragraph.
### Data Splits
This dataset includes two splits (`train` and `test`). We created them by randomly permuting the corpus and dividing it 95%/5% into (`train`, `test`). Since validation data is usually carved out of the `train` split during training, we do not provide a separate `validation` split.
| | train | test |
|-------------------------|------:|-----:|
| Input Sentences | 225892925 | 11083849 |
| Average Sentence Length | 61 | 25 |
Below you can see the log-based histogram of word/paragraph over the two splits of the dataset.
<div align="center">
<img src="https://huggingface.co/datasets/SLPL/naab/resolve/main/naab-hist.png">
</div>
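Since the corpus ships without a `validation` split (see above), one option is to carve a held-out slice out of `train` at load time — a minimal sketch in which the 1% slice, 5% cut, and seed are arbitrary illustrative choices:
```python
from datasets import load_dataset

# Sketch: naab provides only `train` and `test`; carve a validation set out
# of `train` if one is needed. Slice size, cut, and seed are arbitrary here.
ds = load_dataset("SLPL/naab", split="train[:1%]")
parts = ds.train_test_split(test_size=0.05, seed=42)
train_set, validation_set = parts["train"], parts["test"]
```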
## Dataset Creation
### Curation Rationale
Due to the lack of large amounts of text data in lower-resource languages - like Farsi - researchers working on these languages have found it hard to fine-tune such models. This can lead to a situation in which the golden opportunity for fine-tuning models lies in the hands of only a few companies or countries, which contributes to weakening open science.
The previously largest cleaned and merged textual corpus in Farsi was a 70GB text corpus compiled from 8 big datasets that had been cleaned and could be downloaded directly. Our solution to the discussed issues is called naab. It provides **126GB** (including more than **224 million** sequences and nearly **15 billion** words) as the training corpus and **2.3GB** (including nearly **11 million** sequences and nearly **300 million** words) as the test corpus.
### Source Data
The textual corpora that we used as our source data are illustrated in the figure below. It contains 5 corpora which are linked in the coming sections.
<div align="center">
<img src="https://huggingface.co/datasets/SLPL/naab/resolve/main/naab-pie.png">
</div>
#### Persian NLP
[This](https://github.com/persiannlp/persian-raw-text) corpus includes eight corpora that are sorted based on their volume as below:
- [Common Crawl](https://commoncrawl.org/): 65GB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/commoncrawl_fa_merged.txt))
- [MirasText](https://github.com/miras-tech/MirasText): 12GB
- [W2C – Web to Corpus](https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0022-6133-9): 1GB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/w2c_merged.txt))
- Persian Wikipedia (March 2020 dump): 787MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/fawiki_merged.txt))
- [Leipzig Corpora](https://corpora.uni-leipzig.de/): 424MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/LeipzigCorpus.txt))
- [VOA corpus](https://jon.dehdari.org/corpora/): 66MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/voa_persian_2003_2008_cleaned.txt))
- [Persian poems corpus](https://github.com/amnghd/Persian_poems_corpus): 61MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/poems_merged.txt))
- [TEP: Tehran English-Persian parallel corpus](http://opus.nlpl.eu/TEP.php): 33MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/TEP_fa.txt))
#### AGP
This corpus was a formerly private corpus for ASR Gooyesh Pardaz which is now published for all users by this project. This corpus contains more than 140 million paragraphs summed up in 23GB (after cleaning). This corpus is a mixture of both formal and informal paragraphs that are crawled from different websites and/or social media.
#### OSCAR-fa
[OSCAR](https://oscar-corpus.com/) or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. Data is distributed by language in both original and deduplicated form. We used the unshuffled-deduplicated-fa subset of this corpus; after cleaning, about 36GB remained.
#### Telegram
Telegram, a cloud-based instant messaging service, is a widely used application in Iran. Accordingly, we prepared a list of Telegram channels in Farsi covering various topics including sports, daily news, jokes, movies and entertainment, etc. The text data extracted from the mentioned channels mainly contains informal language.
#### LSCP
[The Large Scale Colloquial Persian Language Understanding dataset](https://iasbs.ac.ir/~ansari/lscp/) has 120M sentences from 27M casual Persian sentences with their derivation trees, part-of-speech tags, sentiment polarity, and translations in English, German, Czech, Italian, and Hindi. We used only the Farsi part of it; after cleaning, 2.3GB remained. Since the dataset is casual, it may help our corpus include more informal sentences, although its volume is not comparable to that of the formal paragraphs.
#### Initial Data Collection and Normalization
The data collection process was separated into two parts. In the first part, we searched for existing corpora. After downloading these corpora, we started to crawl data from some social networks. Then, thanks to [ASR Gooyesh Pardaz](https://asr-gooyesh.com/en/), we were provided with enough textual data to start the naab journey.
We used a preprocessor based on stream-based Linux commands so that the process would be less time- and memory-consuming. The code is provided [here](https://github.com/Sharif-SLPL/t5-fa/tree/main/preprocess).
### Personal and Sensitive Information
Since this corpus is essentially a compilation of former corpora, we take no responsibility for personal information included in it. If you detect any such violations, please let us know and we will try our best to remove them from the corpus ASAP.
We tried our best to provide anonymity while keeping the crucial information. We shuffled some parts of the corpus so the information passing through possible conversations wouldn't be harmful.
## Additional Information
### Dataset Curators
+ Sadra Sabouri (Sharif University of Technology)
+ Elnaz Rahmati (Sharif University of Technology)
### Licensing Information
MIT
### Citation Information
```
@article{sabouri2022naab,
title={naab: A ready-to-use plug-and-play corpus for Farsi},
author={Sabouri, Sadra and Rahmati, Elnaz and Gooran, Soroush and Sameti, Hossein},
journal={arXiv preprint arXiv:2208.13486},
year={2022}
}
```
DOI: [https://doi.org/10.48550/arXiv.2208.13486](https://doi.org/10.48550/arXiv.2208.13486)
### Contributions
Thanks to [@sadrasabouri](https://github.com/sadrasabouri) and [@elnazrahmati](https://github.com/elnazrahmati) for adding this dataset.
### Keywords
+ Farsi
+ Persian
+ raw text
+ پیکره فارسی
+ پیکره متنی
+ آموزش مدل زبانی
|
somosnlp-hackathon-2022/spanish-to-quechua | somosnlp-hackathon-2022 | 2022-10-25T10:03:46Z | 210 | 11 | [
"task_categories:translation",
"language:es",
"language:qu",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"translation"
] | 2022-04-03T04:02:58Z | 1 | ---
language:
- es
- qu
task_categories:
- translation
task:
- translation
---
# Spanish to Quechua
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [team members](#team-members)
## Dataset Description
This dataset is a compilation of websites and other datasets listed in the [dataset creation section](#dataset-creation). It contains translations from Spanish (es) to the Quechua of Ayacucho (qu).
## Dataset Structure
### Data Fields
- es: The sentence in Spanish.
- qu: The sentence in Quechua of Ayacucho.
### Data Splits
- train: to train the model (102,747 sentences).
- validation: to validate the model during training (12,844 sentences).
- test: to evaluate the model when training is finished (12,843 sentences).
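A minimal loading sketch (repo id from this card; the `es`/`qu` field names follow the Data Fields section above — inspect one row if the schema differs):
```python
from datasets import load_dataset

# Sketch: load the corpus and look at one translation pair.
# Field names ('es', 'qu') are taken from the Data Fields section.
ds = load_dataset("somosnlp-hackathon-2022/spanish-to-quechua")
example = ds["train"][0]
print(example["es"], "->", example["qu"])
```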
## Dataset Creation
### Source Data
This dataset has generated from:
- "Mundo Quechua" by "Ivan Acuña" - [available here](https://mundoquechua.blogspot.com/2006/07/frases-comunes-en-quechua.html)
- "Kuyakuykim (Te quiero): Apps con las que podrías aprender quechua" by "El comercio" - [available here](https://elcomercio.pe/tecnologia/actualidad/traductor-frases-romanticas-quechua-noticia-467022-noticia/)
- "Piropos y frases de amor en quechua" by "Soy Quechua" - [available here](https://www.soyquechua.org/2019/12/palabras-en-quechua-de-amor.html)
- "Corazón en quechua" by "Soy Quechua" - [available here](https://www.soyquechua.org/2020/05/corazon-en-quechua.html)
- "Oraciones en Español traducidas a Quechua" by "Tatoeba" - [available here](https://tatoeba.org/es/sentences/search?from=spa&query=&to=que)
- "AmericasNLP 2021 Shared Task on Open Machine Translation" by "americasnlp2021" - [available here](https://github.com/AmericasNLP/americasnlp2021/tree/main/data/quechua-spanish/parallel_data/es-quy)
### Data cleaning
- The dataset was manually cleaned during compilation, as some words of one language were related to several words of the other language.
## Considerations for Using the Data
This is a first version of the dataset; we expect to improve it over time, especially to neutralize the biblical themes.
## Team members
- [Sara Benel](https://huggingface.co/sbenel)
- [Jose Vílchez](https://huggingface.co/JCarlos) |
THUDM/humaneval-x | THUDM | 2022-10-25T06:08:38Z | 998 | 85 | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"language:code",
"license:apache-2.0",
"size_categories:n<1K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"text-generation"
] | 2022-09-20T16:23:53Z | null | ---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- apache-2.0
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: HumanEval-X
---
# HumanEval-X
## Dataset Description
[HumanEval-X](https://github.com/THUDM/CodeGeeX) is a benchmark for evaluating the multilingual ability of code generative models. It consists of 820 high-quality human-crafted data samples (each with test cases) in Python, C++, Java, JavaScript, and Go, and can be used for various tasks, such as code generation and translation.
## Languages
The dataset contains coding problems in 5 programming languages: Python, C++, Java, JavaScript, and Go.
## Dataset Structure
To load the dataset you need to specify a subset among the 5 existing languages `[python, cpp, go, java, js]`. By default `python` is loaded.
```python
from datasets import load_dataset
load_dataset("THUDM/humaneval-x", "js")
DatasetDict({
test: Dataset({
features: ['task_id', 'prompt', 'declaration', 'canonical_solution', 'test', 'example_test'],
num_rows: 164
})
})
```
```python
next(iter(data["test"]))
{'task_id': 'JavaScript/0',
'prompt': '/* Check if in given list of numbers, are any two numbers closer to each other than\n given threshold.\n >>> hasCloseElements([1.0, 2.0, 3.0], 0.5)\n false\n >>> hasCloseElements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)\n true\n */\nconst hasCloseElements = (numbers, threshold) => {\n',
'declaration': '\nconst hasCloseElements = (numbers, threshold) => {\n',
'canonical_solution': ' for (let i = 0; i < numbers.length; i++) {\n for (let j = 0; j < numbers.length; j++) {\n if (i != j) {\n let distance = Math.abs(numbers[i] - numbers[j]);\n if (distance < threshold) {\n return true;\n }\n }\n }\n }\n return false;\n}\n\n',
'test': 'const testHasCloseElements = () => {\n console.assert(hasCloseElements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.3) === true)\n console.assert(\n hasCloseElements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.05) === false\n )\n console.assert(hasCloseElements([1.0, 2.0, 5.9, 4.0, 5.0], 0.95) === true)\n console.assert(hasCloseElements([1.0, 2.0, 5.9, 4.0, 5.0], 0.8) === false)\n console.assert(hasCloseElements([1.0, 2.0, 3.0, 4.0, 5.0, 2.0], 0.1) === true)\n console.assert(hasCloseElements([1.1, 2.2, 3.1, 4.1, 5.1], 1.0) === true)\n console.assert(hasCloseElements([1.1, 2.2, 3.1, 4.1, 5.1], 0.5) === false)\n}\n\ntestHasCloseElements()\n',
'example_test': 'const testHasCloseElements = () => {\n console.assert(hasCloseElements([1.0, 2.0, 3.0], 0.5) === false)\n console.assert(\n hasCloseElements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3) === true\n )\n}\ntestHasCloseElements()\n'}
```
## Data Fields
* ``task_id``: indicates the target language and ID of the problem. Language is one of ["Python", "Java", "JavaScript", "CPP", "Go"].
* ``prompt``: the function declaration and docstring, used for code generation.
* ``declaration``: only the function declaration, used for code translation.
* ``canonical_solution``: human-crafted example solutions.
* ``test``: hidden test samples, used for evaluation.
* ``example_test``: public test samples (appeared in prompt), used for evaluation.
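As a sketch of how these fields compose: for the JS subset, the sample instance above shows that `test` invokes itself, so `prompt + canonical_solution + test` forms a complete runnable program (and running model-generated code in place of the canonical solution should, of course, be sandboxed):
```python
from datasets import load_dataset

# Sketch: assemble a self-contained JS program from the fields above.
# The `test` block of the JS subset calls itself (see the sample instance).
ds = load_dataset("THUDM/humaneval-x", "js", split="test")
sample = ds[0]
with open("sample_0.js", "w") as f:
    f.write(sample["prompt"] + sample["canonical_solution"] + sample["test"])
# Run with e.g. `node sample_0.js`; console.assert reports failed checks.
```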
## Data Splits
Each subset has one split: test.
## Citation Information
Refer to https://github.com/THUDM/CodeGeeX. |
rahular/itihasa | rahular | 2022-10-24T18:06:01Z | 621 | 19 | [
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:translation",
"source_datasets:original",
"language:sa",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us",
"conditional-text-generation"
] | [
"text2text-generation"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- sa
- en
license:
- apache-2.0
multilinguality:
- translation
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
pretty_name: Itihasa
metrics:
- bleu
- sacrebleu
- rouge
- ter
- chrF
tags:
- conditional-text-generation
---
# Itihāsa
Itihāsa is a Sanskrit-English translation corpus containing 93,000 Sanskrit shlokas and their English translations extracted from M. N. Dutt's seminal works on The Rāmāyana and The Mahābhārata. The paper which introduced this dataset can be found [here](https://aclanthology.org/2021.wat-1.22/).
This repository contains the randomized train, development, and test sets. The original extracted data can be found [here](https://github.com/rahular/itihasa/tree/gh-pages/res) in JSON format. If you just want to browse the data, you can go [here](http://rahular.com/itihasa/).
## Usage
```
>> from datasets import load_dataset
>> dataset = load_dataset("rahular/itihasa")
>> dataset
DatasetDict({
train: Dataset({
features: ['translation'],
num_rows: 75162
})
validation: Dataset({
features: ['translation'],
num_rows: 6149
})
test: Dataset({
features: ['translation'],
num_rows: 11722
})
})
>> dataset['train'][0]
{'translation': {'en': 'The ascetic Vālmīki asked Nārada, the best of sages and foremost of those conversant with words, ever engaged in austerities and Vedic studies.',
'sn': 'ॐ तपः स्वाध्यायनिरतं तपस्वी वाग्विदां वरम्। नारदं परिपप्रच्छ वाल्मीकिर्मुनिपुङ्गवम्॥'}}
```
## Citation
If you found this dataset to be useful, please consider citing the paper as follows:
```
@inproceedings{aralikatte-etal-2021-itihasa,
title = "Itihasa: A large-scale corpus for {S}anskrit to {E}nglish translation",
author = "Aralikatte, Rahul and
de Lhoneux, Miryam and
Kunchukuttan, Anoop and
S{\o}gaard, Anders",
booktitle = "Proceedings of the 8th Workshop on Asian Translation (WAT2021)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.wat-1.22",
pages = "191--197",
abstract = "This work introduces Itihasa, a large-scale translation dataset containing 93,000 pairs of Sanskrit shlokas and their English translations. The shlokas are extracted from two Indian epics viz., The Ramayana and The Mahabharata. We first describe the motivation behind the curation of such a dataset and follow up with empirical analysis to bring out its nuances. We then benchmark the performance of standard translation models on this corpus and show that even state-of-the-art transformer architectures perform poorly, emphasizing the complexity of the dataset.",
}
``` |
softcatala/catalan-dictionary | softcatala | 2022-10-24T17:38:30Z | 55 | 1 | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:ca",
"license:gpl-2.0",
"license:lgpl-2.1",
"size_categories:1M<n<10M",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"text-generation"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- ca
license:
- gpl-2.0
- lgpl-2.1
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: catalan-dictionary
---
# Dataset Card for ca-text-corpus
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/Softcatala/catalan-dict-tools
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Catalan word lists with part-of-speech labeling curated by humans. Contains 1,180,773 forms, including verbs, nouns, adjectives, names, and toponyms. These word lists are used to build applications such as Catalan spellcheckers or verb-querying applications.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Catalan (`ca`).
## Dataset Structure
The dataset contains 3 columns:
* Form (e.g. cantaré)
* Lemma (e.g. cantar)
* POS tag (e.g. VMIF1S00)
You can find the meaning of each POS tag here: https://freeling-user-manual.readthedocs.io/en/latest/tagsets/tagset-ca/#part-of-speech-verb
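A minimal parsing sketch — the repository ships plain-text word lists, and the whitespace-separated `form lemma tag` layout per line is an assumption drawn from the column description above, so inspect a few lines before relying on it:
```python
from datasets import load_dataset

# Sketch: assumes one entry per line with whitespace-separated columns
# (form, lemma, POS tag) as described above -- verify on a few lines first.
ds = load_dataset("softcatala/catalan-dictionary", split="train")
form, lemma, pos = ds[0]["text"].split()
print(form, lemma, pos)  # e.g. cantaré cantar VMIF1S00
```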
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[LGPL 2.1](https://www.gnu.org/licenses/old-licenses/lgpl-2.1.html).
[GPL 2.0](https://www.gnu.org/licenses/old-licenses/gpl-2.0.html).
### Citation Information
[More Information Needed]
### Contributions
- Softcatalà
- Jaume Ortolà
- Joan Moratinos
GEM/Taskmaster | GEM | 2022-10-24T15:30:09Z | 108 | 2 | [
"annotations_creators:none",
"language_creators:unknown",
"multilinguality:unknown",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2012.12458",
"region:us",
"dialog-response-generation"
] | [
"conversational"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- none
language_creators:
- unknown
language:
- en
license:
- cc-by-4.0
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- conversational
task_ids: []
pretty_name: Taskmaster
tags:
- dialog-response-generation
---
# Dataset Card for GEM/Taskmaster
## Dataset Description
- **Homepage:** https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020
- **Repository:** https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020
- **Paper:** https://arxiv.org/abs/2012.12458
- **Leaderboard:** N/A
- **Point of Contact:** Karthik Krishnamoorthi
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/Taskmaster).
### Dataset Summary
This is a large task-oriented dialog dataset in which a model has to produce the response. The input contains the context and a structured representation of what the model is supposed to generate. The input is already pre-formatted as a string, turning this into a pure text-to-text problem.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/Taskmaster')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/Taskmaster).
#### website
[Github](https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020)
#### paper
[Arxiv](https://arxiv.org/abs/2012.12458)
#### authors
Google researchers
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
[Github](https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020)
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Github](https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[Arxiv](https://arxiv.org/abs/2012.12458)
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@article{byrne2020tickettalk,
title={TicketTalk: Toward human-level performance with end-to-end, transaction-based dialog systems},
author={Byrne, Bill and Krishnamoorthi, Karthik and Ganesh, Saravanan and Kale, Mihir Sanjay},
journal={arXiv preprint arXiv:2012.12458},
year={2020}
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Karthik Krishnamoorthi
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
[email protected]
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
no
#### Covered Dialects
<!-- info: What dialects are covered? Are there multiple dialects per language? -->
<!-- scope: periscope -->
NA
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`
#### Whose Language?
<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
NA
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-4.0: Creative Commons Attribution 4.0 International
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
Dialogues
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Dialog Response Generation
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
a movie ticketing dialog dataset with 23,789 annotated conversations.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`other`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
NA
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Google researchers
#### Funding
<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
Google
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Tosin Adewumi (Luleå University of Technology)
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
- `gem_id`: The unique example id
- `context`: The context of the conversation
- `target`: A string representing the target
- `references`: A list representing the target(s)
- `conversation_id`: A unique ID of the conversation
#### Reason for Structure
<!-- info: How was the dataset structure determined? -->
<!-- scope: microscope -->
NA
#### How were labels chosen?
<!-- info: How were the labels chosen? -->
<!-- scope: microscope -->
NA
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
```
{'context': "<PR>get_movie_attribute<PRAN>rating.movie<PRAV>rated R<C><U>I wanna see a movie<A>where are you?<U>spring hills kansas<PN>find_theaters<PAN>location<PAV>spring hills kansas<PR>find_theaters<PRAN>name.theater<PRAV>AMC Holiday Theater<PRAV>Cinemark Downtown<A>there are 2 theaters near you, the AMC Holiday Theater and Cinemark Downtown. Did you know which movie you'd like to see?<U>funny one please<PN>find_movies<PAN>location<PAV>spring hills kansas<PR>find_movies<PRAN>name.movie<PRAV>Not My Problem<PRAV>Family Jewels<PN>get_movie_attribute<PAN>name.movie<PAV>Not My Problem<PAN>attribute<PAV>name.genre<PR>get_movie_attribute<PRAN>name.genre<PRAV>comedy<PN>get_movie_attribute<PAN>name.movie<PAV>Not My Problem<PAN>attribute<PAV>name.person<PR>get_movie_attribute<PRAN>name.person<PRAV>Matt Damon<PN>get_movie_attribute<PAN>name.movie<PAV>Not My Problem<PAN>attribute<PAV>name.person<PR>get_movie_attribute<PRAN>name.person<PRAV>Noah Schnapp<PN>get_movie_attribute<PAN>name.movie<PAV>Family Jewels<PAN>attribute<PAV>name.genre<PR>get_movie_attribute<PRAN>name.genre<PRAV>romantic comedy<PN>get_movie_attribute<PAN>name.movie<PAV>Family Jewels<PAN>attribute<PAV>name.person<PR>get_movie_attribute<PRAN>name.person<PRAV>Melissa McCarthy<PN>get_movie_attribute<PAN>name.movie<PAV>Family Jewels<PAN>attribute<PAV>name.person<PR>get_movie_attribute<PRAN>name.person<PRAV>Ryan Reynolds<A>There's the comedy film called Not My Problem starring Matt Damon and Noah Schnapp. There's also a romantic comedy called Family Jewels starring Melissa McCarthy and Ryan Reynolds.<U>what ratings are there?<PN>get_movie_attribute<PAN>name.movie<PAV>Not My Problem<PAN>attribute<PAV>rating.movie<PR>get_movie_attribute<PRAN>rating.movie<PRAV>rated PG-13<PN>get_movie_attribute<PAN>name.movie<PAV>Family Jewels<PAN>attribute<PAV>rating.movie",
'conversation_id': 'dlg-d1f52e7e-c34c-4e85-b406-85ed138b5068',
'gem_id': 'Taskmaster-train-0',
'references': ['Not My Problem is rated PG-13 and Family Jewels is rated R.'],
'target': 'Not My Problem is rated PG-13 and Family Jewels is rated R.'}
```
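The context is a single flat string with angle-bracket markers; the small sketch below splits it back into (marker, text) pieces. Reading `<U>` as a user turn, `<A>` as an assistant turn, and `<PN>`/`<PR>` as API call/response is our interpretation of the TicketTalk format, not something this card documents:
```python
import re

# Sketch: split the flat context on its <...> markers; `data` comes from the
# loading snippet above. The marker glossary is an assumption, not documented.
context = data["train"][0]["context"]
pieces = re.split(r"(<[A-Z]+>)", context)
print(pieces[:10])
```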
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
- `train`: 187,182 examples
- `dev`: 23,406 examples
- `test`: 23,316 examples
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
NA
####
<!-- info: What does an outlier of the dataset in terms of length/perplexity/embedding look like? -->
<!-- scope: microscope -->
NA
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
Dialogue generation that makes sense
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
yes
#### Unique Language Coverage
<!-- info: Does this dataset cover other languages than other datasets for the same task? -->
<!-- scope: periscope -->
no
#### Difference from other GEM datasets
<!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
<!-- scope: microscope -->
NA
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
NA
### GEM-Specific Curation
#### Modificatied for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
yes
#### GEM Modifications
<!-- info: What changes have been made to the original dataset? -->
<!-- scope: periscope -->
`other`
#### Modification Details
<!-- info: For each of these changes, describe them in more detail and provide the intended purpose of the modification -->
<!-- scope: microscope -->
A `gem_id` field was added to each of the 3 data splits
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
#### Pointers to Resources
<!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. -->
<!-- scope: microscope -->
https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020
#### Technical Terms
<!-- info: Technical terms used in this card and the dataset and their definitions -->
<!-- scope: microscope -->
NA
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
BLEU: 60
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`BLEU`
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
automatic evaluation
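A minimal sketch of such automatic evaluation, scoring model outputs against the `references` field with corpus-level BLEU via `sacrebleu` (an assumption for illustration; the exact evaluation setup used by the authors is not specified on this card):
```python
import sacrebleu  # pip install sacrebleu

predictions = ["Not My Problem is rated PG-13 and Family Jewels is rated R."]
references = [["Not My Problem is rated PG-13 and Family Jewels is rated R."]]

# sacrebleu expects reference *streams*: ref_streams[i][j] is the i-th
# reference for the j-th hypothesis, hence the transpose via zip.
score = sacrebleu.corpus_bleu(predictions, [list(refs) for refs in zip(*references)])
print(score.score)
```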
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
yes
#### Other Evaluation Approaches
<!-- info: What evaluation approaches have others used? -->
<!-- scope: periscope -->
NA
#### Relevant Previous Results
<!-- info: What are the most relevant previous results for this task/dataset? -->
<!-- scope: microscope -->
NA
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
NA
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
A movie ticketing dialog dataset with 23,789 annotated conversations.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Crowdsourced`
#### Where was it crowdsourced?
<!-- info: If crowdsourced, where from? -->
<!-- scope: periscope -->
`Participatory experiment`
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
NA
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
Ticketing
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
not validated
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
none
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
no
#### Justification for Using the Data
<!-- info: If not, what is the justification for reusing the data? -->
<!-- scope: microscope -->
NA
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
#### Justification for no PII
<!-- info: Provide a justification for selecting `no PII` above. -->
<!-- scope: periscope -->
It's based on ticketing without personal information
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
unsure
#### Are the Language Producers Representative of the Language?
<!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
<!-- scope: periscope -->
NA
## Considerations for Using the Data
### PII Risks and Liability
#### Potential PII Risk
<!-- info: Considering your answers to the PII part of the Data Curation Section, describe any potential privacy risks to the data subjects and creators when using the dataset. -->
<!-- scope: microscope -->
NA
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`open license - commercial use allowed`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`public domain`
### Known Technical Limitations
#### Technical Limitations
<!-- info: Describe any known technical limitations, such as spurious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. -->
<!-- scope: microscope -->
NA
#### Unsuited Applications
<!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. -->
<!-- scope: microscope -->
NA
#### Discouraged Use Cases
<!-- info: What are some discouraged use cases of a model trained to maximize the proposed metrics on this dataset? In particular, think about settings where decisions made by a model that performs reasonably well on the metric may still have strong negative consequences for users or members of the public. -->
<!-- scope: microscope -->
NA
|
GEM/SciDuet | GEM | 2022-10-24T15:30:06Z | 73 | 3 | [
"task_categories:other",
"annotations_creators:none",
"language_creators:unknown",
"multilinguality:unknown",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us",
"text-to-slide"
] | [
"other"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- none
language_creators:
- unknown
language:
- en
license:
- apache-2.0
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- other
task_ids: []
pretty_name: SciDuet
tags:
- text-to-slide
---
# Dataset Card for GEM/SciDuet
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/GEM/SciDuet
- **Repository:** https://github.com/IBM/document2slides/tree/main/SciDuet-ACL
- **Paper:** https://aclanthology.org/2021.naacl-main.111/
- **Leaderboard:** N/A
- **Point of Contact:** N/A
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/SciDuet).
### Dataset Summary
This dataset supports the document-to-slide generation task where a model has to generate presentation slide content from the text of a document.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/SciDuet')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/SciDuet).
#### website
[Huggingface](https://huggingface.co/datasets/GEM/SciDuet)
#### paper
[ACL Anthology](https://aclanthology.org/2021.naacl-main.111/)
#### authors
Edward Sun, Yufang Hou, Dakuo Wang, Yunfeng Zhang, Nancy Wang
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
[Huggingface](https://huggingface.co/datasets/GEM/SciDuet)
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Github](https://github.com/IBM/document2slides/tree/main/SciDuet-ACL)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[ACL Anthology](https://aclanthology.org/2021.naacl-main.111/)
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@inproceedings{sun-etal-2021-d2s,
title = "{D}2{S}: Document-to-Slide Generation Via Query-Based Text Summarization",
author = "Sun, Edward and
Hou, Yufang and
Wang, Dakuo and
Zhang, Yunfeng and
Wang, Nancy X. R.",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.111",
doi = "10.18653/v1/2021.naacl-main.111",
pages = "1405--1418",
abstract = "Presentations are critical for communication in all areas of our lives, yet the creation of slide decks is often tedious and time-consuming. There has been limited research aiming to automate the document-to-slides generation process and all face a critical challenge: no publicly available dataset for training and benchmarking. In this work, we first contribute a new dataset, SciDuet, consisting of pairs of papers and their corresponding slides decks from recent years{'} NLP and ML conferences (e.g., ACL). Secondly, we present D2S, a novel system that tackles the document-to-slides task with a two-step approach: 1) Use slide titles to retrieve relevant and engaging text, figures, and tables; 2) Summarize the retrieved context into bullet points with long-form question answering. Our evaluation suggests that long-form QA outperforms state-of-the-art summarization baselines on both automated ROUGE metrics and qualitative human evaluation.",
}
```
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
no
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
apache-2.0: Apache License 2.0
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
Promote research on the task of document-to-slides generation
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Text-to-Slide
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`industry`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
IBM Research
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Edward Sun, Yufang Hou, Dakuo Wang, Yunfeng Zhang, Nancy Wang
#### Funding
<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
IBM Research
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Yufang Hou (IBM Research), Dakuo Wang (IBM Research)
### Dataset Structure
#### How were labels chosen?
<!-- info: How were the labels chosen? -->
<!-- scope: microscope -->
The original papers and slides (both in PDF format) are carefully processed by a combination of PDF/image processing toolkits. The text contents of multiple slides that correspond to the same slide title are merged.
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
Training, validation and testing data contain 136, 55, and 81 papers from ACL Anthology and their corresponding slides, respectively.
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
The dataset integrated into GEM is the ACL portion of the whole dataset described in the [paper](https://aclanthology.org/2021.naacl-main.111). It contains the full Dev and Test sets, and a portion of the Train dataset.
Note that although we cannot release the whole training dataset due to copyright issues, researchers can still use our released data procurement code to generate the training dataset from the online ICML/NeurIPS anthologies.
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
SciDuet is the first publicly available dataset for the challenging task of document-to-slides generation, which requires a model to "understand" long-form text, choose appropriate content, and generate key points.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
no
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
content selection, long-form text understanding and generation
### GEM-Specific Curation
#### Modificatied for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
no
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
content selection, long-form text understanding and key-point generation
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`ROUGE`
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
Automatic evaluation metric: ROUGE (a minimal scoring sketch is given after the list below).
Human evaluation criteria (Readability, Informativeness, Consistency):
1) Readability: The generated slide content is coherent, concise, and grammatically correct;
2) Informativeness: The generated slide provides sufficient and necessary information that corresponds to the given slide title, regardless of its similarity to the original slide;
3) Consistency: The generated slide content is similar to the original author’s reference slide.
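A minimal sketch of the ROUGE side of this evaluation using the `rouge_score` package (an assumption for illustration; the paper's exact ROUGE configuration is not specified on this card):
```python
from rouge_score import rouge_scorer  # pip install rouge-score

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

reference = "SciDuet pairs NLP/ML papers with their presentation slides."
generated = "The SciDuet dataset pairs scientific papers with slide decks."

# score(target, prediction) returns precision/recall/F1 per ROUGE variant.
scores = scorer.score(reference, generated)
for name, s in scores.items():
    print(name, round(s.fmeasure, 4))
```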
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
yes
#### Other Evaluation Approaches
<!-- info: What evaluation approaches have others used? -->
<!-- scope: periscope -->
ROUGE + Human Evaluation
#### Relevant Previous Results
<!-- info: What are the most relevant previous results for this task/dataset? -->
<!-- scope: microscope -->
Paper "D2S: Document-to-Slide Generation Via Query-Based
Text Summarization" reports 20.47, 5.26 and 19.08 for ROUGE-1, ROUGE-2 and ROUGE-L (f-score).
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
Provide a benchmark dataset for the document-to-slides task.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Other`
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
not validated
#### Data Preprocessing
<!-- info: How was the text data pre-processed? (Enter N/A if the text was not pre-processed) -->
<!-- scope: microscope -->
Text on papers was extracted through Grobid. Figures and captions were extracted through pdffigures. Text on slides was extracted through the IBM Watson Discovery package and OCR by pytesseract. Figures and tables that appear on slides and papers were linked through multiscale template matching with OpenCV. Further dataset cleaning was performed with standard string-based heuristics on sentence building, equation and floating caption removal, and duplicate line deletion.
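As a concrete illustration, the "duplicate line deletion" step could be implemented as follows (a sketch under our own assumptions; the authors' exact heuristics may differ):
```python
def dedup_lines(text: str) -> str:
    # Drop any non-empty line whose stripped content has already been seen.
    seen, kept = set(), []
    for line in text.splitlines():
        key = line.strip()
        if key and key in seen:
            continue
        seen.add(key)
        kept.append(line)
    return "\n".join(kept)
```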
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
algorithmically
#### Filter Criteria
<!-- info: What were the selection criteria? -->
<!-- scope: microscope -->
Slide text that contained additional formatting information, such as "*** University", was filtered out.
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
none
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
yes
#### Consent Policy Details
<!-- info: What was the consent policy? -->
<!-- scope: microscope -->
The original dataset was open-sourced under Apache-2.0.
Some of the original dataset creators are part of the GEM v2 dataset infrastructure team and take care of integrating this dataset into GEM.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
yes/very likely
#### Categories of PII
<!-- info: What categories of PII are present or suspected in the data? -->
<!-- scope: periscope -->
`generic PII`
#### Any PII Identification?
<!-- info: Did the curators use any automatic/manual method to identify PII in the dataset? -->
<!-- scope: periscope -->
no identification
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
unsure
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`non-commercial use only`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`research use only`
### Known Technical Limitations
|
liweili/c4_200m | liweili | 2022-10-23T11:00:46Z | 176 | 38 | [
"task_categories:text-generation",
"source_datasets:allenai/c4",
"language:en",
"size_categories:10M<n<100M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us",
"grammatical-error-correction"
] | [
"text-generation"
] | 2022-03-02T23:29:22Z | 1 | ---
language:
- en
source_datasets:
- allenai/c4
task_categories:
- text-generation
pretty_name: C4 200M Grammatical Error Correction Dataset
tags:
- grammatical-error-correction
---
# C4 200M
# Dataset Summary
c4_200m is a collection of 185 million sentence pairs generated from the cleaned English dataset from C4. This dataset can be used in grammatical error correction (GEC) tasks.
The corruption edits and scripts used to synthesize this dataset are referenced from: [C4_200M Synthetic Dataset](https://github.com/google-research-datasets/C4_200M-synthetic-dataset-for-grammatical-error-correction)
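The pairs can be consumed with the `datasets` library; below is a minimal usage sketch that streams the corpus instead of downloading all 185 million pairs (streaming support is assumed here):
```python
from datasets import load_dataset

# Stream to avoid materializing 185M sentence pairs on disk.
ds = load_dataset("liweili/c4_200m", split="train", streaming=True)

for pair in ds.take(3):
    print(pair["input"], "->", pair["output"])
```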
# Description
As noted above, this dataset contains 185 million sentence pairs. Each pair has two attributes: `input` and `output`. Here is a sample from the dataset:
```
{
"input": "Bitcoin is for $7,094 this morning, which CoinDesk says."
"output": "Bitcoin goes for $7,094 this morning, according to CoinDesk."
}
``` |
wdc/products-2017 | wdc | 2022-10-23T05:50:24Z | 1,371 | 12 | [
"task_categories:text-classification",
"annotations_creators:weak supervision",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:100K<n<1M",
"modality:tabular",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"text-classification",
"data-integration"
] | 2022-05-16T13:23:21Z | 2 | ---
annotations_creators:
- weak supervision
- expert-generated
language:
- en
language_bcp47:
- en-US
license:
- unknown
multilinguality:
- monolingual
pretty_name: products-2017
size_categories:
- 1K<n<10K
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
- data-integration
task_ids:
- entity-matching
- identity-resolution
- product-matching
paperswithcode_id: wdc-products
---
# Dataset Card for [products-2017]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [LSPCv2 Homepage](http://webdatacommons.org/largescaleproductcorpus/v2/index.html)
- **Point of Contact:** [Ralph Peeters](mailto:[email protected])
### Dataset Summary
Many e-shops have started to mark up product data within their HTML pages using the schema.org vocabulary. The Web Data Commons project regularly extracts such data from the Common Crawl, a large public web crawl. The Web Data Commons Training and Test Sets for Large-Scale Product Matching contain product offers from different e-shops in the form of binary product pairs (with corresponding label "match" or "no match").
In order to support the evaluation of machine learning-based matching methods, the data is split into training, validation and test set. We provide training and validation sets in four different sizes for four product categories. The labels of the test sets were manually checked while those of the training sets were derived using shared product identifiers from the Web via weak supervision.
The data stems from the WDC Product Data Corpus for Large-Scale Product Matching - Version 2.0 which consists of 26 million product offers originating from 79 thousand websites.
### Supported Tasks and Leaderboards
Entity Matching, Product Matching
### Languages
English
## Dataset Structure
### Data Instances
The data is structured as pairs of product offers with the corresponding match/non-match label. This is an example instance from the computers category:
```
{"pair_id":"581109#16637861","label":0,"id_left":581109,"category_left":"Computers_and_Accessories","cluster_id_left":1324529,"brand_left":"\"Gigabyte\"@en","title_left":" \"Gigabyte Radeon RX 480 G1 Gaming 4096MB GDDR5 PCI-Express Graphics Card\"@en \"Gigabyte Gr| OcUK\"@en","description_left":"\"GV-RX480G1 GAMING-4GD, Core Clock: 1202MHz, Boost Clock: 1290MHz, Memory: 4096MB 7000MHz GDDR5, Stream Processors: 2304, Crossfire Ready, VR Ready, FreeSync Ready, 3 Years Warranty\"@en ","price_left":null,"specTableContent_left":null,"id_right":16637861,"category_right":"Computers_and_Accessories","cluster_id_right":107415,"brand_right":"\"Gigabyte\"@en","title_right":" \"Gigabyte Radeon RX 550 Gaming OC 2048MB GDDR5 PCI-Express Graphics Card\"@en \"Gigabyte Gr| OcUK\"@en","description_right":"\"GV-RX550GAMING OC-2GD, Boost: 1219MHz, Memory: 2048MB 7000MHz GDDR5, Stream Processors: 512, DirectX 12 Support, 3 Years Warranty\"@en ","price_right":null,"specTableContent_right":null}
```
### Data Fields
- pair_id: unique identifier of a pair (string)
- label: binary label, match or non-match (int)
The following attributes are contained twice, once for the first and once for the second product offer
- id: unique id of the product offer (int)
- category: product category (string)
- cluster_id: id of the product cluster from the original corpus this offer belongs to (int)
- brand: brand of the product (string)
- title: product title (string)
- description: longer product description (string)
- price: price of the product offer (string)
- specTableContent: additional data found in specification tables on the webpage that contains the product offer (string)
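As an illustration (not part of the official benchmark code), the paired attributes above can be serialized into two offer strings plus a label for a binary matching classifier:
```python
ATTRS = ("brand", "title", "description", "price", "specTableContent")

def serialize(row: dict, side: str) -> str:
    # Concatenate the non-null attributes of one offer ("left" or "right").
    values = (row.get(f"{attr}_{side}") for attr in ATTRS)
    return " ".join(str(v) for v in values if v)

def to_classifier_input(row: dict):
    # (offer A text, offer B text, match label) for a cross-encoder style model.
    return serialize(row, "left"), serialize(row, "right"), row["label"]
```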
### Data Splits
- Computers
- Test set - 1100 pairs
- Small Train set - 2267 pairs
- Small Validation set - 567 pairs
- Medium Train set - 6475 pairs
- Medium Validation set - 1619 pairs
- Large Train set - 26687 pairs
- Large Validation set - 6672 pairs
- XLarge Train set - 54768 pairs
- Xlarge Validation set - 13693 pairs
- Cameras
- Test set - 1100 pairs
- Small Train set - 1508 pairs
- Small Validation set - 378 pairs
- Medium Train set - 4204 pairs
- Medium Validation set - 1051 pairs
- Large Train set - 16028 pairs
- Large Validation set - 4008 pairs
- XLarge Train set - 33821 pairs
- Xlarge Validation set - 8456 pairs
- Watches
- Test set - 1100 pairs
- Small Train set - 1804 pairs
- Small Validation set - 451 pairs
- Medium Train set - 5130 pairs
- Medium Validation set - 1283 pairs
- Large Train set - 21621 pairs
- Large Validation set - 5406 pairs
- XLarge Train set - 49255 pairs
- Xlarge Validation set - 12314 pairs
- Shoes
- Test set - 1100 pairs
- Small Train set - 1650 pairs
- Small Validation set - 413 pairs
- Medium Train set - 4644 pairs
- Medium Validation set - 1161 pairs
- Large Train set - 18391 pairs
- Large Validation set - 4598 pairs
- XLarge Train set - 33943 pairs
- Xlarge Validation set - 8486 pairs
## Dataset Creation
### Annotations
#### Annotation process
- Training and Validation sets: distant supervision via shared schema.org product IDs
- Test sets: Single expert annotator
#### Who are the annotators?
[Ralph Peeters](https://www.uni-mannheim.de/dws/people/researchers/phd-students/ralph-peeters/)
## Additional Information
### Citation Information
```
@inproceedings{primpeli2019wdc,
title={The WDC training dataset and gold standard for large-scale product matching},
author={Primpeli, Anna and Peeters, Ralph and Bizer, Christian},
booktitle={Companion Proceedings of The 2019 World Wide Web Conference},
pages={381--386},
year={2019}
}
```
|
jfrenz/legalglue | jfrenz | 2022-10-22T22:14:36Z | 166 | 17 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:multi-label-classification",
"task_ids:topic-classification",
"multilinguality:multilingual",
"source_datasets:extended",
"language:en",
"language:da",
"language:de",
"language:nl",
"language:sv",
"language:bg",
"language:cs",
"language:hr",
"language:pl",
"language:sk",
"language:sl",
"language:es",
"language:fr",
"language:it",
"language:pt",
"language:ro",
"language:et",
"language:fi",
"language:hu",
"language:lt",
"language:lv",
"language:el",
"language:mt",
"arxiv:2003.13016",
"arxiv:2110.00806",
"arxiv:2109.00904",
"region:us",
"german-ler",
"lener-br"
] | [
"text-classification",
"token-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
language:
- en
- da
- de
- nl
- sv
- bg
- cs
- hr
- pl
- sk
- sl
- es
- fr
- it
- pt
- ro
- et
- fi
- hu
- lt
- lv
- el
- mt
multilinguality:
- multilingual
source_datasets:
- extended
task_categories:
- text-classification
- token-classification
task_ids:
- named-entity-recognition
- multi-label-classification
- topic-classification
pretty_name: LegalGLUE
tags:
- german-ler
- lener-br
---
# Dataset Card for "LegalGLUE"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://git.rwth-aachen.de/johanna.frenz/legalglue
### Dataset Summary
The "Legal General Language Understanding Evaluation" (LegalGLUE) dataset was created as part of a bachelor thesis.
It consists of four already existing datasets covering three task types and a total of 23 different languages.
### Supported Tasks
<table>
<tr><td>Dataset</td><td>Source</td><td>Task Type</td><td>Languages</td><tr>
<tr><td>German_LER</td><td> <a href="https://arxiv.org/abs/2003.13016">Leitner et al.</a></td><td>Named Entity Recognition</td><td>German</td></tr>
<tr><td>LeNER_Br</td><td> <a href="https://github.com/peluz/lener-br"> de Araujo et al., 2018</a></td><td>Named Entity Recognition</td><td> Portuguese </td></tr>
<tr><td>SwissJudgmentPrediction</td><td> <a href="https://arxiv.org/abs/2110.00806">Niklaus et al.</a> </td><td>Binary Text Classification</td><td>German, French, Italian</td></tr>
<tr><td>MultEURLEX</td><td> <a href="https://arxiv.org/abs/2109.00904">Chalkidis et al. </a> </td><td>Multi-label Text Classification</td><td>23 languages (see below)</td></tr>
</table>
### Languages
see Split section
## Dataset Structure
### Data Instances
#### German_LER
German_LER example
```python
from datasets import load_dataset
dataset = load_dataset('jfrenz/legalglue', 'german_ler')
```
```json
{
'id': '66722',
'tokens':['4.', 'Die', 'Kostenentscheidung', 'für', 'das', 'gerichtliche', 'Antragsverfahren', 'beruht', 'auf', '§', '21', 'Abs.', '2', 'Satz', '1', 'i.', 'V.', 'm.', '§', '20', 'Abs.', '1', 'Satz', '1', 'WBO', '.'],
'ner_tags': [38, 38, 38, 38, 38, 38, 38, 38, 38, 3, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 38]
}
```
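The integer `ner_tags` can be mapped back to label strings through the feature metadata; a sketch, assuming `ner_tags` is declared as a sequence of `ClassLabel` in the loading script (the convention for NER datasets in the `datasets` library):
```python
from datasets import load_dataset

ds = load_dataset('jfrenz/legalglue', 'german_ler')
label_names = ds['train'].features['ner_tags'].feature.names
example = ds['train'][0]
print([label_names[tag] for tag in example['ner_tags']])
```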
#### LeNER-Br
LeNER-Br example
```python
from datasets import load_dataset
dataset = load_dataset('jfrenz/legalglue', 'lener_br')
```
```json
{
'id': '7826',
'tokens': ['Firmado', 'por', 'assinatura', 'digital', '(', 'MP', '2.200-2/2001', ')', 'JOSÉ', 'ROBERTO', 'FREIRE', 'PIMENTA', 'Ministro', 'Relator', 'fls', '.', 'PROCESSO', 'Nº', 'TST-RR-1603-79.2010.5.20.0001'],
'ner_tags': [0, 0, 0, 0, 0, 9, 10, 0, 3, 4, 4, 4, 0, 0, 0, 0, 11, 12, 12]}
```
#### SwissJudgmentPrediction
swissJudgmentPrediction_de example
```python
from datasets import load_dataset
dataset = load_dataset('jfrenz/legalglue', 'swissJudgmentPrediction_de')
```
```json
{
'id': 48755,
'year': 2014,
'text': "Sachverhalt: A. X._ fuhr am 25. Juli 2012 bei Mülligen mit seinem Personenwagen auf dem zweiten Überholstreifen der Autobahn A1 in Richtung Zürich. Gemäss Anklage schloss er auf einen Lieferwagen auf und schwenkte vom zweiten auf den ersten Überholstreifen aus. Danach fuhr er an zwei Fahrzeugen rechts vorbei und wechselte auf die zweite Überholspur zurück. B. Das Obergericht des Kantons Aargau erklärte X._ am 14. Januar 2014 zweitinstanzlich der groben Verletzung der Verkehrsregeln schuldig. Es bestrafte ihn mit einer bedingten Geldstrafe von 30 Tagessätzen zu Fr. 430.-- und einer Busse von Fr. 3'000.--. C. X._ führt Beschwerde in Strafsachen. Er beantragt, er sei von Schuld und Strafe freizusprechen. Eventualiter sei die Sache an die Vorinstanz zurückzuweisen. ",
'label': 0,
'language': 'de',
'region': 'Northwestern Switzerland',
'canton': 'ag',
'legal area': 'penal law'
}
```
#### MultiEURLEX
Monolingual example out of the MultiEURLEX-Dataset
```python
from datasets import load_dataset
dataset = load_dataset('jfrenz/legalglue', 'multi_eurlex_de')
```
```json
{
'celex_id': '32002R0130',
'text': 'Verordnung (EG) Nr. 130/2002 der Kommission\nvom 24. Januar 2002\nbezüglich der im Rahmen der Auss...',
'labels': [3, 17, 5]}
```
Multilingual example out of the MultiEURLEX-Dataset
```python
from datasets import load_dataset
dataset = load_dataset('jfrenz/legalglue', 'multi_eurlex_all_languages')
```
```json
{
'celex_id': '32002R0130',
'text': {
'bg': None,
'cs': None,
'da': 'Kommissionens ...',
'de': 'Verordnung ... ',
'el': '...',
'en': '...',
...
},
'labels': [3, 17, 5]
}
```
### Data Fields
#### German_LER
- `id`: id of the sample
- `tokens`: the tokens of the sample text
- `ner_tags`: the NER tags of each token
#### LeNER_Br
- `id`: id of the sample
- `tokens`: the tokens of the sample text
- `ner_tags`: the NER tags of each token
#### SwissJudgmentPrediction
- `id`: (**int**) ID of the document
- `year`: (**int**) the publication year
- `text`: (**str**) the facts of the case
- `label`: (**class label**) the judgment outcome: 0 (dismissal) or 1 (approval)
- `language`: (**str**) one of (de, fr, it)
- `region`: (**str**) the region of the lower court
- `canton`: (**str**) the canton of the lower court
- `legal area`: (**str**) the legal area of the case
#### MultiEURLEX
Monolingual use:
- `celex_id`: (**str**) Official Document ID of the document
- `text`: (**str**) An EU Law
- `labels`: (**List[int]**) List of relevant EUROVOC concepts (labels)
Multilingual use:
- `celex_id`: (**str**) Official Document ID of the document
- `text`: (dict[**str**]) A dictionary with the 23 languages as keys and the corresponding EU Law as values.
- `labels`: (**List[int]**) List of relevant EUROVOC concepts (labels)
The labels lists consists per default of level 1 EUROVOC concepts. Can be changed by adding the label_level parameter when loading the dataset. (available levels: level_1, level_2, level_3, all_levels)
```python
from datasets import load_dataset
dataset = load_dataset('jfrenz/legalglue', 'multi_eurlex_de', label_level="level_3")
```
### Data Splits
<table>
<tr><th>Dataset</th><th> Language </th> <th> ISO code </th> <th> Number of Documents train/dev/test </th> </tr>
<tr><td>German-LER</td><td>German</td> <td><b>de</b></td> <td> 66723 / - / - </td> </tr>
<tr><td>LeNER-Br</td><td>Portuguese</td> <td><b>pt</b></td> <td> 7828 / 1177 / 1390 </td> </tr>
<tr><td rowspan="3">SwissJudgmentPrediction</td><td>German</td> <td><b>de</b></td> <td> 35458 / 4705 / 9725 </td> </tr>
<tr><td> French </td><td><b>fr</b></td><td> 21179 / 3095 / 6820 </td> </tr>
<tr><td> Italian </td><td><b>it</b></td><td> 3072 / 408 / 812 </td> </tr>
<tr><td rowspan="23">MultiEURLEX</td><td>English </td> <td><b>en</b></td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> German </td> <td> <b>de</b> </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> French </td> <td> <b>fr</b> </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> Italian </td> <td> <b>it</b> </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> Spanish </td> <td> <b>es</b> </td> <td> 52,785 / 5,000 / 5,000 </td> </tr>
<tr><td> Polish </td> <td> <b>pl</b> </td> <td> 23,197 / 5,000 / 5,000 </td> </tr>
<tr><td> Romanian </td> <td> <b>ro</b> </td> <td> 15,921 / 5,000 / 5,000 </td> </tr>
<tr><td> Dutch </td> <td> <b>nl</b> </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> Greek </td> <td> <b>el</b> </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> Hungarian </td> <td> <b>hu</b> </td> <td> 22,664 / 5,000 / 5,000 </td> </tr>
<tr><td> Portuguese </td> <td> <b>pt</b> </td> <td> 23,188 / 5,000 / 5,000 </td> </tr>
<tr><td> Czech </td> <td> <b>cs</b> </td> <td> 23,187 / 5,000 / 5,000 </td> </tr>
<tr><td> Swedish </td> <td> <b>sv</b> </td> <td> 42,490 / 5,000 / 5,000 </td> </tr>
<tr><td> Bulgarian </td> <td> <b>bg</b> </td> <td> 15,986 / 5,000 / 5,000 </td> </tr>
<tr><td> Danish </td> <td> <b>da</b> </td> <td> 55,000 / 5,000 / 5,000 </td> </tr>
<tr><td> Finnish </td> <td> <b>fi</b> </td> <td> 42,497 / 5,000 / 5,000 </td> </tr>
<tr><td> Slovak </td> <td> <b>sk</b> </td> <td> 15,986 / 5,000 / 5,000 </td> </tr>
<tr><td> Lithuanian </td> <td> <b>lt</b> </td> <td> 23,188 / 5,000 / 5,000 </td> </tr>
<tr><td> Croatian </td> <td> <b>hr</b> </td> <td> 7,944 / 2,500 / 5,000 </td> </tr>
<tr><td> Slovene </td> <td> <b>sl</b> </td> <td> 23,184 / 5,000 / 5,000 </td> </tr>
<tr><td> Estonian </td> <td> <b>et</b> </td> <td> 23,126 / 5,000 / 5,000 </td> </tr>
<tr><td> Latvian </td> <td> <b>lv</b> </td> <td> 23,188 / 5,000 / 5,000 </td> </tr>
<tr><td> Maltese </td> <td> <b>mt</b> </td> <td> 17,521 / 5,000 / 5,000 </td> </tr>
</table>
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
codeparrot/github-code | codeparrot | 2022-10-20T15:01:14Z | 18,962 | 325 | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"language:code",
"license:other",
"region:us"
] | [
"text-generation"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- other
multilinguality:
- multilingual
pretty_name: github-code
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
---
# GitHub Code Dataset
## Dataset Description
The GitHub Code dataset consists of 115M code files from GitHub in 32 programming languages with 60 extensions, totaling 1TB of data. The dataset was created from the public GitHub dataset on Google BigQuery.
### How to use it
The GitHub Code dataset is a very large dataset so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following two lines of code:
```python
from datasets import load_dataset
ds = load_dataset("codeparrot/github-code", streaming=True, split="train")
print(next(iter(ds)))
#OUTPUT:
{
'code': "import mod189 from './mod189';\nvar value=mod189+1;\nexport default value;\n",
'repo_name': 'MirekSz/webpack-es6-ts',
'path': 'app/mods/mod190.js',
'language': 'JavaScript',
'license': 'isc',
'size': 73
}
```
You can see that besides the code, repo name, and path, the programming language, license, and size of the file are also part of the dataset. You can filter the dataset for any subset of the 30 included languages (see the full list below) by passing the languages you want as a list. E.g. if your dream is to build a Codex model for Dockerfiles use the following configuration:
```python
ds = load_dataset("codeparrot/github-code", streaming=True, split="train", languages=["Dockerfile"])
print(next(iter(ds))["code"])
#OUTPUT:
"""\
FROM rockyluke/ubuntu:precise
ENV DEBIAN_FRONTEND="noninteractive" \
TZ="Europe/Amsterdam"
...
"""
```
We also have access to the license of the originating repo of a file, so we can filter for licenses in the same way we filtered for languages:
```python
ds = load_dataset("codeparrot/github-code", streaming=True, split="train", licenses=["mit", "isc"])
licenses = []
for element in iter(ds).take(10_000):
licenses.append(element["license"])
print(Counter(licenses))
#OUTPUT:
Counter({'mit': 9896, 'isc': 104})
```
Naturally, you can also download the full dataset. Note that this will download ~300GB compressed text data and the uncompressed dataset will take up ~1TB of storage:
```python
ds = load_dataset("codeparrot/github-code", split="train")
```
## Data Structure
### Data Instances
```python
{
'code': "import mod189 from './mod189';\nvar value=mod189+1;\nexport default value;\n",
'repo_name': 'MirekSz/webpack-es6-ts',
'path': 'app/mods/mod190.js',
'language': 'JavaScript',
'license': 'isc',
'size': 73
}
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|code|string|content of source file|
|repo_name|string|name of the GitHub repository|
|path|string|path of file in GitHub repository|
|language|string|programming language as inferred by extension|
|license|string|license of GitHub repository|
|size|int|size of source file in bytes|
### Data Splits
The dataset only contains a train split.
## Languages
The dataset contains 30 programming languages with over 60 extensions:
```python
{
"Assembly": [".asm"],
"Batchfile": [".bat", ".cmd"],
"C": [".c", ".h"],
"C#": [".cs"],
"C++": [".cpp", ".hpp", ".c++", ".h++", ".cc", ".hh", ".C", ".H"],
"CMake": [".cmake"],
"CSS": [".css"],
"Dockerfile": [".dockerfile", "Dockerfile"],
"FORTRAN": ['.f90', '.f', '.f03', '.f08', '.f77', '.f95', '.for', '.fpp'],
"GO": [".go"],
"Haskell": [".hs"],
"HTML":[".html"],
"Java": [".java"],
"JavaScript": [".js"],
"Julia": [".jl"],
"Lua": [".lua"],
"Makefile": ["Makefile"],
"Markdown": [".md", ".markdown"],
"PHP": [".php", ".php3", ".php4", ".php5", ".phps", ".phpt"],
"Perl": [".pl", ".pm", ".pod", ".perl"],
"PowerShell": ['.ps1', '.psd1', '.psm1'],
"Python": [".py"],
"Ruby": [".rb"],
"Rust": [".rs"],
"SQL": [".sql"],
"Scala": [".scala"],
"Shell": [".sh", ".bash", ".command", ".zsh"],
"TypeScript": [".ts", ".tsx"],
"TeX": [".tex"],
"Visual Basic": [".vb"]
}
```
## Licenses
Each example is also annotated with the license of the associated repository. There are in total 15 licenses:
```python
[
'mit',
'apache-2.0',
'gpl-3.0',
'gpl-2.0',
'bsd-3-clause',
'agpl-3.0',
'lgpl-3.0',
'lgpl-2.1',
'bsd-2-clause',
'cc0-1.0',
'epl-1.0',
'mpl-2.0',
'unlicense',
'isc',
'artistic-2.0'
]
```
## Dataset Statistics
The dataset contains 115M files and the sum of all the source code file sizes is 873 GB (note that the size of the dataset is larger due to the extra fields). A breakdown per language is given in the plot and table below:

| | Language |File Count| Size (GB)|
|---:|:-------------|---------:|-------:|
| 0 | Java | 19548190 | 107.70 |
| 1 | C | 14143113 | 183.83 |
| 2 | JavaScript | 11839883 | 87.82 |
| 3 | HTML | 11178557 | 118.12 |
| 4 | PHP | 11177610 | 61.41 |
| 5 | Markdown | 8464626 | 23.09 |
| 6 | C++ | 7380520 | 87.73 |
| 7 | Python | 7226626 | 52.03 |
| 8 | C# | 6811652 | 36.83 |
| 9 | Ruby | 4473331 | 10.95 |
| 10 | GO | 2265436 | 19.28 |
| 11 | TypeScript | 1940406 | 24.59 |
| 12 | CSS | 1734406 | 22.67 |
| 13 | Shell | 1385648 | 3.01 |
| 14 | Scala | 835755 | 3.87 |
| 15 | Makefile | 679430 | 2.92 |
| 16 | SQL | 656671 | 5.67 |
| 17 | Lua | 578554 | 2.81 |
| 18 | Perl | 497949 | 4.70 |
| 19 | Dockerfile | 366505 | 0.71 |
| 20 | Haskell | 340623 | 1.85 |
| 21 | Rust | 322431 | 2.68 |
| 22 | TeX | 251015 | 2.15 |
| 23 | Batchfile | 236945 | 0.70 |
| 24 | CMake | 175282 | 0.54 |
| 25 | Visual Basic | 155652 | 1.91 |
| 26 | FORTRAN | 142038 | 1.62 |
| 27 | PowerShell | 136846 | 0.69 |
| 28 | Assembly | 82905 | 0.78 |
| 29 | Julia | 58317 | 0.29 |
## Dataset Creation
The dataset was created in two steps:
1. Files with the extensions given in the list above were retrieved from the GitHub dataset on BigQuery (full query [here](https://huggingface.co/datasets/codeparrot/github-code/blob/main/query.sql)). The query was executed on _Mar 16, 2022, 6:23:39 PM UTC+1_.
2. Files with lines longer than 1000 characters and duplicates (exact duplicates ignoring whitespaces) were dropped (full preprocessing script [here](https://huggingface.co/datasets/codeparrot/github-code/blob/main/github_preprocessing.py)).
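The "exact duplicates ignoring whitespaces" check in step 2 can be realized with a whitespace-insensitive content hash; a sketch for illustration (the preprocessing script linked above is the authoritative version):
```python
import hashlib

def content_fingerprint(code: str) -> str:
    # Hash the file with all whitespace removed, so files differing only in
    # spacing or line breaks map to the same fingerprint.
    stripped = "".join(code.split())
    return hashlib.sha256(stripped.encode("utf-8")).hexdigest()

seen = set()

def is_duplicate(code: str) -> bool:
    fingerprint = content_fingerprint(code)
    if fingerprint in seen:
        return True
    seen.add(fingerprint)
    return False
```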
## Considerations for Using the Data
The dataset consists of source code from a wide range of repositories. As such they can potentially include harmful or biased code as well as sensitive information like passwords or usernames.
## Releases
You can load any older version of the dataset with the `revision` argument:
```Python
ds = load_dataset("codeparrot/github-code", revision="v1.0")
```
### v1.0
- Initial release of dataset
- The query was executed on _Feb 14, 2022, 12:03:16 PM UTC+1_
### v1.1
- Fix missing Scala/TypeScript
- Fix deduplication issue with inconsistent Python `hash`
- The query was executed on _Mar 16, 2022, 6:23:39 PM UTC+1_
|
stochastic/random_streetview_images_pano_v0.0.2 | stochastic | 2022-10-14T02:05:40Z | 1,653 | 19 | [
"task_categories:image-classification",
"task_ids:multi-label-image-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-classification"
] | 2022-10-05T19:39:59Z | 2 | ---
annotations_creators:
- expert-generated
language: []
language_creators:
- expert-generated
license:
- mit
multilinguality:
- multilingual
pretty_name: panoramic, street view images of random places on Earth
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- image-classification
task_ids:
- multi-label-image-classification
---
# Dataset Card for panoramic street view images (v.0.0.2)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The random streetview images dataset consists of labeled, panoramic images scraped from randomstreetview.com. Each image shows a location
accessible by Google Streetview; several views have been roughly combined to provide a ~360 degree view of a single location. The dataset was designed with the intent to geolocate an image purely based on its visual content.
### Supported Tasks and Leaderboards
None as of now!
### Languages
labels: Addresses are written in a combination of English and the official language of the country they belong to.
images: Some images contain signage in a local language, albeit less commonly.
## Dataset Structure
For now, images exist exclusively in the `train` split and it is at the user's discretion to split the dataset how they please.
### Data Instances
For each instance, there is:
- timestamped file name: `{YYYYMMDD}_{address}.jpg`
- the image
- the country iso-alpha2 code
- the latitude
- the longitude
- the address
For more examples see the [dataset viewer](https://huggingface.co/datasets/stochastic/random_streetview_images_pano_v0.0.2/viewer/stochastic--random_streetview_images_pano_v0.0.2/train)
```
{
  filename: '20221001_Jarše Slovenia_46.1069942_14.9378597.jpg',
  country_iso_alpha2: 'SI',
  latitude: '46.028223',
  longitude: '14.345106',
  address: 'Jarše Slovenia_46.1069942_14.9378597'
}
```
### Data Fields
- country_iso_alpha2: a unique 2 character code for each country in the world following the ISO 3166 standard
- latitude: the angular distance of a place north or south of the earth's equator
- longitude: the angular distance of a place east or west of the standard meridian of the Earth
- address: the physical address written from most micro -> macro order (Street, Neighborhood, City, State, Country)
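Since each filename encodes the capture date and the address, both can be recovered by parsing; a sketch based on the `{YYYYMMDD}_{address}.jpg` convention described above:
```python
from datetime import datetime

def parse_filename(name: str):
    # "{YYYYMMDD}_{address}.jpg" -> (date, address)
    stem = name.rsplit(".", 1)[0]
    date_str, address = stem.split("_", 1)
    return datetime.strptime(date_str, "%Y%m%d").date(), address

print(parse_filename("20221001_Jarše Slovenia_46.1069942_14.9378597.jpg"))
# (datetime.date(2022, 10, 1), 'Jarše Slovenia_46.1069942_14.9378597')
```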
### Data Splits
'train': all images are currently contained in the 'train' split
## Dataset Creation
### Curation Rationale
Google StreetView Images [requires money per image scraped](https://developers.google.com/maps/documentation/streetview/usage-and-billing).
This dataset provides about 10,000 of those images for free.
### Source Data
#### Who are the source image producers?
Google Street View provides the raw images; this dataset combines various cuts of those images into a panorama.
[More Information Needed]
### Annotations
#### Annotation process
The address, latitude, and longitude are all scraped from the API response. While portions of the data have been manually validated, accuracy ultimately depends on the correctness of the API response.
### Personal and Sensitive Information
While Google Street View blurs out faces and license plates to the best of its ability, this is not guaranteed, as can be seen in some photos. Please review [Google's documentation](https://www.google.com/streetview/policy/) for more information
## Considerations for Using the Data
### Social Impact of Dataset
This dataset was designed after inspiration from playing the popular online game [geoguessr.com](https://geoguessr.com). We ask that users of this dataset consider whether their geolocation-based application will harm or jeopardize any fair institution or system.
### Discussion of Biases
Out of the ~195 countries that exist, this dataset only contains images from about 55 countries. Each country has an average of 175 photos, with some countries having slightly fewer.
The 55 countries are:
["ZA","KR","AR","BW","GR","SK","HK","NL","PE","AU","KH","LT","NZ","RO","MY","SG","AE","FR","ES","IT","IE","LV","IL","JP","CH","AD","CA","RU","NO","SE","PL","TW","CO","BD","HU","CL","IS","BG","GB","US","SI","BT","FI","BE","EE","SZ","UA","CZ","BR","DK","ID","MX","DE","HR","PT","TH"]
In terms of continental representation:
| continent | Number of Countries Represented |
|:-----------------------| -------------------------------:|
| Europe | 30 |
| Asia | 13 |
| South America | 5 |
| Africa | 3 |
| North America | 3 |
| Oceania | 2 |
This is not a fair representation of the world and its various climates, neighborhoods, and places. But it's a start!
### Other Known Limitations
As per [Google's policy](https://www.google.com/streetview/policy/): __"Street View imagery shows only what our cameras were able to see on the day that they passed by the location. Afterwards, it takes months to process them. This means that content you see could be anywhere from a few months to a few years old."__
### Licensing Information
MIT License
### Citation Information
### Contributions
Thanks to [@WinsonTruong](https://github.com/WinsonTruong) and [@David Hrachovy](https://github.com/dayweek) for helping develop this dataset.
This dataset was developed for a Geolocator project with the aforementioned developers, [@samhita-alla](https://github.com/samhita-alla) and [@yiyixuxu](https://github.com/yiyixuxu).
Thanks to [FSDL](https://fullstackdeeplearning.com) for a wonderful class and online cohort. |
clips/mqa | clips | 2022-09-27T12:38:50Z | 767 | 52 | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:no-annotation",
"language_creators:other",
"multilinguality:multilingual",
"source_datasets:original",
"language:ca",
"language:en",
"language:de",
"language:es",
"language:fr",
"language:ru",
"language:ja",
"language:it",
"language:zh",
"language:pt",
"language:nl",
"language:tr",
"language:pl",
"language:vi",
"language:ar",
"language:id",
"language:uk",
"language:ro",
"language:no",
"language:th",
"language:sv",
"language:el",
"language:fi",
"language:he",
"language:da",
"language:cs",
"language:ko",
"language:fa",
"language:hi",
"language:hu",
"language:sk",
"language:lt",
"language:et",
"language:hr",
"language:is",
"language:lv",
"language:ms",
"language:bg",
"language:sr",
"license:cc0-1.0",
"size_categories:100M<n<1B",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"question-answering"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- no-annotation
language_creators:
- other
language:
- ca
- en
- de
- es
- fr
- ru
- ja
- it
- zh
- pt
- nl
- tr
- pl
- vi
- ar
- id
- uk
- ro
- no
- th
- sv
- el
- fi
- he
- da
- cs
- ko
- fa
- hi
- hu
- sk
- lt
- et
- hr
- is
- lv
- ms
- bg
- sr
license:
- cc0-1.0
multilinguality:
- multilingual
pretty_name: MQA - a Multilingual FAQ and CQA Dataset
size_categories:
- unknown
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
---
# MQA
MQA is a Multilingual corpus of Questions and Answers (MQA) parsed from the [Common Crawl](https://commoncrawl.org/). Questions are divided in two types: *Frequently Asked Questions (FAQ)* and *Community Question Answering (CQA)*.
```python
from datasets import load_dataset
all_data = load_dataset("clips/mqa", language="en")

# each example follows this structure:
{
"name": "the title of the question (if any)",
"text": "the body of the question (if any)",
"answers": [{
"text": "the text of the answer",
"is_accepted": "true|false"
}]
}
faq_data = load_dataset("clips/mqa", scope="faq", language="en")
cqa_data = load_dataset("clips/mqa", scope="cqa", language="en")
```
## Languages
We collected around **234M pairs** of questions and answers in **39 languages**. To download a language specific subset you need to specify the language key as configuration. See below for an example.
```python
load_dataset("clips/mqa", language="en") # replace "en" by any language listed below
```
| Language | FAQ | CQA |
|:-----------|------------:|-----------:|
| en | 174,696,414 | 14,082,180 |
| de | 17,796,992 | 1,094,606 |
| es | 14,967,582 | 845,836 |
| fr | 13,096,727 | 1,299,359 |
| ru | 12,435,022 | 1,715,131 |
| it | 6,850,573 | 455,027 |
| ja | 6,369,706 | 2,089,952 |
| zh | 5,940,796 | 579,596 |
| pt | 5,851,286 | 373,982 |
| nl | 4,882,511 | 503,376 |
| tr | 3,893,964 | 370,975 |
| pl | 3,766,531 | 70,559 |
| vi | 2,795,227 | 96,528 |
| id | 2,253,070 | 200,441 |
| ar | 2,211,795 | 805,661 |
| uk | 2,090,611 | 27,260 |
| el | 1,758,618 | 17,167 |
| no | 1,752,820 | 11,786 |
| sv | 1,733,582 | 20,024 |
| fi | 1,717,221 | 41,371 |
| ro | 1,689,471 | 93,222 |
| th | 1,685,463 | 73,204 |
| da | 1,554,581 | 16,398 |
| he | 1,422,449 | 88,435 |
| ko | 1,361,901 | 49,061 |
| cs | 1,224,312 | 143,863 |
| hu | 878,385 | 27,639 |
| fa | 787,420 | 118,805 |
| sk | 785,101 | 4,615 |
| lt | 672,105 | 301 |
| et | 547,208 | 441 |
| hi | 516,342 | 205,645 |
| hr | 458,958 | 11,677 |
| is | 437,748 | 37 |
| lv | 428,002 | 88 |
| ms | 230,568 | 7,460 |
| bg | 198,671 | 5,320 |
| sr | 110,270 | 3,980 |
| ca | 100,201 | 1,914 |
## FAQ vs. CQA
You can download the *Frequently Asked Questions* (FAQ) or the *Community Question Answering* (CQA) part of the dataset.
```python
faq = load_dataset("clips/mqa", scope="faq")
cqa = load_dataset("clips/mqa", scope="cqa")
all = load_dataset("clips/mqa", scope="all")
```
Although FAQ and CQA questions share the same structure, CQA questions can have multiple answers for a given question, while FAQ questions have a single answer. FAQ questions typically only have a title (`name` key), while CQA questions have a title and a body (`name` and `text`).
## Nesting and Data Fields
You can specify three different nesting levels: `question`, `page` and `domain`.
#### Question
```python
load_dataset("clips/mqa", level="question") # default
```
The default level is the question object:
- **name**: the title of the question (if any) in markdown format
- **text**: the body of the question (if any) in markdown format
- **answers**: a list of answers
  - **text**: the body of the answer in markdown format
  - **name**: the title of the answer (if any) in markdown format
  - **is_accepted**: true if the answer is selected.
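A minimal usage sketch building on these fields (it assumes a single `train` split, and that `is_accepted` may be stored either as a boolean or as the strings shown in the schema above):
```python
from datasets import load_dataset

# Stream the English subset and keep questions with at least one
# accepted answer; field names follow the schema documented above.
data = load_dataset("clips/mqa", language="en", split="train", streaming=True)

with_accepted = (
    example for example in data
    if any(answer["is_accepted"] in (True, "true") for answer in example["answers"])
)

print(next(iter(with_accepted))["name"])
```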
#### Page
This level returns a list of questions present on the same page. This is mostly useful for FAQs since CQAs already have one question per page.
```python
load_dataset("clips/mqa", level="page")
```
#### Domain
This level returns a list of pages present on the same web domain. This is a good way to cope with FAQ duplication by sampling one page per domain at each epoch; a sketch follows the example below.
```python
load_dataset("clips/mqa", level="domain")
```
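A hedged sketch of that sampling strategy (the exact structure of a domain record, including the `pages` field name, is an assumption to adapt to the real schema):
```python
import random

from datasets import load_dataset

# Sample one page per web domain each epoch to limit the weight of
# domains that duplicate the same FAQ across many pages.
domains = load_dataset("clips/mqa", scope="faq", language="en", level="domain")

def one_page_per_domain(split, seed):
    rng = random.Random(seed)
    for domain in split:
        yield rng.choice(domain["pages"])  # "pages" is an assumed field name

for epoch in range(3):
    for page in one_page_per_domain(domains["train"], seed=epoch):
        pass  # feed `page` into your training loop here
```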
## Source Data
This section was adapted from the source data description of [OSCAR](https://huggingface.co/datasets/oscar#source-data)
Common Crawl is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected nofollow and robots.txt policies.
To construct MQA, we used the WARC files of Common Crawl.
## People
This dataset was developed by [Maxime De Bruyn](https://maximedb.vercel.app), Ehsan Lotfi, Jeska Buhmann and Walter Daelemans.
## Licensing Information
```
These data are released under this licensing scheme.
We do not own any of the text from which these data has been extracted.
We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply to legitimate requests by removing the affected sources from the next release of the corpus.
```
## Citation information
```
@inproceedings{de-bruyn-etal-2021-mfaq,
title = "{MFAQ}: a Multilingual {FAQ} Dataset",
author = "De Bruyn, Maxime and
Lotfi, Ehsan and
Buhmann, Jeska and
Daelemans, Walter",
booktitle = "Proceedings of the 3rd Workshop on Machine Reading for Question Answering",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.mrqa-1.1",
pages = "1--13",
}
``` |
Gustavosta/Stable-Diffusion-Prompts | Gustavosta | 2022-09-18T22:38:59Z | 9,707 | 483 | [
"annotations_creators:no-annotation",
"language_creators:found",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2022-09-18T12:13:15Z | null | ---
license:
- unknown
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
source_datasets:
- original
---
# Stable Diffusion Dataset
This is a set of about 80,000 prompts filtered and extracted from the image finder for Stable Diffusion: "[Lexica.art](https://lexica.art/)". It was a little difficult to extract the data, since the search engine still doesn't have a public API that isn't protected by Cloudflare.
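A minimal loading sketch (the split layout and the prompt column name are assumptions; inspect the loaded dataset to confirm them):
```python
from datasets import load_dataset

# Load the prompts and inspect the structure; splits and column names
# should be verified against the actual files on the Hub.
ds = load_dataset("Gustavosta/Stable-Diffusion-Prompts")
print(ds)              # available splits and columns
print(ds["train"][0])  # first prompt record
```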
If you want to test the model with a demo, you can go to: "[spaces/Gustavosta/MagicPrompt-Stable-Diffusion](https://huggingface.co/spaces/Gustavosta/MagicPrompt-Stable-Diffusion)".
If you want to see the model, go to: "[Gustavosta/MagicPrompt-Stable-Diffusion](https://huggingface.co/Gustavosta/MagicPrompt-Stable-Diffusion)". |
Graphcore/wikipedia-bert-128 | Graphcore | 2022-09-07T14:42:32Z | 19,338 | 1 | [
"language:en",
"license:cc-by-sa-3.0",
"size_categories:10M<n<100M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2022-03-02T23:29:22Z | null | ---
language:
- en
license:
- cc-by-sa-3.0
--- |
CodedotAI/code_clippy_github | CodedotAI | 2022-08-05T02:57:36Z | 5,898 | 16 | [
"task_ids:language-modeling",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"language:code",
"license:mit",
"size_categories:1M<n<10M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2107.03374",
"region:us"
] | [
"sequence-modeling"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language: ["code"]
license:
- mit
multilinguality:
- multilingual
pretty_name: code-clippy-github-code
size_categories:
- unknown
source_datasets: []
task_categories:
- sequence-modeling
task_ids:
- language-modeling
---
# Code Clippy Github Dataset
## Dataset Description
The Code Clippy dataset consists of various public codebases from GitHub in 22 programming languages with 23 extensions totaling about 16 TB of data when uncompressed. The dataset was created from the public GitHub dataset on Google BigQuery.
### How to use it
This dataset is pretty large, so please use the streaming parameter from the `datasets` library as seen below:
```python
from datasets import load_dataset
ds = load_dataset("CodedotAI/code_clippy_github", streaming=True)
```
## Data Structure
### Data Instances
```python
{
'code_text': " a = mc^2",
'repo_name': 'NotEinstein',
'file_path': 'root/users/einstein.py',
'language': 'Python',
'license': 'isc',
'size': 2
}
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|code_text|string|string of the source code contained in the code file|
|repo_name|string|name of the GitHub repository|
|file_path|string|path of the code file within the repository |
|language|string|programming language used in the file inferred by the file extension|
|license|string|license of GitHub repository|
|size|int|size of source file in bytes|
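As a hedged example of putting these fields to work (the license filter below is illustrative, not an official recommendation):

```python
from datasets import load_dataset

# Stream the dataset and keep only permissively licensed Python files,
# using the field names documented in the table above.
ds = load_dataset("CodedotAI/code_clippy_github", streaming=True, split="train")

PERMISSIVE = {"mit", "apache-2.0", "bsd-2-clause", "bsd-3-clause", "isc"}

python_files = (
    example for example in ds
    if example["language"] == "Python" and example["license"] in PERMISSIVE
)

first = next(iter(python_files))
print(first["repo_name"], first["file_path"], first["size"])
```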
### Data Splits
Only a train split is provided in this dataset.
## Languages
The dataset contains 22 programming languages with over 23 extensions:
```python
{
"C": [".c"],
"C#": [".cs"],
"C++": [".cpp"],
"CSS": [".css"],
"Dart" : [".dart"],
"GO": [".go"],
"HTML":[".html"],
"Java": [".java"],
"JavaScript": [".js"],
"Jupyter Notebooks (Python)": [".ipynb"],
"Kotlin" : [".kt"],
"Lisp" : [".lisp"],
"Matlab" : [".m"],
"PHP": [".php"],
"Perl": [".pl"],
"Python": [".py"],
"R" : [".r"],
"Ruby": [".rb"],
"Rust": [".rs"],
"SQL": [".sql"],
"Shell": [".sh"],
"Swift" : [".swift"],
"TypeScript": [".ts"],
}
```
## Licenses
Each example is also annotated with the license of the associated repository. There are in total 15 licenses:
```python
[
'mit',
'apache-2.0',
'gpl-2.0',
'gpl-3.0',
'bsd-3-clause',
'bsd-2-clause',
'unlicense',
'agpl-3.0',
'lgpl-3.0',
'cc0-1.0',
'epl-1.0',
'lgpl-2.1',
'mpl-2.0',
'isc',
'artistic-2.0'
]
```
## Dataset Statistics
The dataset is about 18 TB uncompressed. We are currently working on processing it and applying further filtering.
## Dataset Creation
The dataset was created in two steps:
1. Files with the extensions given in the list above were retrieved from the GitHub dataset on BigQuery using the following query:
```sql
SELECT
f.id, f.repo_name, f.path, content.copies, content.size, content.content, lic.license
FROM
`bigquery-public-data.github_repos.files` AS f
JOIN
`bigquery-public-data.github_repos.contents` as content
ON
f.id = content.id
JOIN
`bigquery-public-data.github_repos.licenses` AS lic
ON
f.repo_name = lic.repo_name
WHERE
NOT content.binary
AND (
(f.path LIKE '%.py') OR (f.path LIKE '%.java') OR (f.path LIKE '%.js')
OR (f.path LIKE '%.html') OR (f.path LIKE '%.lisp') OR (f.path LIKE '%.sh')
OR (f.path LIKE '%.r') OR (f.path LIKE '%.pl') OR (f.path LIKE '%.css')
OR (f.path LIKE '%.sql') OR (f.path LIKE '%.c') OR (f.path LIKE '%.cpp')
OR (f.path LIKE '%.ts') OR (f.path LIKE '%.cs') OR (f.path LIKE '%.go')
OR (f.path LIKE '%.rs') OR (f.path LIKE '%.swift') OR (f.path LIKE '%.php')
OR (f.path LIKE '%.dart') OR (f.path LIKE '%.kt') OR (f.path LIKE '%.m')
OR (f.path LIKE '%.rb') OR (f.path LIKE '%.ipynb')
)
-- make sure we don't go above 1 megabyte
AND (content.size BETWEEN 1024 AND 1000000)
```
2. Currently, our CodedotAI team is working on adding additional filters and cleaning this dataset.
### Personal and Sensitive Information
Since this data was collected from public repositories, there is potential for personal and sensitive information to be included in the data through developers accidentally or deliberately uploading their secret keys, passwords, API keys, emails, etc.
## Considerations for Using the Data
### Social Impact of Dataset
The paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org/abs/2107.03374) from OpenAI has a good discussion on what the impact of a large language model trained on code could be. Therefore, some parts of their discussion are highlighted here as it pertains to this dataset and models that may be trained from it. **As well as some differences in views from the paper, particularly around legal implications**.
1. **Over-reliance:** A language model trained on large datasets such as this one for the task of autogenerating code may generate plausible solutions that appear correct but are not necessarily so. Not properly evaluating the generated code may have negative consequences such as the introduction of bugs or security vulnerabilities. Therefore, it is important that users are aware of the limitations and potential negative consequences of using a language model trained on this dataset.
2. **Economic and labor market impacts:** Large language models trained on large code datasets such as this one that are capable of generating high-quality code have the potential to automate part of the software development process. This may negatively impact software developers. However, as discussed in the paper and shown in the Summary Report for software developers from [O*NET OnLine](https://www.onetonline.org/link/summary/15-1252.00), developers don't just write software.
3. **Security implications:** No filtering or checking of vulnerabilities or buggy code was performed. This means that the dataset may contain code that may be malicious or contain vulnerabilities. Therefore, any model trained on this dataset may generate vulnerable, buggy, or malicious code. In safety-critical software, this could lead to software that may work improperly and could result in serious consequences depending on the software. Additionally, a model trained on this dataset may be used to generate malicious code on purpose in order to perform ransomware or other such attacks.
4. **Legal implications:** No filtering was performed on licensed code. This means that the dataset may contain restrictively licensed code. As discussed in the paper, public GitHub repositories may fall under "fair use." However, there have been few if any previous cases of such usage of licensed, publicly available code. Therefore, any model trained on this dataset may be required to obey license terms that align with the software it was trained on, such as GPL-3.0, which is why we purposefully put this dataset under the GPL-3.0 license. The legal ramifications of using a language model trained on this dataset are unclear.
### v1.0
- The query was executed on _February 1, 2022, 12:15:59 AM EST_
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/about/). We would also like to thank [Dr. Razvan Bunescu](https://webpages.charlotte.edu/rbunescu/) and [The College of Computing and Informatics at UNC Charlotte](https://cci.charlotte.edu/) for their generous contributions to this project, specifically in funding the BigQuery and Google Cloud Storage costs. We would also like to thank the [codeparrot team at Hugging Face](https://huggingface.co/codeparrot) for open-sourcing their documentation on [github-code](https://huggingface.co/datasets/codeparrot/github-code), which we used as the basis for the README of this dataset. For another, similar dataset, please check out github-code! |
CALM/arwiki | CALM | 2022-08-01T16:37:23Z | 55,435 | 5 | [
"multilinguality:monolingual",
"language:ar",
"license:unknown",
"size_categories:10M<n<100M",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2022-03-02T23:29:22Z | null | ---
pretty_name: Wikipedia Arabic dumps dataset.
language:
- ar
license:
- unknown
multilinguality:
- monolingual
---
# Arabic Wiki Dataset
## Dataset Summary
This dataset was extracted using the [`wikiextractor`](https://github.com/attardi/wikiextractor) tool from [Wikipedia Arabic pages](https://dumps.wikimedia.org/arwiki/).
## Supported Tasks and Leaderboards
Intended to train **Arabic** language models on MSA (Modern Standard Arabic).
## Dataset Structure
The dataset is structured into 2 folders:
- `arwiki_20211213_txt`: the dataset is divided into subfolders, each of which contains no more than 100 documents.
- `arwiki_20211213_txt_single`: all documents merged together in a single txt file.
## Dataset Statistics
#### Extracts from **December 13, 2021**:
| documents | vocabulary | words |
| --- | --- | --- |
| 1,136,455 | 5,446,560 | 175,566,016 |
## Usage
Load all dataset from the single txt file:
```python
from datasets import load_dataset

load_dataset('CALM/arwiki',
             data_files='arwiki_2021_txt_single/arwiki_20211213.txt')

# OR with streaming
load_dataset('CALM/arwiki',
             data_files='arwiki_2021_txt_single/arwiki_20211213.txt',
             streaming=True)
```
Load a smaller subset from the individual txt files:
```python
from datasets import load_dataset

load_dataset('CALM/arwiki',
             data_files='arwiki_2021_txt/AA/arwiki_20211213_1208.txt')

# OR with streaming
load_dataset('CALM/arwiki',
             data_files='arwiki_2021_txt/AA/arwiki_20211213_1208.txt',
             streaming=True)
``` |
strombergnlp/broad_twitter_corpus | strombergnlp | 2022-07-01T15:46:36Z | 437 | 5 | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"region:us"
] | [
"token-classification"
] | 2022-04-28T09:58:09Z | 1 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: broad-twitter-corpus
pretty_name: Broad Twitter Corpus
---
# Dataset Card for broad_twitter_corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://github.com/GateNLP/broad_twitter_corpus](https://github.com/GateNLP/broad_twitter_corpus)
- **Repository:** [https://github.com/GateNLP/broad_twitter_corpus](https://github.com/GateNLP/broad_twitter_corpus)
- **Paper:** [http://www.aclweb.org/anthology/C16-1111](http://www.aclweb.org/anthology/C16-1111)
- **Leaderboard:** [Named Entity Recognition on Broad Twitter Corpus](https://paperswithcode.com/sota/named-entity-recognition-on-broad-twitter)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
### Dataset Summary
This is the Broad Twitter corpus, a dataset of tweets collected over stratified times, places and social uses. The goal is to represent a broad range of activities, giving a dataset more representative of the language used in this hardest of social media formats to process. Further, the BTC is annotated for named entities.
See the paper, [Broad Twitter Corpus: A Diverse Named Entity Recognition Resource](http://www.aclweb.org/anthology/C16-1111), for details.
### Supported Tasks and Leaderboards
* Named Entity Recognition
* On PWC: [Named Entity Recognition on Broad Twitter Corpus](https://paperswithcode.com/sota/named-entity-recognition-on-broad-twitter)
### Languages
English from UK, US, Australia, Canada, Ireland, New Zealand; `bcp47:en`
## Dataset Structure
### Data Instances
Feature |Count
---|---:
Documents |9 551
Tokens |165 739
Person entities |5 271
Location entities |3 114
Organization entities |3 732
### Data Fields
Each tweet contains an ID, a list of tokens, and a list of NER tags
- `id`: a `string` feature.
- `tokens`: a `list` of `strings`
- `ner_tags`: a `list` of class IDs (`int`s) representing the NER class:
```
0: O
1: B-PER
2: I-PER
3: B-ORG
4: I-ORG
5: B-LOC
6: I-LOC
```
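A short sketch for decoding the integer tags back into the string labels above (this assumes the loader exposes `ner_tags` as a sequence of `ClassLabel`s, which is worth verifying):
```python
from datasets import load_dataset

# Decode integer NER tags into their string labels.
ds = load_dataset("strombergnlp/broad_twitter_corpus", split="train")
label_names = ds.features["ner_tags"].feature.names  # ["O", "B-PER", ...]

example = ds[0]
decoded = [
    (token, label_names[tag])
    for token, tag in zip(example["tokens"], example["ner_tags"])
]
print(decoded)
```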
### Data Splits
Section|Region|Collection period|Description|Annotators|Tweet count
---|---|---|---|---|---:
A | UK| 2012.01| General collection |Expert| 1000
B |UK |2012.01-02 |Non-directed tweets |Expert |2000
E |Global| 2014.07| Related to MH17 disaster| Crowd & expert |200
F |Stratified |2009-2014| Twitterati |Crowd & expert |2000
G |Stratified| 2011-2014| Mainstream news| Crowd & expert| 2351
H |Non-UK| 2014 |General collection |Crowd & expert |2000
The most varied parts of the BTC are sections F and H. However, each of the remaining four sections has some specific, readily-identifiable bias. So, we propose using half of Section H for evaluation and leaving the other half in the training data. Section H should be partitioned in the order of the JSON-format lines. Note that the CoNLL-format data is readily reconstructible from the JSON format, which is the authoritative data format from which the others are derived.
**Test**: Section F
**Development**: Section H (the paper says "second half of Section H" but ordinality could be ambiguous, so it all goes in. Bonne chance)
**Training**: everything else
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Creative Commons Attribution 4.0 International (CC BY 4.0)
### Citation Information
```
@inproceedings{derczynski2016broad,
title={Broad twitter corpus: A diverse named entity recognition resource},
author={Derczynski, Leon and Bontcheva, Kalina and Roberts, Ian},
booktitle={Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers},
pages={1169--1179},
year={2016}
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
|
strombergnlp/named_timexes | strombergnlp | 2022-07-01T15:44:08Z | 27 | 2 | [
"task_categories:token-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"region:us"
] | [
"token-classification"
] | 2022-05-11T17:10:51Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Named Temporal Expressions dataset
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- token-classification
task_ids: []
---
# Dataset Card for named_timexes
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** [https://aclanthology.org/R13-1015/](https://aclanthology.org/R13-1015/)
- **Leaderboard:**
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
### Dataset Summary
This is a dataset annotated for _named temporal expression_ chunks.

The commonest temporal expressions typically contain date and time words, like April or hours. Research into recognising and interpreting these typical expressions is mature in many languages. However, there is a class of expressions that are less typical, very varied, and difficult to automatically interpret. These indicate dates and times, but are harder to detect because they often do not contain time words and are not used frequently enough to appear in conventional temporally-annotated corpora – for example *Michaelmas* or *Vasant Panchami*.

For more details see [Recognising and Interpreting Named Temporal Expressions](https://aclanthology.org/R13-1015.pdf)
### Supported Tasks and Leaderboards
* Task: Named Entity Recognition (temporal expressions)
### Languages
English
## Dataset Structure
### Data Instances
### Data Fields
Each tweet contains an ID, a list of tokens, and a list of timex chunk flags.
- `id`: a `string` feature.
- `tokens`: a `list` of `strings` .
- `ntimex_tags`: a `list` of class IDs (`int`s) for whether a token is out-of-timex or in a timex chunk.
```
0: O
1: T
```
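A hedged sketch for turning the binary flags into contiguous timex chunks (field names follow the description above):
```python
from datasets import load_dataset

ds = load_dataset("strombergnlp/named_timexes", split="train")

def timex_chunks(tokens, flags):
    # Group consecutive tokens flagged 1 (T) into timex chunk strings.
    span = []
    for token, flag in zip(tokens, flags):
        if flag == 1:
            span.append(token)
        elif span:
            yield " ".join(span)
            span = []
    if span:
        yield " ".join(span)

example = ds[0]
print(list(timex_chunks(example["tokens"], example["ntimex_tags"])))
```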
### Data Splits
Section|Token count
---|---:
train|87 050
test|30 010
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Creative Commons Attribution 4.0 International (CC BY 4.0)
### Citation Information
```
@inproceedings{brucato-etal-2013-recognising,
title = "Recognising and Interpreting Named Temporal Expressions",
author = "Brucato, Matteo and
Derczynski, Leon and
Llorens, Hector and
Bontcheva, Kalina and
Jensen, Christian S.",
booktitle = "Proceedings of the International Conference Recent Advances in Natural Language Processing {RANLP} 2013",
month = sep,
year = "2013",
address = "Hissar, Bulgaria",
publisher = "INCOMA Ltd. Shoumen, BULGARIA",
url = "https://aclanthology.org/R13-1015",
pages = "113--121",
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
|
jet-universe/jetclass | jet-universe | 2022-05-27T19:00:45Z | 30 | 4 | [
"license:mit",
"arxiv:2202.03772",
"region:us"
] | [] | 2022-04-05T07:32:22Z | 1 | ---
license: mit
---
# Dataset Card for JetClass
## Table of Contents
- [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/jet-universe/particle_transformer
- **Paper:** https://arxiv.org/abs/2202.03772
- **Leaderboard:**
- **Point of Contact:** [Huilin Qu](mailto:[email protected])
### Dataset Summary
JetClass is a large and comprehensive dataset to advance deep learning for jet tagging. The dataset consists of 100 million jets for training, with 10 different types of jets. The jets in this dataset generally fall into two categories:
* The background jets are initiated by light quarks or gluons (q/g) and are ubiquitously produced at the LHC.
* The signal jets are those arising either from the top quarks (t), or from the W, Z or Higgs (H) bosons. For top quarks and Higgs bosons, we further consider their different decay modes as separate types, because the resulting jets have rather distinct characteristics and are often tagged individually.
Jets in this dataset are simulated with standard Monte Carlo event generators used by LHC experiments. The production and decay of the top quarks and the W, Z and Higgs bosons are generated with MADGRAPH5_aMC@NLO. We use PYTHIA to evolve the produced particles, i.e., performing parton showering and hadronization, and produce the final outgoing particles. To be close to realistic jets reconstructed at the ATLAS or CMS experiment, detector effects are simulated with DELPHES using the CMS detector configuration provided in DELPHES. In addition, the impact parameters of electrically charged particles are smeared to match the resolution of the CMS tracking detector. Jets are clustered from DELPHES E-Flow objects with the anti-kT algorithm using a distance parameter R = 0.8. Only jets with transverse momentum in 500–1000 GeV and pseudorapidity |η| < 2 are considered. For signal jets, only the “high-quality” ones that fully contain the decay products of initial particles are included.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
If you use the JetClass dataset, please cite:
```
@article{Qu:2022mxj,
author = "Qu, Huilin and Li, Congqiao and Qian, Sitian",
title = "{Particle Transformer for Jet Tagging}",
eprint = "2202.03772",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
month = "2",
year = "2022"
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset.
|
DMetaSoul/chinese-semantic-textual-similarity | DMetaSoul | 2022-04-02T10:38:47Z | 37 | 17 | [
"license:apache-2.0",
"region:us"
] | [] | 2022-04-02T10:10:43Z | 1 | ---
license: apache-2.0
---
To fine-tune and evaluate BERT-like pre-trained models for better text representations, we collected and organized open-source datasets for semantic textual similarity (STS), natural language inference (NLI), question matching (QMC), and relevance, introduced below:
| Type | Dataset | Description | Size |
| -------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | -------------------------------------------------- |
| **General domain** | [OCNLI](https://www.cluebenchmarks.com/introduce.html) | A native Chinese natural language inference dataset, and the first large-scale Chinese NLI dataset built from original (non-translated) Chinese. OCNLI is part of the Chinese Language Understanding Evaluation benchmark (CLUE). | **Train**: 50437, **Dev**: 2950 |
| | [CMNLI](https://github.com/pluto-junzeng/CNSD) | Translated from the English natural language inference datasets XNLI and MNLI; formerly part of the CLUE benchmark, now replaced by OCNLI. | **Train**: 391783, **Dev**: 12241 |
| | [CSNLI](https://github.com/pluto-junzeng/CNSD) | Translated from the English natural language inference dataset SNLI. | **Train**: 545833, **Dev**: 9314, **Test**: 9176 |
| | [STS-B-Chinese](https://github.com/pluto-junzeng/CNSD) | Translated from the English semantic similarity dataset STSbenchmark. | **Train**: 5231, **Dev**: 1458, **Test**: 1361 |
| | [PAWS-X](https://www.luge.ai/#/luge/dataDetail?id=16) | A paraphrase (meaning) matching dataset characterized by high lexical overlap, mainly testing a model's understanding of syntactic structure. | **Train**: 49401, **Dev**: 2000, **Test**: 2000 |
| | [PKU-Paraphrase-Bank](https://github.com/pkucoli/PKU-Paraphrase-Bank/) | A Chinese sentence paraphrase dataset, i.e. the same meaning expressed in a different way. | 509,832 sentence pairs in total |
| **Question matching** | [LCQMC](https://www.luge.ai/#/luge/dataDetail?id=14) | A large-scale Chinese question matching dataset from the Baidu Knows domain, built from user questions across different domains on Baidu Knows. | **Train**: 238766, **Dev**: 8802, **Test**: 12500 |
| | [BQCorpus](https://www.luge.ai/#/luge/dataDetail?id=15) | Question matching data from the banking and finance domain, with question pairs extracted from one year of online banking system logs; currently the largest question matching dataset in the banking domain. | **Train**: 100000, **Dev**: 10000, **Test**: 10000 |
| | [AFQMC](https://www.cluebenchmarks.com/introduce.html) | A question matching dataset (anonymized) from real financial business scenarios at Ant Financial; part of the CLUE benchmark. | **Train**: 34334, **Dev**: 4316 |
| | [DuQM](https://www.luge.ai/#/luge/dataDetail?id=27) | A question matching evaluation dataset (labels not publicly released), which is [part](https://github.com/baidu/DuReader/tree/master/DuQM) of the Baidu large-scale reading comprehension dataset (DuReader). | 50,000 sentence pairs in total |
| **Dialogue and search** | [BUSTM: OPPO-xiaobu](https://www.luge.ai/#/luge/dataDetail?id=28) | Built from real user interactions in chit-chat, intelligent customer service, audio/video entertainment, information lookup, and other domains, after anonymizing user information and filtering by similarity. This dialogue short text matching dataset features short, highly colloquial texts and hard cases where texts are highly similar yet semantically different. | **Train**: 167173, **Dev**: 10000 |
| | [QBQTC](https://github.com/CLUEbenchmark/QBQTC) | The QQ Browser Query Title Corpus (QBQTC), a learning-to-rank (LTR) dataset built by the QQ Browser search engine for large-scale search scenarios, annotated along dimensions such as relevance, authority, content quality, and timeliness; widely used in search engine applications. (Relevance labels: 0, poorly related; 1, somewhat related; 2, highly related.) | **Train**: 180000, **Dev**: 20000, **Test**: 5000 |
*The datasets above were collected mainly from [CLUE](https://www.cluebenchmarks.com/introduce.html) (the Chinese Language Understanding Evaluation benchmark), [SimCLUE](https://github.com/CLUEbenchmark/SimCLUE) (which aggregates many open-source text similarity datasets), and the text similarity datasets of [Baidu Qianyan](https://www.luge.ai/#/).*
The following **evaluation benchmark** was built from the datasets collected above:
| Name | Size | Type |
| ---------------------- | ----- | ------------- |
| **Chinese-STS-B-dev** | 1458 | label=0.0~1.0 |
| **Chinese-STS-B-test** | 1361 | label=0.0~1.0 |
| **afqmc-dev** | 4316 | label=0,1 |
| **lcqmc-dev** | 8802 | label=0,1 |
| **bqcorpus-dev** | 10000 | label=0,1 |
| **pawsx_dev** | 2000 | label=0,1 |
| **oppo-xiaobu-dev** | 10000 | label=0,1 |
*TODO: the datasets collected so far need further expansion in both quantity and diversity to more faithfully reflect the performance of representation models.*
|
vlsb/autotrain-data-security-texts-classification-distilroberta | vlsb | 2022-03-30T20:48:56Z | 15 | 4 | [
"task_categories:text-classification",
"region:us"
] | [
"text-classification"
] | 2022-03-30T20:48:23Z | 1 | ---
task_categories:
- text-classification
---
# AutoTrain Dataset for project: security-texts-classification-distilroberta
## Dataset Description
This dataset has been automatically processed by AutoTrain for project security-texts-classification-distilroberta.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "Netgear launches Bug Bounty Program for Hacker; Offering up to $15,000 in Rewards It might be the ea[...]",
"target": 0
},
{
"text": "Popular Malware Families Using 'Process Doppelg\u00e4nging' to Evade Detection The fileless code injectio[...]",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=2, names=['irrelevant', 'relevant'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 780 |
| valid | 196 |
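A minimal loading sketch (the split names follow the table above; the column names follow the fields section):
```python
from datasets import load_dataset

# Load the AutoTrain-processed splits; names follow the tables above.
ds = load_dataset("vlsb/autotrain-data-security-texts-classification-distilroberta")

example = ds["train"][0]
print(example["target"], example["text"][:80])
```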
|
transformersbook/codeparrot | transformersbook | 2022-02-05T16:15:40Z | 355 | 57 | [
"size_categories:10M<n<100M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us",
"python",
"code"
] | [] | 2022-03-02T23:29:22Z | 1 | ---
tags:
- python
- code
---
# CodeParrot 🦜 Dataset
## What is it?
This is the full CodeParrot dataset. It contains Python files used to train the code generation model in Chapter 10: Training Transformers from Scratch in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/10_transformers-from-scratch.ipynb).
## Creation
It was created with the GitHub dataset available via Google's BigQuery. It contains approximately 22 million Python files and totals about 180 GB (50 GB compressed). The SQL query used to create the dataset is the following:
```sql
SELECT
f.repo_name, f.path, c.copies, c.size, c.content, l.license
FROM
`bigquery-public-data.github_repos.files` AS f
JOIN
`bigquery-public-data.github_repos.contents` AS c
ON
f.id = c.id
JOIN
`bigquery-public-data.github_repos.licenses` AS l
ON
f.repo_name = l.repo_name
WHERE
NOT c.binary
AND ((f.path LIKE '%.py')
AND (c.size BETWEEN 1024 AND 1048575))
```
## Duplication
Note that about 70% of the dataset is duplicated. If you use the dataset, make sure to deal with duplicates appropriately. See [codeparrot-clean](https://huggingface.co/datasets/lvwerra/codeparrot-clean) for a deduplicated version of this dataset.
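A minimal, hedged sketch of streaming exact deduplication (hashing the `content` field named in the SQL query above; this is illustrative, not the procedure used to build codeparrot-clean):
```python
import hashlib

from datasets import load_dataset

# Stream the dataset and drop files whose exact content was already seen.
ds = load_dataset("transformersbook/codeparrot", streaming=True, split="train")

seen = set()

def deduplicated(stream):
    for example in stream:
        digest = hashlib.md5(example["content"].encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            yield example

# Usage: iterate over deduplicated(ds) instead of ds when training.
``` |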