alexkueck committed
Commit ea48328 · 1 Parent(s): 508e22f

Delete README.md

Files changed (1):
  1. README.md +0 -311
README.md DELETED
@@ -1,311 +0,0 @@
- ---
- annotations_creators:
- - no-annotation
- language_creators:
- - crowdsourced
- language:
- - en
- license:
- - cc-by-sa-3.0
- - gfdl
- multilinguality:
- - monolingual
- paperswithcode_id: wikitext-2
- pretty_name: WikiText
- size_categories:
- - 1M<n<10M
- source_datasets:
- - original
- task_categories:
- - text-generation
- - fill-mask
- task_ids:
- - language-modeling
- - masked-language-modeling
- dataset_info:
- - config_name: wikitext-103-v1
-   features:
-   - name: text
-     dtype: string
-   splits:
-   - name: test
-     num_bytes: 1295579
-     num_examples: 4358
-   - name: train
-     num_bytes: 545142639
-     num_examples: 1801350
-   - name: validation
-     num_bytes: 1154755
-     num_examples: 3760
-   download_size: 190229076
-   dataset_size: 547592973
- - config_name: wikitext-2-v1
-   features:
-   - name: text
-     dtype: string
-   splits:
-   - name: test
-     num_bytes: 1270951
-     num_examples: 4358
-   - name: train
-     num_bytes: 10918134
-     num_examples: 36718
-   - name: validation
-     num_bytes: 1134127
-     num_examples: 3760
-   download_size: 4475746
-   dataset_size: 13323212
- - config_name: wikitext-103-raw-v1
-   features:
-   - name: text
-     dtype: string
-   splits:
-   - name: test
-     num_bytes: 1305092
-     num_examples: 4358
-   - name: train
-     num_bytes: 546501673
-     num_examples: 1801350
-   - name: validation
-     num_bytes: 1159292
-     num_examples: 3760
-   download_size: 191984949
-   dataset_size: 548966057
- - config_name: wikitext-2-raw-v1
-   features:
-   - name: text
-     dtype: string
-   splits:
-   - name: test
-     num_bytes: 1305092
-     num_examples: 4358
-   - name: train
-     num_bytes: 11061733
-     num_examples: 36718
-   - name: validation
-     num_bytes: 1159292
-     num_examples: 3760
-   download_size: 4721645
-   dataset_size: 13526117
- ---
-
- # Dataset Card for "wikitext"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/)
- - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Paper:** [Pointer Sentinel Mixture Models](https://arxiv.org/abs/1609.07843)
- - **Point of Contact:** [Stephen Merity](mailto:[email protected])
- - **Size of downloaded dataset files:** 391.41 MB
- - **Size of the generated dataset:** 1.12 GB
- - **Total amount of disk used:** 1.52 GB
-
- ### Dataset Summary
-
- The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified
- Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License.
-
- Compared to the preprocessed version of Penn Treebank (PTB), WikiText-2 is over 2 times larger and WikiText-103 is over
- 110 times larger. The WikiText dataset also features a far larger vocabulary and retains the original case, punctuation
- and numbers - all of which are removed in PTB. As it is composed of full articles, the dataset is well suited for models
- that can take advantage of long-term dependencies.
-
- Each subset comes in two different variants (see the loading sketch below):
- - Raw (for character-level work) contains the raw tokens, before the addition of the <unk> (unknown) tokens.
- - Non-raw (for word-level work) contains only the tokens in its vocabulary (wiki.train.tokens, wiki.valid.tokens, and wiki.test.tokens).
-   The out-of-vocabulary tokens have been replaced with the <unk> token.
-
-
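Both variants load through the same interface; a minimal sketch, assuming the `datasets` library is installed and the canonical `wikitext` dataset on the Hugging Face Hub is used:

```python
# Minimal sketch: load the raw (character-level) and non-raw (word-level)
# variants of WikiText-2 via the Hugging Face `datasets` library.
from datasets import load_dataset

raw = load_dataset("wikitext", "wikitext-2-raw-v1")    # raw tokens, no <unk> substitution
tokenized = load_dataset("wikitext", "wikitext-2-v1")  # out-of-vocabulary tokens replaced with <unk>

print(raw)                               # DatasetDict with "train", "validation" and "test" splits
print(tokenized["train"][10]["text"])    # one line of the tokenized training text
```

The WikiText-103 configurations (`wikitext-103-raw-v1`, `wikitext-103-v1`) are selected the same way, only the download is larger.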
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### wikitext-103-raw-v1
-
- - **Size of downloaded dataset files:** 191.98 MB
- - **Size of the generated dataset:** 549.42 MB
- - **Total amount of disk used:** 741.41 MB
-
- An example of 'validation' looks as follows.
- ```
- This example was too long and was cropped:
-
- {
-     "text": "\" The gold dollar or gold one @-@ dollar piece was a coin struck as a regular issue by the United States Bureau of the Mint from..."
- }
- ```
-
- #### wikitext-103-v1
-
- - **Size of downloaded dataset files:** 190.23 MB
- - **Size of the generated dataset:** 548.05 MB
- - **Total amount of disk used:** 738.27 MB
-
- An example of 'train' looks as follows.
- ```
- This example was too long and was cropped:
-
- {
-     "text": "\" Senjō no Valkyria 3 : <unk> Chronicles ( Japanese : 戦場のヴァルキュリア3 , lit . Valkyria of the Battlefield 3 ) , commonly referred to..."
- }
- ```
-
- #### wikitext-2-raw-v1
-
- - **Size of downloaded dataset files:** 4.72 MB
- - **Size of the generated dataset:** 13.54 MB
- - **Total amount of disk used:** 18.26 MB
-
- An example of 'train' looks as follows.
- ```
- This example was too long and was cropped:
-
- {
-     "text": "\" The Sinclair Scientific Programmable was introduced in 1975 , with the same case as the Sinclair Oxford . It was larger than t..."
- }
- ```
-
- #### wikitext-2-v1
-
- - **Size of downloaded dataset files:** 4.48 MB
- - **Size of the generated dataset:** 13.34 MB
- - **Total amount of disk used:** 17.82 MB
-
- An example of 'train' looks as follows.
- ```
- This example was too long and was cropped:
-
- {
-     "text": "\" Senjō no Valkyria 3 : <unk> Chronicles ( Japanese : 戦場のヴァルキュリア3 , lit . Valkyria of the Battlefield 3 ) , commonly referred to..."
- }
- ```
-
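Each record is a single `text` string, so an individual example can be inspected directly; a minimal sketch, reusing the `wikitext-2-raw-v1` configuration from above:

```python
# Minimal sketch: peek at one record; every example is just a {"text": ...} mapping.
from datasets import load_dataset

ds = load_dataset("wikitext", "wikitext-2-raw-v1")
example = ds["train"][0]
print(example.keys())        # dict_keys(['text'])
print(example["text"][:80])  # first 80 characters of the raw text (may be an empty line)
```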
- ### Data Fields
-
- The data fields are the same across all splits.
-
- #### wikitext-103-raw-v1
- - `text`: a `string` feature.
-
- #### wikitext-103-v1
- - `text`: a `string` feature.
-
- #### wikitext-2-raw-v1
- - `text`: a `string` feature.
-
- #### wikitext-2-v1
- - `text`: a `string` feature.
-
- ### Data Splits
-
- | name                | train   | validation | test |
- |---------------------|--------:|-----------:|-----:|
- | wikitext-103-raw-v1 | 1801350 |       3760 | 4358 |
- | wikitext-103-v1     | 1801350 |       3760 | 4358 |
- | wikitext-2-raw-v1   |   36718 |       3760 | 4358 |
- | wikitext-2-v1       |   36718 |       3760 | 4358 |
-
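The split sizes above can be checked against the loaded `DatasetDict`; a minimal sketch, again assuming the canonical `wikitext` dataset on the Hub:

```python
# Minimal sketch: print the number of examples per split for each configuration.
from datasets import load_dataset

for config in ["wikitext-103-raw-v1", "wikitext-103-v1", "wikitext-2-raw-v1", "wikitext-2-v1"]:
    ds = load_dataset("wikitext", config)
    sizes = {split: ds[split].num_rows for split in ds}
    print(config, sizes)  # expected to match the table above
```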
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- The dataset is available under the [Creative Commons Attribution-ShareAlike License (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/).
-
- ### Citation Information
-
- ```
- @misc{merity2016pointer,
-     title={Pointer Sentinel Mixture Models},
-     author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher},
-     year={2016},
-     eprint={1609.07843},
-     archivePrefix={arXiv},
-     primaryClass={cs.CL}
- }
- ```
-
-
- ### Contributions
-
- Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.