Making README more robust and verbose #4
opened by reach-vb (HF staff)

README.md CHANGED

@@ -366,7 +366,7 @@ Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page

### Supported Tasks and Leaderboards

The results for models trained on the Common Voice datasets are available via the
-[🤗
+[🤗 Autoevaluate Leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=mozilla-foundation%2Fcommon_voice_11_0&only_verified=0&task=automatic-speech-recognition&config=ar&split=test&metric=wer)
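
If you want to score your own predictions against the same word error rate (WER) metric the leaderboard reports, here is a minimal sketch using the `evaluate` library (the prediction and reference strings below are made-up placeholders, not taken from the dataset):

```python
import evaluate

# Minimal sketch (not part of the original README): compute the word error
# rate used by the leaderboard from predicted vs. reference transcriptions.
wer_metric = evaluate.load("wer")

predictions = ["hello world", "good morning"]     # hypothetical model outputs
references = ["hello world", "good morning all"]  # hypothetical ground truth

print(wer_metric.compute(predictions=predictions, references=references))
```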

### Languages

@@ -374,6 +374,55 @@

```
Abkhaz, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hill Mari, Hindi, Hungarian, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Kurmanji Kurdish, Kyrgyz, Latvian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Norwegian Nynorsk, Odia, Persian, Polish, Portuguese, Punjabi, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Saraiki, Sardinian, Serbian, Slovak, Slovenian, Sorbian, Upper, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamil, Tatar, Thai, Tigre, Tigrinya, Toki Pona, Turkish, Twi, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh
```

### How to use

The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.

For example, to download the Hindi config, simply specify the corresponding language config name (i.e., "hi" for Hindi):

```python
from datasets import load_dataset

cv_11 = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train")
```

Using the `datasets` library, you can also stream the dataset on the fly by adding a `streaming=True` argument to the `load_dataset` call. Loading a dataset in streaming mode yields individual samples one at a time, rather than downloading the entire dataset to disk first.

```python
from datasets import load_dataset

cv_11 = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train", streaming=True)

print(next(iter(cv_11)))
```

*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local or streamed).

Local:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler

cv_11 = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train")
batch_sampler = BatchSampler(RandomSampler(cv_11), batch_size=32, drop_last=False)
dataloader = DataLoader(cv_11, batch_sampler=batch_sampler)
```

Streaming:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

cv_11 = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train", streaming=True)
dataloader = DataLoader(cv_11, batch_size=32)
```

To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
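
As a small illustration of that kind of preparation (a sketch added here, not part of the original README): Common Voice clips ship at a higher sampling rate than the 16 kHz that most speech models expect, and the `audio` column can be resampled lazily with `cast_column`:

```python
from datasets import load_dataset, Audio

# Sketch (assumption: you want 16 kHz input, as most ASR checkpoints do).
cv_11 = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train")
cv_11 = cv_11.cast_column("audio", Audio(sampling_rate=16_000))

sample = cv_11[0]
print(sample["audio"]["sampling_rate"])  # 16000 after casting
print(sample["sentence"])                # reference transcription for the clip
```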

### Example scripts

Train your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 11 with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
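
To give a feel for what those scripts do to the dataset before training, here is a rough sketch of the audio preprocessing step; the checkpoint name and column handling are illustrative assumptions, not the scripts' exact code:

```python
from datasets import load_dataset, Audio
from transformers import Wav2Vec2FeatureExtractor

# Illustrative sketch: the real example scripts also build a tokenizer from
# the transcriptions; only the audio -> input_values step is shown here.
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-xls-r-300m")

cv_11 = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train")
cv_11 = cv_11.cast_column("audio", Audio(sampling_rate=feature_extractor.sampling_rate))

def prepare(batch):
    audio = batch["audio"]
    # Turn the raw waveform into model input features; batch["sentence"] holds
    # the transcription a tokenizer would convert into CTC labels.
    batch["input_values"] = feature_extractor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_values[0]
    return batch

cv_11 = cv_11.map(prepare, remove_columns=["audio"])
```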

## Dataset Structure

### Data Instances