README.md CHANGED
@@ -376,9 +376,9 @@ Abkhaz, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basqu
 
 ### How to use
 
-The `datasets` library allows you to load and pre-process your dataset in pure Python at scale. No need to rely on decades old hacky shell scripts and C/C++ pre-processing scripts anymore. The minimalistic API
+The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. No need to rely on decades old, hacky shell scripts and C/C++ pre-processing scripts anymore. The minimalistic API ensures that you can plug-and-play this dataset in your existing Machine Learning workflow, with just a few lines of code.
 
-The entire dataset (or a particular split) can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
+The entire dataset (or a particular split) can be downloaded and prepared in one call to your local drive by using the `load_dataset` function. For example, to download the Hindi split, simply specify the corresponding language config name (i.e., "hi" for Hindi):
 ```python
 from datasets import load_dataset
 
@@ -394,7 +394,10 @@ cv_11 = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train"
 print(next(iter(cv_11)))
 ```
 
-*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your local/
+*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
+
+Local:
+
 ```python
 from datasets import load_dataset
 from torch.utils.data.sampler import BatchSampler, RandomSampler
@@ -404,7 +407,8 @@ batch_sampler = BatchSampler(RandomSampler(cv_11), batch_size=32, drop_last=Fals
 dataloader = DataLoader(cv_11, batch_sampler=batch_sampler)
 ```
 
-
+Streaming:
+
 ```python
 from datasets import load_dataset
 from torch.utils.data import DataLoader
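The two `DataLoader` patterns in the diff above (local with a `BatchSampler`, streaming via plain iteration) can be sketched self-contained. This is a minimal illustration only: the real Common Voice 11.0 split requires a download and Hugging Face authentication, so a toy in-memory list and a toy `IterableDataset` stand in for the loaded and streamed splits, respectively.

```python
from torch.utils.data import DataLoader, IterableDataset
from torch.utils.data.sampler import BatchSampler, RandomSampler

# Stand-in for a downloaded split: any indexable sequence of dicts works
# with RandomSampler/BatchSampler (hypothetical toy data, not Common Voice).
records = [{"sentence": f"utterance {i}"} for i in range(100)]

# Local pattern from the README: shuffled batches of 32, keeping the
# final partial batch (drop_last=False).
batch_sampler = BatchSampler(RandomSampler(records), batch_size=32, drop_last=False)
local_loader = DataLoader(records, batch_sampler=batch_sampler)
local_batches = list(local_loader)  # 4 batches: 32 + 32 + 32 + 4 items

# Streaming pattern: a streamed split is consumed as an iterable, so an
# IterableDataset models it; batch_size is passed directly because a
# sampler cannot index into a stream.
class ToyStream(IterableDataset):
    def __iter__(self):
        for i in range(10):
            yield {"sentence": f"utterance {i}"}

stream_loader = DataLoader(ToyStream(), batch_size=4)
stream_batches = list(stream_loader)  # 3 batches: 4 + 4 + 2 items
```

With the default collate function, each batch comes back as a dict mapping `"sentence"` to a list of strings, which is why the local pattern needs no custom `collate_fn` for text fields.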