| html_url (string) | title (string) | comments (string) | body (string) | comment_length (int64) | text (string) | embeddings (list) |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/4124 | Image decoding often fails when transforming Image datasets | @albertvillanova Thanks for letting me know this is intended behavior. The docs are severely lacking on this; if I hadn't posted this here, I would never have found out how I'm actually supposed to modify images in a Dataset object. | ## Describe the bug
When transforming/modifying images in an image dataset using the `map` function the PIL images often fail to decode in time for the image transforms, causing errors.
Using a debugger it is easy to see what the problem is, the Image decode invocation does not take place and the resulting image pa... | 40 | Image decoding often fails when transforming Image datasets
## Describe the bug
When transforming/modifying images in an image dataset using the `map` function the PIL images often fail to decode in time for the image transforms, causing errors.
Using a debugger it is easy to see what the problem is, the Image d... | [
-0.0782001764,
-0.0185160097,
-0.1789414287,
0.1256359518,
0.1442411542,
-0.1011210158,
0.061807964,
0.2104717195,
-0.1569966078,
0.1313689649,
0.1810033619,
0.5930034518,
0.199265331,
-0.1440647393,
-0.1841397285,
-0.1133295894,
0.1441317201,
0.3657739758,
-0.1194041669,
-0.15... |
https://github.com/huggingface/datasets/issues/4124 | Image decoding often fails when transforming Image datasets | @albertvillanova Secondly, if you check the error message, it shows that around 1999 images were successfully created; I'm pretty sure some of them were also flipped during the process. Back to my main contention: sometimes the decoding takes place, other times it fails.
I suppose to run `map` on any dataset all the e... | ## Describe the bug
When transforming/modifying images in an image dataset using the `map` function the PIL images often fail to decode in time for the image transforms, causing errors.
Using a debugger it is easy to see what the problem is, the Image decode invocation does not take place and the resulting image pa... | 71 | Image decoding often fails when transforming Image datasets
## Describe the bug
When transforming/modifying images in an image dataset using the `map` function the PIL images often fail to decode in time for the image transforms, causing errors.
Using a debugger it is easy to see what the problem is, the Image d... | [
-0.0782001764,
-0.0185160097,
-0.1789414287,
0.1256359518,
0.1442411542,
-0.1011210158,
0.061807964,
0.2104717195,
-0.1569966078,
0.1313689649,
0.1810033619,
0.5930034518,
0.199265331,
-0.1440647393,
-0.1841397285,
-0.1133295894,
0.1441317201,
0.3657739758,
-0.1194041669,
-0.15... |
https://github.com/huggingface/datasets/issues/4124 | Image decoding often fails when transforming Image datasets | Hi @RafayAK! I've opened a PR with the fix, which adds a fallback to reattempt casting to PyArrow format with a more robust (but more expensive) procedure if the first attempt fails. Feel free to test it by installing `datasets` from the PR branch with the following command:
```
pip install git+https://github.com/hug... | ## Describe the bug
When transforming/modifying images in an image dataset using the `map` function the PIL images often fail to decode in time for the image transforms, causing errors.
Using a debugger it is easy to see what the problem is, the Image decode invocation does not take place and the resulting image pa... | 53 | Image decoding often fails when transforming Image datasets
## Describe the bug
When transforming/modifying images in an image dataset using the `map` function the PIL images often fail to decode in time for the image transforms, causing errors.
Using a debugger it is easy to see what the problem is, the Image d... | [
-0.0782001764,
-0.0185160097,
-0.1789414287,
0.1256359518,
0.1442411542,
-0.1011210158,
0.061807964,
0.2104717195,
-0.1569966078,
0.1313689649,
0.1810033619,
0.5930034518,
0.199265331,
-0.1440647393,
-0.1841397285,
-0.1133295894,
0.1441317201,
0.3657739758,
-0.1194041669,
-0.15... |
https://github.com/huggingface/datasets/issues/4124 | Image decoding often fails when transforming Image datasets | @mariosasko Thanks a lot for looking into this, now the `map` function at least behaves as one would expect a function to behave.
Looking forward to exploring Hugging Face more and even contributing 😃.
```bash
$ conda list | grep datasets
datasets 2.0.1.dev0 pypi_0 pypi
`... | ## Describe the bug
When transforming/modifying images in an image dataset using the `map` function the PIL images often fail to decode in time for the image transforms, causing errors.
Using a debugger it is easy to see what the problem is, the Image decode invocation does not take place and the resulting image pa... | 245 | Image decoding often fails when transforming Image datasets
## Describe the bug
When transforming/modifying images in an image dataset using the `map` function the PIL images often fail to decode in time for the image transforms, causing errors.
Using a debugger it is easy to see what the problem is, the Image d... | [
-0.0782001764,
-0.0185160097,
-0.1789414287,
0.1256359518,
0.1442411542,
-0.1011210158,
0.061807964,
0.2104717195,
-0.1569966078,
0.1313689649,
0.1810033619,
0.5930034518,
0.199265331,
-0.1440647393,
-0.1841397285,
-0.1133295894,
0.1441317201,
0.3657739758,
-0.1194041669,
-0.15... |
https://github.com/huggingface/datasets/issues/4123 | Building C4 takes forever | Hi @StellaAthena, thanks for reporting.
Please note, that our `datasets` library performs several operations in order to load a dataset, among them:
- it downloads all the required files: for C4 "en", 378.69 GB of JSON GZIPped files
- it parses their content to generate the dataset
- it caches the dataset in an A... | ## Describe the bug
C4-en is a 300 GB dataset. However, when I try to download it through the hub it takes over _six hours_ to generate the train/test split from the downloaded files. This is an absurd amount of time and an unnecessary waste of resources.
## Steps to reproduce the bug
```python
c4 = datasets.load... | 274 | Building C4 takes forever
## Describe the bug
C4-en is a 300 GB dataset. However, when I try to download it through the hub it takes over _six hours_ to generate the train/test split from the downloaded files. This is an absurd amount of time and an unnecessary waste of resources.
## Steps to reproduce the bug
`... | [
-0.4080928266,
0.0847902074,
-0.0823979005,
0.5022711754,
0.2178034484,
0.1781255305,
0.2599093914,
0.3984819949,
-0.0510366745,
0.1110166311,
-0.0843762383,
0.1397249699,
-0.0636631325,
0.3419267535,
0.0822247565,
-0.0137421619,
0.0966513529,
0.1611655653,
-0.0375896208,
0.044... |
https://github.com/huggingface/datasets/issues/4122 | medical_dialog zh has very slow _generate_examples | Hi @nbroad1881, thanks for reporting.
Let me have a look to try to improve its performance. | ## Describe the bug
After downloading the files from Google Drive, `load_dataset("medical_dialog", "zh", data_dir="./")` takes an unreasonable amount of time. Generating the train/test split for 33% of the dataset takes over 4.5 hours.
## Steps to reproduce the bug
The easiest way I've found to download files from... | 16 | medical_dialog zh has very slow _generate_examples
## Describe the bug
After downloading the files from Google Drive, `load_dataset("medical_dialog", "zh", data_dir="./")` takes an unreasonable amount of time. Generating the train/test split for 33% of the dataset takes over 4.5 hours.
## Steps to reproduce the b... | [
-0.3489077985,
0.1591594219,
-0.0121252947,
0.1469065398,
-0.0144645423,
0.1854942292,
0.7514628172,
0.4728639722,
0.0901344344,
0.1866066754,
-0.0348469913,
0.0402097553,
-0.1593004316,
0.2758603394,
0.0356320478,
-0.3231911659,
0.0443783104,
0.0977177694,
0.17513825,
-0.19789... |
https://github.com/huggingface/datasets/issues/4122 | medical_dialog zh has very slow _generate_examples | Thanks @nbroad1881 for reporting! I don't recall it taking so long. I will also have a look at this.
@albertvillanova please let me know if I am doing something unnecessary or time consuming. | ## Describe the bug
After downloading the files from Google Drive, `load_dataset("medical_dialog", "zh", data_dir="./")` takes an unreasonable amount of time. Generating the train/test split for 33% of the dataset takes over 4.5 hours.
## Steps to reproduce the bug
The easiest way I've found to download files from... | 33 | medical_dialog zh has very slow _generate_examples
## Describe the bug
After downloading the files from Google Drive, `load_dataset("medical_dialog", "zh", data_dir="./")` takes an unreasonable amount of time. Generating the train/test split for 33% of the dataset takes over 4.5 hours.
## Steps to reproduce the b... | [
-0.3489077985,
0.1591594219,
-0.0121252947,
0.1469065398,
-0.0144645423,
0.1854942292,
0.7514628172,
0.4728639722,
0.0901344344,
0.1866066754,
-0.0348469913,
0.0402097553,
-0.1593004316,
0.2758603394,
0.0356320478,
-0.3231911659,
0.0443783104,
0.0977177694,
0.17513825,
-0.19789... |
https://github.com/huggingface/datasets/issues/4122 | medical_dialog zh has very slow _generate_examples | Hi @nbroad1881 and @vrindaprabhu,
As a workaround for the performance of the parsing of the raw data files (this could be addressed in a subsequent PR), I have found that there are also processed data files, that do not require parsing. I have added these as new configurations `processed.en` and `processed.zh`:
```... | ## Describe the bug
After downloading the files from Google Drive, `load_dataset("medical_dialog", "zh", data_dir="./")` takes an unreasonable amount of time. Generating the train/test split for 33% of the dataset takes over 4.5 hours.
## Steps to reproduce the bug
The easiest way I've found to download files from... | 57 | medical_dialog zh has very slow _generate_examples
## Describe the bug
After downloading the files from Google Drive, `load_dataset("medical_dialog", "zh", data_dir="./")` takes an unreasonable amount of time. Generating the train/test split for 33% of the dataset takes over 4.5 hours.
## Steps to reproduce the b... | [
-0.3489077985,
0.1591594219,
-0.0121252947,
0.1469065398,
-0.0144645423,
0.1854942292,
0.7514628172,
0.4728639722,
0.0901344344,
0.1866066754,
-0.0348469913,
0.0402097553,
-0.1593004316,
0.2758603394,
0.0356320478,
-0.3231911659,
0.0443783104,
0.0977177694,
0.17513825,
-0.19789... |
https://github.com/huggingface/datasets/issues/4117 | AttributeError: module 'huggingface_hub' has no attribute 'hf_api' | Hi @arymbe, thanks for reporting.
Unfortunately, I'm not able to reproduce your problem.
Could you please write the complete stack trace? That way we will be able to see which package originates the exception. | ## Describe the bug
Could you help me please. I got this following error.
AttributeError: module 'huggingface_hub' has no attribute 'hf_api'
## Steps to reproduce the bug
when I imported the datasets
# Sample code to reproduce the bug
from datasets import list_datasets, load_dataset, list_metrics, load_metr... | 34 | AttributeError: module 'huggingface_hub' has no attribute 'hf_api'
## Describe the bug
Could you help me please. I got this following error.
AttributeError: module 'huggingface_hub' has no attribute 'hf_api'
## Steps to reproduce the bug
when I imported the datasets
# Sample code to reproduce the bug
from... | [
-0.1681801081,
-0.4443613589,
-0.0345687047,
0.3718606532,
0.2857320309,
0.1390290558,
0.1513350904,
0.3024879396,
0.5123541951,
0.1311884671,
-0.1421200633,
0.3014177978,
-0.0827258974,
0.1413985789,
0.0469236784,
-0.2954945266,
0.1305672079,
0.1796792895,
-0.0378473476,
-0.21... |
https://github.com/huggingface/datasets/issues/4117 | AttributeError: module 'huggingface_hub' has no attribute 'hf_api' | Hello, thank you for your fast reply. This is the complete error that I got
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
---------------------------------------------------------------------------
At... | ## Describe the bug
Could you help me please. I got this following error.
AttributeError: module 'huggingface_hub' has no attribute 'hf_api'
## Steps to reproduce the bug
when I imported the datasets
# Sample code to reproduce the bug
from datasets import list_datasets, load_dataset, list_metrics, load_metr... | 169 | AttributeError: module 'huggingface_hub' has no attribute 'hf_api'
## Describe the bug
Could you help me please. I got this following error.
AttributeError: module 'huggingface_hub' has no attribute 'hf_api'
## Steps to reproduce the bug
when I imported the datasets
# Sample code to reproduce the bug
from... | [
-0.2521853447,
-0.4301819503,
-0.030492207,
0.4592759311,
0.2463348061,
0.1830718368,
0.1426952034,
0.2809119225,
0.4128460586,
0.1086036935,
-0.2213772088,
0.3965074122,
-0.0413650498,
-0.001482135,
-0.0518701151,
-0.288087517,
0.1022913903,
0.1803317666,
-0.0500196666,
-0.240... |
https://github.com/huggingface/datasets/issues/4117 | AttributeError: module 'huggingface_hub' has no attribute 'hf_api' | This is weird... It has been a long time since the package `huggingface_hub` had a submodule called `hf_api`.
Maybe you have a problem with your installed `huggingface_hub`...
Could you please try to update it?
```shell
pip install -U huggingface_hub
``` | ## Describe the bug
Could you help me please. I got this following error.
AttributeError: module 'huggingface_hub' has no attribute 'hf_api'
## Steps to reproduce the bug
when I imported the datasets
# Sample code to reproduce the bug
from datasets import list_datasets, load_dataset, list_metrics, load_metr... | 38 | AttributeError: module 'huggingface_hub' has no attribute 'hf_api'
## Describe the bug
Could you help me please. I got this following error.
AttributeError: module 'huggingface_hub' has no attribute 'hf_api'
## Steps to reproduce the bug
when I imported the datasets
# Sample code to reproduce the bug
from... | [
-0.2230071276,
-0.5056049824,
-0.0650632009,
0.3610424995,
0.2549870014,
0.114971526,
0.118224062,
0.326862216,
0.4500231743,
0.2066401541,
-0.2411005646,
0.3153876364,
-0.0733025968,
0.1169203669,
0.0166848246,
-0.2889391184,
0.0901743099,
0.1637155861,
-0.015950283,
-0.222828... |
https://github.com/huggingface/datasets/issues/4117 | AttributeError: module 'huggingface_hub' has no attribute 'hf_api' | Yep, I've updated several times. Then I've tried numerous combinations of datasets and huggingface_hub versions. However, I think your point is right that there is a problem with my huggingface_hub installation. I'll try another way to find the solution and will update here once I have it. Thank you :) | ## Describe the bug
Could you help me please. I got this following error.
AttributeError: module 'huggingface_hub' has no attribute 'hf_api'
## Steps to reproduce the bug
when I imported the datasets
# Sample code to reproduce the bug
from datasets import list_datasets, load_dataset, list_metrics, load_metr... | 51 | AttributeError: module 'huggingface_hub' has no attribute 'hf_api'
## Describe the bug
Could you help me please. I got this following error.
AttributeError: module 'huggingface_hub' has no attribute 'hf_api'
## Steps to reproduce the bug
when I imported the datasets
# Sample code to reproduce the bug
from... | [
-0.21762155,
-0.4780932367,
-0.0366823263,
0.4264870584,
0.2414596826,
0.071015045,
0.1331358254,
0.3361755311,
0.394202143,
0.176005289,
-0.2571629882,
0.3224829733,
-0.0744063407,
0.1754895449,
-0.0037262493,
-0.2850347459,
0.1679325253,
0.1340253055,
0.0111516062,
-0.1307843... |
https://github.com/huggingface/datasets/issues/4117 | AttributeError: module 'huggingface_hub' has no attribute 'hf_api' | I'm sorry I can't reproduce your problem.
Maybe you could try to create a new Python virtual environment and install all dependencies there from scratch. You can use either:
- Python venv: https://docs.python.org/3/library/venv.html
- or conda venv (if you are using conda): https://docs.conda.io/projects/conda/en/... | ## Describe the bug
Could you help me please. I got this following error.
AttributeError: module 'huggingface_hub' has no attribute 'hf_api'
## Steps to reproduce the bug
when I imported the datasets
# Sample code to reproduce the bug
from datasets import list_datasets, load_dataset, list_metrics, load_metr... | 43 | AttributeError: module 'huggingface_hub' has no attribute 'hf_api'
## Describe the bug
Could you help me please. I got this following error.
AttributeError: module 'huggingface_hub' has no attribute 'hf_api'
## Steps to reproduce the bug
when I imported the datasets
# Sample code to reproduce the bug
from... | [
-0.2146630883,
-0.3654770255,
-0.0761521682,
0.2864287198,
0.3152766526,
0.0644777939,
0.1203804985,
0.375890255,
0.3959611356,
0.0997086391,
-0.3167545497,
0.3548259735,
-0.0315835662,
0.1763648093,
0.0351333432,
-0.2626277208,
0.0786467418,
0.247195974,
-0.1226428002,
-0.2426... |
https://github.com/huggingface/datasets/issues/4115 | ImageFolder add option to ignore some folders like '.ipynb_checkpoints' | Maybe it would be nice to ignore private dirs like this one (ones starting with `.`) by default.
CC @mariosasko | **Is your feature request related to a problem? Please describe.**
I sometimes like to peek at the dataset images from jupyterlab. thus '.ipynb_checkpoints' folder appears where my dataset is and (just realized) leads to accidental duplicate image additions. I think this is an easy enough thing to miss especially if t... | 20 | ImageFolder add option to ignore some folders like '.ipynb_checkpoints'
**Is your feature request related to a problem? Please describe.**
I sometimes like to peek at the dataset images from jupyterlab. thus '.ipynb_checkpoints' folder appears where my dataset is and (just realized) leads to accidental duplicate ima... | [
0.0150302704,
0.3754785955,
-0.0406380817,
0.2024213523,
0.007796328,
-0.0064331195,
0.3350971341,
0.286812067,
0.1231297329,
0.3202601671,
-0.0017326373,
0.2950354218,
-0.2400576025,
0.2837023735,
-0.207695961,
0.1347949356,
-0.1733373255,
0.2911928296,
0.1295756698,
0.0272678... |
https://github.com/huggingface/datasets/issues/4115 | ImageFolder add option to ignore some folders like '.ipynb_checkpoints' | Maybe we can add a `ignore_hidden_files` flag to the builder configs of our packaged loaders (to be consistent across all of them), wdyt @lhoestq @albertvillanova? | **Is your feature request related to a problem? Please describe.**
I sometimes like to peek at the dataset images from jupyterlab. thus '.ipynb_checkpoints' folder appears where my dataset is and (just realized) leads to accidental duplicate image additions. I think this is an easy enough thing to miss especially if t... | 25 | ImageFolder add option to ignore some folders like '.ipynb_checkpoints'
**Is your feature request related to a problem? Please describe.**
I sometimes like to peek at the dataset images from jupyterlab. thus '.ipynb_checkpoints' folder appears where my dataset is and (just realized) leads to accidental duplicate ima... | [
-0.0425490439,
0.3998558521,
-0.0386111178,
0.2750570178,
-0.0083971899,
-0.0364803039,
0.3597345948,
0.2762027085,
0.2106353492,
0.3438732028,
0.0210268758,
0.3226283193,
-0.2770065367,
0.30391711,
-0.1992335618,
0.1747257262,
-0.1685940772,
0.3513826728,
0.1304190606,
0.10867... |
https://github.com/huggingface/datasets/issues/4115 | ImageFolder add option to ignore some folders like '.ipynb_checkpoints' | I think they should always ignore them actually ! Not sure if adding a flag would be helpful | **Is your feature request related to a problem? Please describe.**
I sometimes like to peek at the dataset images from jupyterlab. thus '.ipynb_checkpoints' folder appears where my dataset is and (just realized) leads to accidental duplicate image additions. I think this is an easy enough thing to miss especially if t... | 18 | ImageFolder add option to ignore some folders like '.ipynb_checkpoints'
**Is your feature request related to a problem? Please describe.**
I sometimes like to peek at the dataset images from jupyterlab. thus '.ipynb_checkpoints' folder appears where my dataset is and (just realized) leads to accidental duplicate ima... | [
0.0336523503,
0.3373185098,
-0.0412329249,
0.1949006319,
0.0174028464,
-0.0081473133,
0.3061302304,
0.2995588481,
0.1251519471,
0.2879789174,
0.0459555089,
0.3039180636,
-0.2466278821,
0.2990991771,
-0.2030142397,
0.1351258755,
-0.1642226875,
0.2829219103,
0.1492476314,
0.03627... |
https://github.com/huggingface/datasets/issues/4115 | ImageFolder add option to ignore some folders like '.ipynb_checkpoints' | @lhoestq But what if the user explicitly requests those files via regex?
`glob.glob` ignores hidden files (files starting with ".") by default unless they are explicitly requested, but fsspec's `glob` doesn't follow this behavior, which is probably a bug, so maybe we can raise an issue or open a PR in their repo? | **Is your feature request related to a problem? Please describe.**
I sometimes like to peek at the dataset images from jupyterlab. thus '.ipynb_checkpoints' folder appears where my dataset is and (just realized) leads to accidental duplicate image additions. I think this is an easy enough thing to miss especially if t... | 53 | ImageFolder add option to ignore some folders like '.ipynb_checkpoints'
**Is your feature request related to a problem? Please describe.**
I sometimes like to peek at the dataset images from jupyterlab. thus '.ipynb_checkpoints' folder appears where my dataset is and (just realized) leads to accidental duplicate ima... | [
-0.0018105961,
0.2933646739,
-0.0233427528,
0.2066629827,
-0.0513592623,
-0.156910792,
0.2393537164,
0.191352874,
0.1357236803,
0.3719963133,
0.0361122265,
0.1524454653,
-0.3634715378,
0.4241342247,
-0.2191289514,
0.2104230374,
-0.1450534463,
0.2955017388,
0.0990798771,
0.04814... |
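The hidden-file behavior discussed in the comment above can be checked directly with the standard library. This is a minimal sketch (the file and folder names are made up for illustration) showing that `glob.glob` skips dot-entries with a plain wildcard but matches them when explicitly requested:

```python
import glob
import os
import tempfile

# Build a throwaway directory with one regular file and one hidden folder,
# mimicking a dataset folder polluted by '.ipynb_checkpoints'.
with tempfile.TemporaryDirectory() as root:
    open(os.path.join(root, "cat.png"), "w").close()
    hidden_dir = os.path.join(root, ".ipynb_checkpoints")
    os.mkdir(hidden_dir)
    open(os.path.join(hidden_dir, "cat-checkpoint.png"), "w").close()

    # A plain wildcard ignores names starting with "."...
    visible = glob.glob(os.path.join(root, "*"))
    # ...but an explicit "." pattern matches them.
    hidden = glob.glob(os.path.join(root, ".*"))

    print([os.path.basename(p) for p in visible])  # ['cat.png']
    print([os.path.basename(p) for p in hidden])   # ['.ipynb_checkpoints']
```

This is the `glob.glob` default the comment contrasts with fsspec's `glob`, which (at the time of the discussion) did not follow it.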
https://github.com/huggingface/datasets/issues/4115 | ImageFolder add option to ignore some folders like '.ipynb_checkpoints' | > @lhoestq But what if the user explicitly requests those files via regex?
Usually hidden files are meant to be ignored. If they are data files, they must be placed outside a hidden directory in the first place right ? I think it's more sensible to explain this than adding a flag.
> glob.glob ignores hidden files... | **Is your feature request related to a problem? Please describe.**
I sometimes like to peek at the dataset images from jupyterlab. thus '.ipynb_checkpoints' folder appears where my dataset is and (just realized) leads to accidental duplicate image additions. I think this is an easy enough thing to miss especially if t... | 145 | ImageFolder add option to ignore some folders like '.ipynb_checkpoints'
**Is your feature request related to a problem? Please describe.**
I sometimes like to peek at the dataset images from jupyterlab. thus '.ipynb_checkpoints' folder appears where my dataset is and (just realized) leads to accidental duplicate ima... | [
-0.0141534051,
0.3498776257,
-0.0326180309,
0.1782968044,
-0.0223031379,
-0.1694779247,
0.2864607573,
0.1728476435,
0.1215644404,
0.3209646344,
0.1033284068,
0.1665270627,
-0.3905254006,
0.4313723743,
-0.3181626201,
0.2184353024,
-0.151723206,
0.2545992136,
0.0956850201,
0.0516... |
https://github.com/huggingface/datasets/issues/4114 | Allow downloading just some columns of a dataset | In the general case you can’t always reduce the quantity of data to download, since you can’t parse CSV or JSON data without downloading the whole files right ? ^^ However we could explore this case-by-case I guess | **Is your feature request related to a problem? Please describe.**
Some people are interested in doing label analysis of a CV dataset without downloading all the images. Downloading the whole dataset does not always makes sense for this kind of use case
**Describe the solution you'd like**
Be able to just download... | 38 | Allow downloading just some columns of a dataset
**Is your feature request related to a problem? Please describe.**
Some people are interested in doing label analysis of a CV dataset without downloading all the images. Downloading the whole dataset does not always makes sense for this kind of use case
**Describe ... | [
-0.199664548,
-0.0586735867,
-0.1487237811,
0.0414170772,
0.2150535285,
0.3403987885,
-0.0032916011,
0.4748856723,
0.3685281277,
0.4119487405,
-0.1138406619,
0.0341660902,
-0.1121429354,
0.5283209682,
-0.0172497816,
-0.2846941948,
-0.1776853353,
0.4532847404,
-0.072818473,
-0.0... |
https://github.com/huggingface/datasets/issues/4114 | Allow downloading just some columns of a dataset | Actually for csv pandas has `usecols` which allows loading a subset of columns in a more efficient way afaik, but yes, you're right this might be more complex than I thought. | **Is your feature request related to a problem? Please describe.**
Some people are interested in doing label analysis of a CV dataset without downloading all the images. Downloading the whole dataset does not always makes sense for this kind of use case
**Describe the solution you'd like**
Be able to just download... | 31 | Allow downloading just some columns of a dataset
**Is your feature request related to a problem? Please describe.**
Some people are interested in doing label analysis of a CV dataset without downloading all the images. Downloading the whole dataset does not always makes sense for this kind of use case
**Describe ... | [
-0.1404873878,
-0.096265927,
-0.1297087371,
0.0682217479,
0.1743096411,
0.3797702789,
0.1323051602,
0.4191699922,
0.3833302557,
0.3452168107,
-0.1784936637,
0.0700726807,
-0.109138526,
0.4753883183,
-0.0048137791,
-0.2822029591,
-0.1880962402,
0.4410910904,
-0.084199816,
-0.005... |
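The point in the comment above is that column selection (like pandas' `usecols`) saves parsing work and memory, not download size: the whole file still has to be read. A stdlib-only sketch of reading just one column from CSV text (the data is hypothetical):

```python
import csv
import io

# Hypothetical CSV content standing in for a downloaded data file.
raw = "file,label,width\nimg1.png,cat,640\nimg2.png,dog,480\n"

# Even though only the label column is wanted, every row must still be
# parsed in full -- the other columns are simply discarded afterwards.
labels = [row["label"] for row in csv.DictReader(io.StringIO(raw))]
print(labels)  # ['cat', 'dog']
```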
https://github.com/huggingface/datasets/issues/4112 | ImageFolder with Grayscale images dataset | Hi! Replacing:
```python
transformed_dataset = dataset.with_transform(transforms)
transformed_dataset.set_format(type="torch", device="cuda")
```
with:
```python
def transform_func(examples):
examples["image"] = [transforms(img).to("cuda") for img in examples["image"]]
return examples
transformed_... | Hi, I'm facing a problem with a grayscale images dataset I have uploaded [here](https://huggingface.co/datasets/ChainYo/rvl-cdip) (RVL-CDIP)
I'm getting an error while I want to use images for training a model with PyTorch DataLoader. Here is the full traceback:
```bash
AttributeError: Caught AttributeError in D... | 89 | ImageFolder with Grayscale images dataset
Hi, I'm facing a problem with a grayscale images dataset I have uploaded [here](https://huggingface.co/datasets/ChainYo/rvl-cdip) (RVL-CDIP)
I'm getting an error while I want to use images for training a model with PyTorch DataLoader. Here is the full traceback:
```bash... | [
-0.3947274983,
0.2771763206,
0.0049247658,
0.3311274052,
0.3124427795,
-0.0720552802,
0.6099219322,
0.171725899,
0.0952751935,
0.1119719371,
0.029650867,
0.3789330423,
-0.4122756422,
-0.0405048914,
-0.3064041734,
-0.2962744832,
-0.2522681355,
0.236525178,
-0.2202251405,
0.10406... |
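The `transform_func` pattern in the comment above follows the batched-transform contract: the function receives a dict mapping column names to lists and returns the same shape. A minimal sketch of that contract, with plain Python values standing in for PIL images and a hypothetical `flip` standing in for a real transform:

```python
def flip(img):
    # Stand-in for an image transform; real code would operate on a PIL image.
    return img[::-1]

def transform_func(examples):
    # `examples` is one batch: a dict of column name -> list of values.
    examples["image"] = [flip(img) for img in examples["image"]]
    return examples

batch = {"image": ["abc", "xyz"], "label": [0, 1]}
out = transform_func(batch)
print(out["image"])  # ['cba', 'zyx']
print(out["label"])  # [0, 1] -- other columns pass through untouched
```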
https://github.com/huggingface/datasets/issues/4112 | ImageFolder with Grayscale images dataset | Ok thanks a lot for the code snippet!
I love the way `datasets` is easy to use, but it took a really long time to pre-process all the images (400,000 in my case) before training anything. `ImageFolder` from PyTorch is faster in my case but forces me to have the images on my local machine.
I don't know how to speed up t... | Hi, I'm facing a problem with a grayscale images dataset I have uploaded [here](https://huggingface.co/datasets/ChainYo/rvl-cdip) (RVL-CDIP)
I'm getting an error while I want to use images for training a model with PyTorch DataLoader. Here is the full traceback:
```bash
AttributeError: Caught AttributeError in D... | 68 | ImageFolder with Grayscale images dataset
Hi, I'm facing a problem with a grayscale images dataset I have uploaded [here](https://huggingface.co/datasets/ChainYo/rvl-cdip) (RVL-CDIP)
I'm getting an error while I want to use images for training a model with PyTorch DataLoader. Here is the full traceback:
```bash... | [
-0.3947274983,
0.2771763206,
0.0049247658,
0.3311274052,
0.3124427795,
-0.0720552802,
0.6099219322,
0.171725899,
0.0952751935,
0.1119719371,
0.029650867,
0.3789330423,
-0.4122756422,
-0.0405048914,
-0.3064041734,
-0.2962744832,
-0.2522681355,
0.236525178,
-0.2202251405,
0.10406... |
https://github.com/huggingface/datasets/issues/4112 | ImageFolder with Grayscale images dataset | You can pass `ignore_verifications=True` in `load_dataset` to skip checksum verification, which takes a lot of time if the number of files is large. We will consider making this the default behavior. | Hi, I'm facing a problem with a grayscale images dataset I have uploaded [here](https://huggingface.co/datasets/ChainYo/rvl-cdip) (RVL-CDIP)
I'm getting an error while I want to use images for training a model with PyTorch DataLoader. Here is the full traceback:
```bash
AttributeError: Caught AttributeError in D... | 31 | ImageFolder with Grayscale images dataset
Hi, I'm facing a problem with a grayscale images dataset I have uploaded [here](https://huggingface.co/datasets/ChainYo/rvl-cdip) (RVL-CDIP)
I'm getting an error while I want to use images for training a model with PyTorch DataLoader. Here is the full traceback:
```bash... | [
-0.3947274983,
0.2771763206,
0.0049247658,
0.3311274052,
0.3124427795,
-0.0720552802,
0.6099219322,
0.171725899,
0.0952751935,
0.1119719371,
0.029650867,
0.3789330423,
-0.4122756422,
-0.0405048914,
-0.3064041734,
-0.2962744832,
-0.2522681355,
0.236525178,
-0.2202251405,
0.10406... |
https://github.com/huggingface/datasets/issues/4107 | Unable to view the dataset and loading the same dataset throws the error - ArrowInvalid: Exceeded maximum rows | It's not related to the dataset viewer in itself. I can replicate the error with:
```
>>> import datasets as ds
>>> d = ds.load_dataset('Pavithree/explainLikeImFive')
Using custom data configuration Pavithree--explainLikeImFive-b68b6d8112cd8a51
Downloading and preparing dataset json/Pavithree--explainLikeImFiv... | ## Dataset viewer issue - -ArrowInvalid: Exceeded maximum rows
**Link:** *https://huggingface.co/datasets/Pavithree/explainLikeImFive*
*This is the subset of original eli5 dataset https://huggingface.co/datasets/vblagoje/lfqa. I just filtered the data samples which belongs to one particular subreddit thread. How... | 238 | Unable to view the dataset and loading the same dataset throws the error - ArrowInvalid: Exceeded maximum rows
## Dataset viewer issue - -ArrowInvalid: Exceeded maximum rows
**Link:** *https://huggingface.co/datasets/Pavithree/explainLikeImFive*
*This is the subset of original eli5 dataset https://huggingface.... | [
-0.3489049077,
-0.2126835436,
-0.0853866413,
0.6558037996,
0.0363948494,
0.1970855147,
-0.0027793292,
0.158196941,
-0.0848671123,
0.0457959361,
-0.2831988335,
0.2404507697,
-0.0769439712,
0.2767483592,
0.0081944736,
-0.0411505923,
-0.0918209106,
0.0279832762,
-0.3015930653,
0.0... |
https://github.com/huggingface/datasets/issues/4107 | Unable to view the dataset and loading the same dataset throws the error - ArrowInvalid: Exceeded maximum rows | It seems that train.json is not a valid JSON Lines file: it has several JSON objects in the first line (the 915th character in the first line starts a new object, and there's no "\n")
You need to have one JSON object per line | ## Dataset viewer issue - -ArrowInvalid: Exceeded maximum rows
**Link:** *https://huggingface.co/datasets/Pavithree/explainLikeImFive*
*This is the subset of original eli5 dataset https://huggingface.co/datasets/vblagoje/lfqa. I just filtered the data samples which belongs to one particular subreddit thread. How... | 44 | Unable to view the dataset and loading the same dataset throws the error - ArrowInvalid: Exceeded maximum rows
## Dataset viewer issue - -ArrowInvalid: Exceeded maximum rows
**Link:** *https://huggingface.co/datasets/Pavithree/explainLikeImFive*
*This is the subset of original eli5 dataset https://huggingface.... | [
-0.3020937145, -0.1118903235, … (embedding vector truncated)
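As an illustrative aside (not from the original thread): the repair the comment above describes — one JSON object per line — can be done with only the standard library, because `json.JSONDecoder.raw_decode` reports where each concatenated object ends. Names and the sample data below are made up.

```python
import json

def split_concatenated_json(line):
    """Split a string that may contain several concatenated JSON objects
    into a list of parsed objects (one per future JSON Lines row)."""
    decoder = json.JSONDecoder()
    objects, idx = [], 0
    line = line.strip()
    while idx < len(line):
        obj, end = decoder.raw_decode(line, idx)
        objects.append(obj)
        # Skip any whitespace between two glued-together objects
        while end < len(line) and line[end].isspace():
            end += 1
        idx = end
    return objects

# Example: two objects glued together on one line, as in the broken train.json
broken = '{"q": "why?", "a": "because"}{"q": "how?", "a": "like so"}'
rows = split_concatenated_json(broken)
fixed = "\n".join(json.dumps(r) for r in rows)  # valid JSON Lines
```

Writing `fixed` back to `train.json` gives a file the `json` loader can parse line by line.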
https://github.com/huggingface/datasets/issues/4107 | Unable to view the dataset and loading the same dataset throws the error - ArrowInvalid: Exceeded maximum rows | I'm closing this issue.
@Pavithree, please, feel free to re-open it if fixing the JSON file does not solve it. | ## Dataset viewer issue - -ArrowInvalid: Exceeded maximum rows
**Link:** *https://huggingface.co/datasets/Pavithree/explainLikeImFive*
*This is the subset of original eli5 dataset https://huggingface.co/datasets/vblagoje/lfqa. I just filtered the data samples which belongs to one particular subreddit thread. How... | 20 | Unable to view the dataset and loading the same dataset throws the error - ArrowInvalid: Exceeded maximum rows
## Dataset viewer issue - -ArrowInvalid: Exceeded maximum rows
**Link:** *https://huggingface.co/datasets/Pavithree/explainLikeImFive*
*This is the subset of original eli5 dataset https://huggingface.... | [
-0.3932261467, -0.171912685, … (embedding vector truncated)
https://github.com/huggingface/datasets/issues/4105 | push to hub fails with huggingface-hub 0.5.0 | Hi ! Indeed there was a breaking change in `huggingface_hub` 0.5.0 in `HfApi.create_repo`, which is called here in `datasets` by passing the org name in both the `repo_id` and the `organization` arguments:
https://github.com/huggingface/datasets/blob/2230f7f7d7fbaf102cff356f5a8f3bd1561bea43/src/datasets/arrow_datase... | ## Describe the bug
`ds.push_to_hub` is failing when updating a dataset in the form "org_id/repo_id"
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("rubrix/news_test")
ds.push_to_hub("<your-user>/news_test", token="<your-token>")
```
## Expected results
The data... | 52 | push to hub fails with huggingface-hub 0.5.0
## Describe the bug
`ds.push_to_hub` is failing when updating a dataset in the form "org_id/repo_id"
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("rubrix/news_test")
ds.push_to_hub("<your-user>/news_test", token="<your... | [
-0.1861857921, -0.4311237633, … (embedding vector truncated)
https://github.com/huggingface/datasets/issues/4105 | push to hub fails with huggingface-hub 0.5.0 | I'll be sending a fix for this later today on the `huggingface_hub` side.
The error would be converted to a `FutureWarning` if `datasets` uses kwargs instead of positional, for example here:
https://github.com/huggingface/datasets/blob/2230f7f7d7fbaf102cff356f5a8f3bd1561bea43/src/datasets/arrow_dataset.py#L3363-... | ## Describe the bug
`ds.push_to_hub` is failing when updating a dataset in the form "org_id/repo_id"
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("rubrix/news_test")
ds.push_to_hub("<your-user>/news_test", token="<your-token>")
```
## Expected results
The data... | 179 | push to hub fails with huggingface-hub 0.5.0
## Describe the bug
`ds.push_to_hub` is failing when updating a dataset in the form "org_id/repo_id"
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("rubrix/news_test")
ds.push_to_hub("<your-user>/news_test", token="<your... | [
-0.1779681742, -0.3613972366, … (embedding vector truncated)
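The design point in the comment above — keyword arguments survive a signature change, positional arguments break — can be shown with a standalone toy sketch. The function below is a hypothetical stand-in, not the real `huggingface_hub` API.

```python
import warnings

def create_repo(repo_id=None, *, organization=None, name=None):
    """Toy stand-in for a new-style API: the old `name`/`organization`
    arguments still work but emit a FutureWarning."""
    if name is not None:
        warnings.warn(
            "`name` is deprecated, pass `repo_id='org/name'` instead",
            FutureWarning,
        )
        repo_id = f"{organization}/{name}" if organization else name
    return repo_id

# Keyword-only calls keep working across the signature change:
assert create_repo(repo_id="rubrix/news_test") == "rubrix/news_test"
```

Callers using the deprecated keywords get a warning instead of a hard error, which is exactly the migration path discussed above.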
https://github.com/huggingface/datasets/issues/4105 | push to hub fails with huggingface-hub 0.5.0 | We can definitely change `push_to_hub` to use `repo_id` in `datasets` and require `huggingface_hub>=0.5.0`.
Let me open a PR :) | ## Describe the bug
`ds.push_to_hub` is failing when updating a dataset in the form "org_id/repo_id"
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("rubrix/news_test")
ds.push_to_hub("<your-user>/news_test", token="<your-token>")
```
## Expected results
The data... | 19 | push to hub fails with huggingface-hub 0.5.0
## Describe the bug
`ds.push_to_hub` is failing when updating a dataset in the form "org_id/repo_id"
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("rubrix/news_test")
ds.push_to_hub("<your-user>/news_test", token="<your... | [
-0.2125267088, -0.4315749109, … (embedding vector truncated)
https://github.com/huggingface/datasets/issues/4104 | Add time series data - stock market | @INF800 happy to add this dataset! I will try to set up a PR by the end of the day... could you kindly point me to the dataset? Also, note we have a bunch of time series datasets checked in e.g. `electricity_load_diagrams` or `monash_tsf`, and ideally this dataset could also be in a similar format. | ## Adding a Time Series Dataset
- **Name:** 2min ticker data for stock market
- **Description:** 8 stocks' data collected for 1month post ukraine-russia war. 4 NSE stocks and 4 NASDAQ stocks. Along with technical indicators (additional features) as shown in below image
- **Data:** Collected by myself from investing... | 55 | Add time series data - stock market
## Adding a Time Series Dataset
- **Name:** 2min ticker data for stock market
- **Description:** 8 stocks' data collected for 1month post ukraine-russia war. 4 NSE stocks and 4 NASDAQ stocks. Along with technical indicators (additional features) as shown in below image
- **Data... | [
-0.4105685353, -0.1561338454, … (embedding vector truncated)
https://github.com/huggingface/datasets/issues/4104 | Add time series data - stock market | Thank you. This is how the raw data looks before cleaning, for the individual stocks:
1. https://github.com/INF800/marktech/tree/raw-data/f/data/raw
2. https://github.com/INF800/marktech/tree/raw-data/t/data/raw
3. https://github.com/INF800/marktech/tree/raw-data/rdfn/data/raw
4. https://github.com/INF800/marktech/... | ## Adding a Time Series Dataset
- **Name:** 2min ticker data for stock market
- **Description:** 8 stocks' data collected for 1month post ukraine-russia war. 4 NSE stocks and 4 NASDAQ stocks. Along with technical indicators (additional features) as shown in below image
- **Data:** Collected by myself from investing... | 170 | Add time series data - stock market
## Adding a Time Series Dataset
- **Name:** 2min ticker data for stock market
- **Description:** 8 stocks' data collected for 1month post ukraine-russia war. 4 NSE stocks and 4 NASDAQ stocks. Along with technical indicators (additional features) as shown in below image
- **Data... | [
-0.3687590063, 0.0491522364, … (embedding vector truncated)
https://github.com/huggingface/datasets/issues/4104 | Add time series data - stock market | thanks @INF800 I'll have a look. I believe it should be possible to incorporate this into the time-series format. | ## Adding a Time Series Dataset
- **Name:** 2min ticker data for stock market
- **Description:** 8 stocks' data collected for 1month post ukraine-russia war. 4 NSE stocks and 4 NASDAQ stocks. Along with technical indicators (additional features) as shown in below image
- **Data:** Collected by myself from investing... | 19 | Add time series data - stock market
## Adding a Time Series Dataset
- **Name:** 2min ticker data for stock market
- **Description:** 8 stocks' data collected for 1month post ukraine-russia war. 4 NSE stocks and 4 NASDAQ stocks. Along with technical indicators (additional features) as shown in below image
- **Data... | [
-0.3537880778, -0.1289308071, … (embedding vector truncated)
https://github.com/huggingface/datasets/issues/4104 | Add time series data - stock market | @INF800 yes I am aware of the review repository and paper which is more or less a collection of abstracts etc. I am working on a unified library of implementations of these papers together with datasets to be then able to compare/contrast and build upon the research etc. but I am not ready to share them publicly just y... | ## Adding a Time Series Dataset
- **Name:** 2min ticker data for stock market
- **Description:** 8 stocks' data collected for 1month post ukraine-russia war. 4 NSE stocks and 4 NASDAQ stocks. Along with technical indicators (additional features) as shown in below image
- **Data:** Collected by myself from investing... | 138 | Add time series data - stock market
## Adding a Time Series Dataset
- **Name:** 2min ticker data for stock market
- **Description:** 8 stocks' data collected for 1month post ukraine-russia war. 4 NSE stocks and 4 NASDAQ stocks. Along with technical indicators (additional features) as shown in below image
- **Data... | [
-0.2683302462, -0.1127558574, … (embedding vector truncated)
https://github.com/huggingface/datasets/issues/4104 | Add time series data - stock market | > @INF800 yes I am aware of the review repository and paper which is more or less a collection of abstracts etc. I am working on a unified library of implementations of these papers together with datasets to be then able to compare/contrast and build upon the research etc. but I am not ready to share them publicly just... | ## Adding a Time Series Dataset
- **Name:** 2min ticker data for stock market
- **Description:** 8 stocks' data collected for 1month post ukraine-russia war. 4 NSE stocks and 4 NASDAQ stocks. Along with technical indicators (additional features) as shown in below image
- **Data:** Collected by myself from investing... | 169 | Add time series data - stock market
## Adding a Time Series Dataset
- **Name:** 2min ticker data for stock market
- **Description:** 8 stocks' data collected for 1month post ukraine-russia war. 4 NSE stocks and 4 NASDAQ stocks. Along with technical indicators (additional features) as shown in below image
- **Data... | [
-0.2783481181, -0.1050131246, … (embedding vector truncated)
https://github.com/huggingface/datasets/issues/4101 | How can I download only the train and test split for full numbers using load_dataset()? | Hi! Can you please specify the full name of the dataset? IIRC `full_numbers` is one of the configs of the `svhn` dataset, and its generation is slow due to data being stored in binary Matlab files. Even if you specify a specific split, `datasets` downloads all of them, but we plan to fix that soon and only download the... | How can I download only the train and test split for full numbers using load_dataset()?
I do not need the extra split and it will take 40 mins just to download in Colab. I have very short time in hand. Please help. | 121 | How can I download only the train and test split for full numbers using load_dataset()?
How can I download only the train and test split for full numbers using load_dataset()?
I do not need the extra split and it will take 40 mins just to download in Colab. I have very short time in hand. Please help.
Hi! Can ... | [
-0.4222614765, -0.2911455035, … (embedding vector truncated)
https://github.com/huggingface/datasets/issues/4099 | UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128) | Hi @andreybond, thanks for reporting.
Unfortunately, I'm not able to reproduce your issue:
```python
In [4]: from datasets import load_dataset
...: datasets = load_dataset("nielsr/XFUN", "xfun.ja")
In [5]: datasets
Out[5]:
DatasetDict({
train: Dataset({
features: ['id', 'input_ids',... | ## Describe the bug
Error "UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)" is thrown when downloading dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset("nielsr/XFUN", "xfun.ja")
```
## Expected resu... | 123 | UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)
## Describe the bug
Error "UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)" is thrown when downloading dataset.
## Steps to reproduce the bug
```python
from dat... | [
-0.2249613404, -0.011080306, … (embedding vector truncated)
https://github.com/huggingface/datasets/issues/4099 | UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128) | I opened a PR in the original dataset loading script:
- microsoft/unilm#677
and fixed the corresponding dataset script on the Hub:
- https://huggingface.co/datasets/nielsr/XFUN/commit/73ba5e026621e05fb756ae0f267eb49971f70ebd | ## Describe the bug
Error "UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)" is thrown when downloading dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset("nielsr/XFUN", "xfun.ja")
```
## Expected resu... | 23 | UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)
## Describe the bug
Error "UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)" is thrown when downloading dataset.
## Steps to reproduce the bug
```python
from dat... | [
-0.2249613404, -0.011080306, … (embedding vector truncated)
https://github.com/huggingface/datasets/issues/4099 | UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128) | import sys
sys.getdefaultencoding()
returned: 'utf-8'
---------------------
I've just cloned master branch - your fix works! Thank you! | ## Describe the bug
Error "UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)" is thrown when downloading dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset("nielsr/XFUN", "xfun.ja")
```
## Expected resu... | 17 | UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)
## Describe the bug
Error "UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)" is thrown when downloading dataset.
## Steps to reproduce the bug
```python
from dat... | [
-0.2249613404, -0.011080306, … (embedding vector truncated)
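Independent of the fix to this particular dataset script, the general pattern behind such `UnicodeDecodeError` fixes is to pass an explicit `encoding` to `open` instead of relying on the locale default. A minimal stdlib illustration (the file name and content are made up):

```python
import os
import tempfile

# Hypothetical sample file containing UTF-8 text (e.g. Japanese annotations)
path = os.path.join(tempfile.mkdtemp(), "sample.json")
with open(path, "w", encoding="utf-8") as f:
    f.write('{"text": "日本語"}')

# open() without an explicit encoding falls back to the locale's default,
# which can be ASCII in some environments and then raises UnicodeDecodeError.
# Passing encoding="utf-8" makes the read deterministic everywhere:
with open(path, "r", encoding="utf-8") as f:
    content = f.read()
```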
https://github.com/huggingface/datasets/issues/4096 | Add support for streaming Zarr stores for hosted datasets | Hi @jacobbieker, thanks for your request and study of possible alternatives.
We are very interested in finding a way to make `datasets` useful to you.
Looking at the Zarr docs, I saw that among its storage alternatives, there is the ZIP file format: https://zarr.readthedocs.io/en/stable/api/storage.html#zarr.stor... | **Is your feature request related to a problem? Please describe.**
Lots of geospatial data is stored in the Zarr format. This format works well for n-dimensional data and coordinates, and can have good compression. Unfortunately, HF datasets doesn't support streaming in data in Zarr format as far as I can tell. Zarr s... | 145 | Add support for streaming Zarr stores for hosted datasets
**Is your feature request related to a problem? Please describe.**
Lots of geospatial data is stored in the Zarr format. This format works well for n-dimensional data and coordinates, and can have good compression. Unfortunately, HF datasets doesn't support s... | [
-0.3244537413, -0.1263985485, … (embedding vector truncated)
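A minimal sketch of packing a directory-based store into a ZIP archive with only the standard library, along the lines the comment above suggests. The file names here are made up (a real Zarr store also has `.zarray`/`.zgroup` metadata next to the chunks); storing members uncompressed (`ZIP_STORED`) keeps their byte offsets directly addressable, which is what makes HTTP range-request streaming practical.

```python
import os
import tempfile
import zipfile

def zip_store(store_dir, zip_path):
    """Pack a directory-based store into an *uncompressed* zip archive.

    Stored (uncompressed) members keep stable byte offsets inside the
    archive, so a remote reader can fetch individual chunks with ranges."""
    with zipfile.ZipFile(zip_path, "w", compression=zipfile.ZIP_STORED) as zf:
        for root, _, files in os.walk(store_dir):
            for name in files:
                full = os.path.join(root, name)
                zf.write(full, arcname=os.path.relpath(full, store_dir))

# Toy "store": the chunk file name below only imitates a chunked layout.
store = tempfile.mkdtemp()
with open(os.path.join(store, "0.0"), "wb") as f:
    f.write(b"\x00" * 16)
target = os.path.join(tempfile.mkdtemp(), "store.zarr.zip")
zip_store(store, target)
```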
https://github.com/huggingface/datasets/issues/4096 | Add support for streaming Zarr stores for hosted datasets | Ah okay, I missed the option of zip files for zarr, I'll try that with our repos and see if it works! Thanks a lot! | **Is your feature request related to a problem? Please describe.**
Lots of geospatial data is stored in the Zarr format. This format works well for n-dimensional data and coordinates, and can have good compression. Unfortunately, HF datasets doesn't support streaming in data in Zarr format as far as I can tell. Zarr s... | 25 | Add support for streaming Zarr stores for hosted datasets
**Is your feature request related to a problem? Please describe.**
Lots of geospatial data is stored in the Zarr format. This format works well for n-dimensional data and coordinates, and can have good compression. Unfortunately, HF datasets doesn't support s... | [
-0.3069048822, -0.1402800679, … (embedding vector truncated)
https://github.com/huggingface/datasets/issues/4096 | Add support for streaming Zarr stores for hosted datasets | On behalf of the Zarr developers, let me say THANK YOU for working to support Zarr on HF! 🙏 Zarr is a 100% open-source and community driven project (fiscally sponsored by NumFocus). We see it as an ideal format for ML training datasets, particularly in scientific domains.
I think the solution of zipping the Zarr s... | **Is your feature request related to a problem? Please describe.**
Lots of geospatial data is stored in the Zarr format. This format works well for n-dimensional data and coordinates, and can have good compression. Unfortunately, HF datasets doesn't support streaming in data in Zarr format as far as I can tell. Zarr s... | 105 | Add support for streaming Zarr stores for hosted datasets
**Is your feature request related to a problem? Please describe.**
Lots of geospatial data is stored in the Zarr format. This format works well for n-dimensional data and coordinates, and can have good compression. Unfortunately, HF datasets doesn't support s... | [
-0.2788939178, -0.147772342, … (embedding vector truncated)
https://github.com/huggingface/datasets/issues/4096 | Add support for streaming Zarr stores for hosted datasets | Also just noting here that I was able to lazily open @jacobbieker's dataset over the internet from HF hub 🚀 !
```python
import xarray as xr
url = "https://huggingface.co/datasets/openclimatefix/mrms/resolve/main/data/2016_001.zarr.zip"
zip_url = 'zip:///::' + url
ds = xr.open_dataset(zip_url, engine='zarr', chu... | **Is your feature request related to a problem? Please describe.**
Lots of geospatial data is stored in the Zarr format. This format works well for n-dimensional data and coordinates, and can have good compression. Unfortunately, HF datasets doesn't support streaming in data in Zarr format as far as I can tell. Zarr s... | 44 | Add support for streaming Zarr stores for hosted datasets
**Is your feature request related to a problem? Please describe.**
Lots of geospatial data is stored in the Zarr format. This format works well for n-dimensional data and coordinates, and can have good compression. Unfortunately, HF datasets doesn't support s... | [
-0.3418928981, -0.1032550186, … (embedding vector truncated)
https://github.com/huggingface/datasets/issues/4096 | Add support for streaming Zarr stores for hosted datasets | However, I wasn't able to get streaming working using the Datasets api:
```python
from datasets import load_dataset
ds = load_dataset("openclimatefix/mrms", streaming=True, split='train')
item = next(iter(ds))
```
<details>
<summary>FileNotFoundError traceback</summary>
```
No config specified, defaultin... | **Is your feature request related to a problem? Please describe.**
Lots of geospatial data is stored in the Zarr format. This format works well for n-dimensional data and coordinates, and can have good compression. Unfortunately, HF datasets doesn't support streaming in data in Zarr format as far as I can tell. Zarr s... | 511 | Add support for streaming Zarr stores for hosted datasets
**Is your feature request related to a problem? Please describe.**
Lots of geospatial data is stored in the Zarr format. This format works well for n-dimensional data and coordinates, and can have good compression. Unfortunately, HF datasets doesn't support s... | [
-0.3376382887, -0.1447214186, … (embedding vector truncated)
https://github.com/huggingface/datasets/issues/4096 | Add support for streaming Zarr stores for hosted datasets | I'm still messing around with that dataset, so the data might have moved. I currently have each year of MRMS precipitation rate data as it's own zarr, but as they are quite large (on order of 100GB each) I'm working to split them into single days, and as such they are still being moved around, I was just trying to get ... | **Is your feature request related to a problem? Please describe.**
Lots of geospatial data is stored in the Zarr format. This format works well for n-dimensional data and coordinates, and can have good compression. Unfortunately, HF datasets doesn't support streaming in data in Zarr format as far as I can tell. Zarr s... | 67 | Add support for streaming Zarr stores for hosted datasets
**Is your feature request related to a problem? Please describe.**
Lots of geospatial data is stored in the Zarr format. This format works well for n-dimensional data and coordinates, and can have good compression. Unfortunately, HF datasets doesn't support s... | [
-0.3190587759, -0.1435207874, … (embedding vector truncated)
https://github.com/huggingface/datasets/issues/4096 | Add support for streaming Zarr stores for hosted datasets | I've mostly finished rearranging the data now and uploading some more, so this works now:
```python
import datasets
ds = datasets.load_dataset("openclimatefix/mrms", streaming=True, split="train")
item = next(iter(ds))
print(item.keys())
print(item["timestamp"])
```
The MRMS data now goes most of 2016-2022, w... | **Is your feature request related to a problem? Please describe.**
Lots of geospatial data is stored in the Zarr format. This format works well for n-dimensional data and coordinates, and can have good compression. Unfortunately, HF datasets doesn't support streaming in data in Zarr format as far as I can tell. Zarr s... | 47 | Add support for streaming Zarr stores for hosted datasets
**Is your feature request related to a problem? Please describe.**
Lots of geospatial data is stored in the Zarr format. This format works well for n-dimensional data and coordinates, and can have good compression. Unfortunately, HF datasets doesn't support s... | [
-0.3146088719, -0.1528874487, … (embedding vector truncated)
https://github.com/huggingface/datasets/issues/4093 | elena-soare/crawled-ecommerce: missing dataset | By the way, the error on our part is due to the huge size of every row (~90MB). The dataset viewer does not support such big dataset rows for the moment.
Anyway, we're working to give a hint about this in the dataset viewer. | elena-soare/crawled-ecommerce
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| 44 | elena-soare/crawled-ecommerce: missing dataset
elena-soare/crawled-ecommerce
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
By the way, the error on our part is due to the huge size of every row (~90MB). The dataset viewer does no... | [
-0.1207687259, -0.00286071, … (embedding vector truncated)
https://github.com/huggingface/datasets/issues/4093 | elena-soare/crawled-ecommerce: missing dataset | Fixed. See https://huggingface.co/datasets/elena-soare/crawled-ecommerce/viewer/elena-soare--crawled-ecommerce/train.
<img width="1552" alt="Capture d’écran 2022-04-12 à 11 23 51" src="https://user-images.githubusercontent.com/1676121/162929722-2e2b80e2-154a-4b61-87bd-e341bd6c46e6.png">
Thanks for reporting! | elena-soare/crawled-ecommerce
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| 16 | elena-soare/crawled-ecommerce: missing dataset
elena-soare/crawled-ecommerce
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
Fixed. See https://huggingface.co/datasets/elena-soare/crawled-ecommerce/viewer/elena-soare--crawled-ecomm... | [
-0.1609660685, -0.0299393609, … (embedding vector truncated)
https://github.com/huggingface/datasets/issues/4091 | Build a Dataset One Example at a Time Without Loading All Data Into Memory | Hi! Yes, the problem with `add_item` is that it keeps examples in memory, so you are left with these options:
* writing a dataset loading script in which you iterate over `custom_example_dict_streamer` and yield the examples (in `_generate examples`)
* storing the data in a JSON/CSV/Parquet/TXT file and using `Datase... | **Is your feature request related to a problem? Please describe.**
I have a very large dataset stored on disk in a custom format. I have some custom code that reads one data example at a time and yields it in the form of a dictionary. I want to construct a `Dataset` with all examples, and then save it to disk. I la... | 161 | Build a Dataset One Example at a Time Without Loading All Data Into Memory
**Is your feature request related to a problem? Please describe.**
I have a very large dataset stored on disk in a custom format. I have some custom code that reads one data example at a time and yields it in the form of a dictionary. I wan... | [
-0.4209350944, -0.1163850501, … (embedding vector truncated)
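The second option in the comment above can be prototyped without `datasets` at all: stream the generator straight into a JSON Lines file, then point the `json` loader at the result. A hedged stdlib sketch — the streamer below is an illustrative stand-in for the user's `custom_example_dict_streamer`:

```python
import json
import os
import tempfile

def custom_example_streamer():
    # Stand-in for the user's custom reader: yields one example dict
    # at a time, never materializing the whole dataset in memory.
    for i in range(3):
        yield {"id": i, "text": f"example {i}"}

def write_jsonl(path, examples):
    """Stream examples to a JSON Lines file with constant memory use."""
    with open(path, "w", encoding="utf-8") as f:
        for example in examples:
            f.write(json.dumps(example) + "\n")

jsonl_path = os.path.join(tempfile.mkdtemp(), "data.jsonl")
write_jsonl(jsonl_path, custom_example_streamer())
# The file can then be loaded (or streamed) with, e.g.:
# load_dataset("json", data_files=jsonl_path)
```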
https://github.com/huggingface/datasets/issues/4086 | Dataset viewer issue for McGill-NLP/feedbackQA | Hi @cslizc, thanks for reporting.
I have just forced the refresh of the corresponding cache and the preview is working now. | ## Dataset viewer issue for '*McGill-NLP/feedbackQA*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/McGill-NLP/feedbackQA)*
*short description of the issue*
The dataset can be loaded correctly with `load_dataset` but the preview doesn't work. Error message:
```
Status code: 4... | 21 | Dataset viewer issue for McGill-NLP/feedbackQA
## Dataset viewer issue for '*McGill-NLP/feedbackQA*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/McGill-NLP/feedbackQA)*
*short description of the issue*
The dataset can be loaded correctly with `load_dataset` but the preview does... | [
-0.3954632282, 0.1076467335, … (embedding vector truncated)
https://github.com/huggingface/datasets/issues/4085 | datasets.set_progress_bar_enabled(False) not working in datasets v2 | Now, I can't find any reference to set_progress_bar_enabled in the code.
I think it has been deleted. | ## Describe the bug
datasets.set_progress_bar_enabled(False) not working in datasets v2
## Steps to reproduce the bug
```python
datasets.set_progress_bar_enabled(False)
```
## Expected results
datasets not using any progress bar
## Actual results
AttributeError: module 'datasets' has no attribute 'se... | 17 | datasets.set_progress_bar_enabled(False) not working in datasets v2
## Describe the bug
datasets.set_progress_bar_enabled(False) not working in datasets v2
## Steps to reproduce the bug
```python
datasets.set_progress_bar_enabled(False)
```
## Expected results
datasets not using any progress bar
## Ac... | [
-0.0829407796,
0.201350823,
-0.2333347797,
-0.181510359,
0.2128977478,
0.0463434458,
0.3892688751,
0.2463106215,
-0.2829350829,
0.188040182,
0.1953664273,
0.4442572594,
-0.1529558152,
0.1356378645,
0.0363089293,
0.1370674819,
0.0853452832,
0.4177482128,
0.0124697797,
0.12783683... |
https://github.com/huggingface/datasets/issues/4085 | datasets.set_progress_bar_enabled(False) not working in datasets v2 | Hi @virilo,
Please note that since `datasets` version 2.0.0, we have aligned with `transformers` the management of the progress bar (among other things):
- #3897
Now, you should update your code to use `datasets.logging.disable_progress_bar`.
You have more info in our docs: [Logging methods](https://huggingfa... | ## Describe the bug
datasets.set_progress_bar_enabled(False) not working in datasets v2
## Steps to reproduce the bug
```python
datasets.set_progress_bar_enabled(False)
```
## Expected results
datasets not using any progress bar
## Actual results
AttributeError: module 'datasets' has no attribute 'se... | 43 | datasets.set_progress_bar_enabled(False) not working in datasets v2
## Describe the bug
datasets.set_progress_bar_enabled(False) not working in datasets v2
## Steps to reproduce the bug
```python
datasets.set_progress_bar_enabled(False)
```
## Expected results
datasets not using any progress bar
## Ac... | [
-0.23593162,
-0.0723468736,
-0.0901387483,
-0.1551112086,
0.1876892149,
0.0073150396,
0.3980372846,
0.2445177138,
-0.2226462364,
0.1619218141,
0.1409943253,
0.3994379342,
-0.0901407003,
0.3656436503,
-0.0918397233,
0.0712929592,
0.0047217146,
0.2816918492,
-0.1408428103,
0.1292... |
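The migration described in these rows can be sketched as follows. This is a hedged snippet, not the library's documented example: the `try/except` guard is only there so it also runs in environments where `datasets` is not installed.

```python
# Hedged sketch: in datasets >= 2.0 the old set_progress_bar_enabled(False)
# is gone; the replacement lives in the logging utilities.
try:
    from datasets.utils.logging import disable_progress_bar, enable_progress_bar

    disable_progress_bar()   # silence tqdm bars during load_dataset / map
    enable_progress_bar()    # turn them back on
    helpers_available = True
except ImportError:          # datasets not installed in this environment
    helpers_available = False

print(helpers_available)
```

Calling the helpers at the top of a script, before any dataset is loaded, mirrors what `set_progress_bar_enabled(False)` used to do globally.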
https://github.com/huggingface/datasets/issues/4084 | Errors in `Train with Datasets` Tensorflow code section on Huggingface.co | Hi @blackhat-coder, thanks for reporting.
Please note that the `transformers` library updated their data collators API last year (version 4.10.0):
- huggingface/transformers#13105
now requiring to pass `return_tensors` argument at Data Collator instantiation.
And therefore, we also updated in the `datasets` l... | ## Describe the bug
Hi
### Error 1
Running the TensorFlow code on [Huggingface](https://huggingface.co/docs/datasets/use_dataset) gives a TypeError: __init__() got an unexpected keyword argument 'return_tensors'
### Error 2
`DataCollatorWithPadding` isn't imported
## Steps to reproduce the bug
```python
impo... | 69 | Errors in `Train with Datasets` Tensorflow code section on Huggingface.co
## Describe the bug
Hi
### Error 1
Running the TensorFlow code on [Huggingface](https://huggingface.co/docs/datasets/use_dataset) gives a TypeError: __init__() got an unexpected keyword argument 'return_tensors'
### Error 2
`DataCollatorW... | [
0.0014983554,
-0.3668529093,
0.0487470515,
0.3086224496,
0.306428194,
-0.0805893764,
0.5264433026,
0.197694093,
-0.131404236,
0.198815316,
-0.1122300774,
0.2043118179,
-0.0832389817,
-0.0521473959,
0.0176639147,
-0.1576557904,
0.2189375013,
0.0713736266,
-0.3227119446,
-0.23758... |
https://github.com/huggingface/datasets/issues/4080 | NonMatchingChecksumError for downloading conll2012_ontonotesv5 dataset | Hi @richarddwang,
Indeed, we have recently updated the loading script of that dataset (and fixed that bug as well):
- #4002
That fix will be available in our next `datasets` library release. In the meantime, you can incorporate that fix by:
- installing `datasets` from our GitHub repo:
```bash
pip install g... | ## Steps to reproduce the bug
```python
datasets.load_dataset("conll2012_ontonotesv5", "english_v12")
```
## Actual results
```
Downloading builder script: 32.2kB [00:00, 9.72MB/s]
Downloading metadata: 20.0kB [00:00, 10... | 83 | NonMatchingChecksumError for downloading conll2012_ontonotesv5 dataset
## Steps to reproduce the bug
```python
datasets.load_dataset("conll2012_ontonotesv5", "english_v12")
```
## Actual results
```
Downloading builder script: 32.2kB [00:00, 9.72MB/s] ... | [
-0.2226921618,
0.0298873633,
-0.1225759909,
0.4105227888,
0.4203901887,
-0.0344975702,
0.1825835556,
0.5437704325,
0.0015196162,
0.3724113703,
-0.1495408565,
0.0289296228,
-0.0552448817,
-0.0323211811,
-0.3194609284,
0.2004786134,
-0.1826365739,
0.1168000475,
-0.259441644,
-0.1... |
https://github.com/huggingface/datasets/issues/4075 | Add CCAgT dataset | Awesome ! Let us know if you have questions or if we can help ;) I'm assigning you
PS: if possible, please try to not use Google Drive links in your dataset script, since Google Drive has download quotas and is not always reliable. | ## Adding a Dataset
- **Name:** CCAgT dataset: Images of Cervical Cells with AgNOR Stain Technique
- **Description:** The dataset contains 2540 images (1600x1200 where each pixel is 0.111μm×0.111μm) from three different slides, having at least one nucleus per image. These images are from fields belonging to a sample ... | 44 | Add CCAgT dataset
## Adding a Dataset
- **Name:** CCAgT dataset: Images of Cervical Cells with AgNOR Stain Technique
- **Description:** The dataset contains 2540 images (1600x1200 where each pixel is 0.111μm×0.111μm) from three different slides, having at least one nucleus per image. These images are from fields be... | [
-0.1380248368,
-0.0510347672,
-0.1507763714,
-0.1637343168,
-0.2237448841,
0.0262921527,
0.3314746618,
0.2329936326,
-0.3870379925,
0.2819751799,
-0.1180503592,
-0.1471827924,
-0.0498764142,
0.3397229612,
0.3003416955,
-0.0329681262,
-0.1039718017,
0.2203275412,
-0.0873562768,
... |
https://github.com/huggingface/datasets/issues/4074 | Error in google/xtreme_s dataset card | Hi @wranai, thanks for reporting.
Please note that the information about language families and groups is taken form the original paper: [XTREME-S: Evaluating Cross-lingual Speech Representations](https://arxiv.org/abs/2203.10752).
If that information is wrong, feel free to contact the paper's authors to suggest t... | **Link:** https://huggingface.co/datasets/google/xtreme_s
Not a big deal but Hungarian is considered an Eastern European language, together with Serbian, Slovak, Slovenian (all correctly categorized; Slovenia is mostly to the West of Hungary, by the way).
| 74 | Error in google/xtreme_s dataset card
**Link:** https://huggingface.co/datasets/google/xtreme_s
Not a big deal but Hungarian is considered an Eastern European language, together with Serbian, Slovak, Slovenian (all correctly categorized; Slovenia is mostly to the West of Hungary, by the way).
Hi @wranai, thank... | [
-0.0567339621,
-0.4629551768,
-0.0684647858,
0.0106164282,
0.2992708087,
-0.0419798046,
0.2006966919,
0.0464683659,
0.188306734,
0.0227226075,
-0.1915179193,
0.1476319432,
-0.1994033903,
-0.3060621023,
-0.1459291428,
-0.0106945504,
0.3847017586,
-0.2466863394,
0.3047521114,
-0.... |
https://github.com/huggingface/datasets/issues/4071 | Loading issue for xuyeliu/notebookCDG dataset | Hi @Jun-jie-Huang,
As the error message says, ".pkl" data files are not supported.
If you would like to share your dataset on the Hub, you would need:
- either to create a Python loading script, that loads the data in any format
- or to transform your data files to one of the supported formats (listed in the er... | ## Dataset viewer issue for '*xuyeliu/notebookCDG*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/xuyeliu/notebookCDG)*
*Couldn't load the xuyeliu/notebookCDG with provided scripts: *
```
from datasets import load_dataset
dataset = load_dataset("xuyeliu/notebookCDG/dataset_note... | 103 | Loading issue for xuyeliu/notebookCDG dataset
## Dataset viewer issue for '*xuyeliu/notebookCDG*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/xuyeliu/notebookCDG)*
*Couldn't load the xuyeliu/notebookCDG with provided scripts: *
```
from datasets import load_dataset
dataset ... | [
-0.1085482165,
-0.3377104998,
-0.0297132451,
0.3047091067,
0.3508042395,
0.2135736942,
0.0682771355,
0.2976689935,
0.093271248,
0.049439434,
-0.3122802675,
0.0151163721,
0.0351014175,
0.2110976577,
0.1611018181,
0.1252132803,
0.0795666873,
0.2287419289,
-0.1855393797,
-0.146676... |
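Since `.pkl` is not one of the supported no-script formats, one way out (sketched here with stand-in records, not the real `xuyeliu/notebookCDG` files) is to re-serialize the pickle payload as JSON Lines, which the Hub can load directly:

```python
import json
import pickle

# Stand-in for an unsupported dataset .pkl payload (illustrative data).
records = [{"id": 1, "code": "print('hi')"}, {"id": 2, "code": "x = 1"}]
blob = pickle.dumps(records)

# Round-trip: load the pickle, then write one JSON object per line.
rows = pickle.loads(blob)
jsonl = "\n".join(json.dumps(r) for r in rows)
print(jsonl)
```

In practice the JSON Lines text would be written to a `.jsonl` file and uploaded in place of the pickle.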
https://github.com/huggingface/datasets/issues/4062 | Loading mozilla-foundation/common_voice_7_0 dataset failed | Hi @aapot, thanks for reporting.
We are investigating the cause of this issue. We will keep you informed. | ## Describe the bug
I wanted to load `mozilla-foundation/common_voice_7_0` dataset with `fi` language and `test` split from datasets on Colab/Kaggle notebook, but I am getting an error `JSONDecodeError: [Errno Expecting value] Not Found: 0` while loading it. The bug seems to affect other languages and splits too than ... | 18 | Loading mozilla-foundation/common_voice_7_0 dataset failed
## Describe the bug
I wanted to load `mozilla-foundation/common_voice_7_0` dataset with `fi` language and `test` split from datasets on Colab/Kaggle notebook, but I am getting an error `JSONDecodeError: [Errno Expecting value] Not Found: 0` while loading it.... | [
-0.17519629,
-0.1618486941,
-0.0165102147,
0.484456867,
0.1696994454,
-0.1695972383,
0.3660569489,
0.2703646719,
0.0957671702,
0.2007128745,
-0.2335407734,
0.7396367788,
0.1339332759,
-0.0338983499,
-0.498156786,
-0.014100668,
-0.0565661527,
0.2537570894,
0.5223573446,
-0.07576... |
https://github.com/huggingface/datasets/issues/4062 | Loading mozilla-foundation/common_voice_7_0 dataset failed | When making HTTP request from code line:
```
response = requests.get(f"{_API_URL}/bucket/dataset/{path}/{use_cdn}", timeout=10.0).json()
```
it cannot be decoded to JSON because it raises a 404 Not Found error.
The request is fixed if removing the `/{use_cdn}` from the URL.
Maybe there was a change in the Com... | ## Describe the bug
I wanted to load `mozilla-foundation/common_voice_7_0` dataset with `fi` language and `test` split from datasets on Colab/Kaggle notebook, but I am getting an error `JSONDecodeError: [Errno Expecting value] Not Found: 0` while loading it. The bug seems to affect other languages and splits too than ... | 52 | Loading mozilla-foundation/common_voice_7_0 dataset failed
## Describe the bug
I wanted to load `mozilla-foundation/common_voice_7_0` dataset with `fi` language and `test` split from datasets on Colab/Kaggle notebook, but I am getting an error `JSONDecodeError: [Errno Expecting value] Not Found: 0` while loading it.... | [
-0.17519629,
-0.1618486941,
-0.0165102147,
0.484456867,
0.1696994454,
-0.1695972383,
0.3660569489,
0.2703646719,
0.0957671702,
0.2007128745,
-0.2335407734,
0.7396367788,
0.1339332759,
-0.0338983499,
-0.498156786,
-0.014100668,
-0.0565661527,
0.2537570894,
0.5223573446,
-0.07576... |
https://github.com/huggingface/datasets/issues/4062 | Loading mozilla-foundation/common_voice_7_0 dataset failed | I have also made the hotfix for all the rest of Common Voice script versions: 8.0, 6.1, 6.0,..., 1.0 | ## Describe the bug
I wanted to load `mozilla-foundation/common_voice_7_0` dataset with `fi` language and `test` split from datasets on Colab/Kaggle notebook, but I am getting an error `JSONDecodeError: [Errno Expecting value] Not Found: 0` while loading it. The bug seems to affect other languages and splits too than ... | 19 | Loading mozilla-foundation/common_voice_7_0 dataset failed
## Describe the bug
I wanted to load `mozilla-foundation/common_voice_7_0` dataset with `fi` language and `test` split from datasets on Colab/Kaggle notebook, but I am getting an error `JSONDecodeError: [Errno Expecting value] Not Found: 0` while loading it.... | [
-0.17519629,
-0.1618486941,
-0.0165102147,
0.484456867,
0.1696994454,
-0.1695972383,
0.3660569489,
0.2703646719,
0.0957671702,
0.2007128745,
-0.2335407734,
0.7396367788,
0.1339332759,
-0.0338983499,
-0.498156786,
-0.014100668,
-0.0565661527,
0.2537570894,
0.5223573446,
-0.07576... |
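The traceback quoted in these rows is the generic failure mode when an endpoint answers 404 with a non-JSON body: `response.json()` hands the text to the JSON decoder, which fails with "Expecting value". A minimal stdlib illustration of the same decode error, with no network involved:

```python
import json

# A 404 response usually carries an HTML or plain-text body, not JSON.
body = "Not Found"

try:
    payload = json.loads(body)
except json.JSONDecodeError:
    # json raises "Expecting value" here -- the same text that shows up
    # in the issue's JSONDecodeError message.
    payload = None

print(payload)
```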
https://github.com/huggingface/datasets/issues/4061 | Loading cnn_dailymail dataset failed | Hi @Arij-Aladel, thanks for reporting.
This issue was already reported
- #3784
and its root cause is a change in the Google Drive service. See:
- #3786
We have already fixed it in our 2.0.0 release. See:
- #3787
Please, update your `datasets` version:
```
pip install -U datasets
```
and retry load... | ## Describe the bug
I wanted to load the cnn_dailymail dataset from huggingface datasets on jupyter lab, but I am getting an error ` NotADirectoryError: [Errno 20] Not a directory ` while loading it.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('cnn_dailymail', '3.0.... | 66 | Loading cnn_dailymail dataset failed
## Describe the bug
I wanted to load the cnn_dailymail dataset from huggingface datasets on jupyter lab, but I am getting an error ` NotADirectoryError: [Errno 20] Not a directory ` while loading it.
## Steps to reproduce the bug
```python
from datasets import load_dataset
datase... | [
0.0792619735,
0.0476147495,
0.0661631152,
0.2864684165,
0.2174385339,
0.0220448487,
0.6165331602,
-0.0665946901,
0.0802651271,
0.3715786338,
-0.187194854,
0.0376833156,
-0.3215862811,
0.144794479,
-0.0683642775,
-0.055460494,
0.0013137509,
0.0096665099,
0.0716198385,
-0.0511580... |
https://github.com/huggingface/datasets/issues/4057 | `load_dataset` consumes too much memory | Hi ! Could it be because you need to free the memory used by `tarfile` by emptying the tar `members` by any chance ?
```python
yield key, {"audio": {"path": audio_name, "bytes": audio_file_obj.read()}}
audio_tarfile.members = [] # free memory
key += 1
```
and then you can set `DEFAULT_W... |
## Description
`load_dataset` consumes more and more memory until it's killed, even though it's made with a generator. I'm adding a loading script for a new dataset, made up of ~15s audio coming from a tar file. Tried setting `DEFAULT_WRITER_BATCH_SIZE = 1` as per the discussion in #741 but the problem persists.
... | 85 | `load_dataset` consumes too much memory
## Description
`load_dataset` consumes more and more memory until it's killed, even though it's made with a generator. I'm adding a loading script for a new dataset, made up of ~15s audio coming from a tar file. Tried setting `DEFAULT_WRITER_BATCH_SIZE = 1` as per the disc... | [
-0.5270071626,
0.1517996639,
0.0852088183,
0.5047675967,
0.3265749812,
0.0132140256,
0.1932862401,
0.1147883758,
0.0141111249,
0.2565394938,
0.1888922006,
0.3404781222,
-0.3148767948,
-0.149151355,
0.2170335501,
-0.2428890318,
0.0671121553,
0.0240160543,
-0.0828772336,
0.167154... |
https://github.com/huggingface/datasets/issues/4057 | `load_dataset` consumes too much memory | I also run out of memory when loading `mozilla-foundation/common_voice_8_0` that also uses `tarfile` via `dl_manager.iter_archive`. There seem to be some data files that stay in memory somewhere
I don't have the issue with other compression formats like gzipped files |
## Description
`load_dataset` consumes more and more memory until it's killed, even though it's made with a generator. I'm adding a loading script for a new dataset, made up of ~15s audio coming from a tar file. Tried setting `DEFAULT_WRITER_BATCH_SIZE = 1` as per the discussion in #741 but the problem persists.
... | 39 | `load_dataset` consumes too much memory
## Description
`load_dataset` consumes more and more memory until it's killed, even though it's made with a generator. I'm adding a loading script for a new dataset, made up of ~15s audio coming from a tar file. Tried setting `DEFAULT_WRITER_BATCH_SIZE = 1` as per the disc... | [
-0.5270071626,
0.1517996639,
0.0852088183,
0.5047675967,
0.3265749812,
0.0132140256,
0.1932862401,
0.1147883758,
0.0141111249,
0.2565394938,
0.1888922006,
0.3404781222,
-0.3148767948,
-0.149151355,
0.2170335501,
-0.2428890318,
0.0671121553,
0.0240160543,
-0.0828772336,
0.167154... |
https://github.com/huggingface/datasets/issues/4057 | `load_dataset` consumes too much memory | I'm facing a similar memory leak issue when loading cv8. As you said @lhoestq
`load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token=True, writer_batch_size=1)`
This issue is happening on a 32GB RAM machine.
Any updates on how to fix this? |
## Description
`load_dataset` consumes more and more memory until it's killed, even though it's made with a generator. I'm adding a loading script for a new dataset, made up of ~15s audio coming from a tar file. Tried setting `DEFAULT_WRITER_BATCH_SIZE = 1` as per the discussion in #741 but the problem persists.
... | 34 | `load_dataset` consumes too much memory
## Description
`load_dataset` consumes more and more memory until it's killed, even though it's made with a generator. I'm adding a loading script for a new dataset, made up of ~15s audio coming from a tar file. Tried setting `DEFAULT_WRITER_BATCH_SIZE = 1` as per the disc... | [
-0.5270071626,
0.1517996639,
0.0852088183,
0.5047675967,
0.3265749812,
0.0132140256,
0.1932862401,
0.1147883758,
0.0141111249,
0.2565394938,
0.1888922006,
0.3404781222,
-0.3148767948,
-0.149151355,
0.2170335501,
-0.2428890318,
0.0671121553,
0.0240160543,
-0.0828772336,
0.167154... |
https://github.com/huggingface/datasets/issues/4057 | `load_dataset` consumes too much memory | I've run a memory profiler to see where's the leak comes from:

... it seems that it's related to the tarfile lib buffer reader. But I don't know why it's only happening on the huggingface script |
## Description
`load_dataset` consumes more and more memory until it's killed, even though it's made with a generator. I'm adding a loading script for a new dataset, made up of ~15s audio coming from a tar file. Tried setting `DEFAULT_WRITER_BATCH_SIZE = 1` as per the discussion in #741 but the problem persists.
... | 37 | `load_dataset` consumes too much memory
## Description
`load_dataset` consumes more and more memory until it's killed, even though it's made with a generator. I'm adding a loading script for a new dataset, made up of ~15s audio coming from a tar file. Tried setting `DEFAULT_WRITER_BATCH_SIZE = 1` as per the disc... | [
-0.5270071626,
0.1517996639,
0.0852088183,
0.5047675967,
0.3265749812,
0.0132140256,
0.1932862401,
0.1147883758,
0.0141111249,
0.2565394938,
0.1888922006,
0.3404781222,
-0.3148767948,
-0.149151355,
0.2170335501,
-0.2428890318,
0.0671121553,
0.0240160543,
-0.0828772336,
0.167154... |
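The `audio_tarfile.members = []` trick discussed in these rows works because `TarFile` keeps every `TarInfo` it has seen in a growing `members` list; clearing it while streaming keeps memory flat. A self-contained sketch with a tiny in-memory tar (fake audio bytes, not a real dataset archive):

```python
import io
import tarfile

# Build a tiny in-memory tar with two fake audio files.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name in ("a.wav", "b.wav"):
        data = b"\x00" * 16
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))
buf.seek(0)

# Stream members one at a time, clearing the cached TarInfo list as we go.
sizes = {}
with tarfile.open(fileobj=buf, mode="r") as tar:
    while True:
        member = tar.next()
        if member is None:
            break
        sizes[member.name] = len(tar.extractfile(member).read())
        tar.members = []  # drop the cache so it never grows

print(sizes)
```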
https://github.com/huggingface/datasets/issues/4056 | Unexpected behavior of _TempDirWithCustomCleanup | Hi ! Would setting TMPDIR at the beginning of your python script/session work ? I mean, even before importing transformers, datasets, etc. and using them ? I think this would be the most robust solution given any library that uses `tempfile`. I don't think we aim to support environment variables to be changed at run ti... | ## Describe the bug
This is not 100% a bug in `datasets`, but behavior that surprised me and I think this could be made more robust on the `datasets`side.
When using `datasets.disable_caching()`, cache files are written to a temporary directory. This directory should be based on the environment variable TMPDIR. I ... | 56 | Unexpected behavior of _TempDirWithCustomCleanup
## Describe the bug
This is not 100% a bug in `datasets`, but behavior that surprised me and I think this could be made more robust on the `datasets`side.
When using `datasets.disable_caching()`, cache files are written to a temporary directory. This directory sho... | [
-0.1382411718,
0.0732321143,
0.0444734991,
0.1359774321,
0.3479242325,
-0.1434310377,
0.3700523078,
0.0896510184,
0.1531162858,
0.0932117403,
0.0671044961,
0.2713250816,
-0.2865733206,
-0.1255628169,
0.0599307828,
0.0733479559,
0.0170448963,
0.0981921852,
-0.044797428,
0.077652... |
https://github.com/huggingface/datasets/issues/4056 | Unexpected behavior of _TempDirWithCustomCleanup | Hi, yeah setting the environment variable before the imports / as environment variable outside is another way to fix this. I am just arguing that `datasets` already uses its own global variable to track temporary files: `_TEMP_DIR_FOR_TEMP_CACHE_FILES`, and the creation of this global variable should respect TMPDIR ins... | ## Describe the bug
This is not 100% a bug in `datasets`, but behavior that surprised me and I think this could be made more robust on the `datasets`side.
When using `datasets.disable_caching()`, cache files are written to a temporary directory. This directory should be based on the environment variable TMPDIR. I ... | 55 | Unexpected behavior of _TempDirWithCustomCleanup
## Describe the bug
This is not 100% a bug in `datasets`, but behavior that surprised me and I think this could be made more robust on the `datasets`side.
When using `datasets.disable_caching()`, cache files are written to a temporary directory. This directory sho... | [
-0.1449772716,
0.1017516702,
0.0424512066,
0.218184799,
0.3738672435,
-0.1418617368,
0.3822785318,
0.0815746412,
0.1883887798,
0.1180614084,
0.0584380887,
0.3046331406,
-0.2868727744,
-0.141238451,
0.0635020658,
0.1509432048,
0.093168132,
0.1052301526,
-0.0392923579,
0.04964935... |
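The root of the surprise is in the standard library itself: `tempfile` resolves `TMPDIR` only once, on the first `gettempdir()` call, and caches the answer in `tempfile.tempdir`. Any library that touches `tempfile` early therefore freezes the choice, which is why setting the variable at run time is ignored:

```python
import os
import tempfile

first = tempfile.gettempdir()            # resolves TMPDIR and caches it

saved = os.environ.get("TMPDIR")
os.environ["TMPDIR"] = "/some/other/tmp"
cached = tempfile.gettempdir()           # env change ignored: cache wins

# Restore the environment and the cache so nothing leaks out of the demo.
if saved is None:
    del os.environ["TMPDIR"]
else:
    os.environ["TMPDIR"] = saved
tempfile.tempdir = None

print(cached == first)
```

Setting `tempfile.tempdir = None` forces the next `gettempdir()` to re-read the environment, which is the kind of reset the issue is asking `datasets` to perform (or avoid depending on).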
https://github.com/huggingface/datasets/issues/4052 | metric = metric_cls( TypeError: 'NoneType' object is not callable | Hi @klyuhang9,
I'm sorry but I can't reproduce your problem:
```python
In [2]: metric = load_metric('glue', 'rte')
Downloading builder script: 5.76kB [00:00, 2.40MB/s]
```
Could you please, retry to load the metric? Sometimes there are temporary connectivity issues.
Feel free to re-open this issue of the p... | Hi, friend. I meet a problem.
When I run the code:
`metric = load_metric('glue', 'rte')`
There is a problem raised:
`metric = metric_cls(
TypeError: 'NoneType' object is not callable `
I don't know why. Thanks for your help!
| 48 | metric = metric_cls( TypeError: 'NoneType' object is not callable
Hi, friend. I meet a problem.
When I run the code:
`metric = load_metric('glue', 'rte')`
There is a problem raised:
`metric = metric_cls(
TypeError: 'NoneType' object is not callable `
I don't know why. Thanks for your help!
Hi @klyuha... | [
-0.3547110558,
-0.285168469,
0.0674027652,
0.4340596199,
0.5599153042,
-0.1259054989,
0.1781918555,
0.0732804239,
0.5064229369,
0.388438046,
-0.0952099413,
-0.0343400426,
-0.2323544025,
-0.0094401091,
0.013013727,
-0.2152413577,
-0.3156155944,
0.208661288,
-0.1466532797,
0.0227... |
https://github.com/huggingface/datasets/issues/4051 | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py | Hi @klyuhang9,
I'm sorry but I can't reproduce your problem:
```python
In [4]: ds = load_dataset("glue", "sst2", download_mode="force_redownload")
Downloading builder script: 28.8kB [00:00, 9.15MB/s] ... | Hi, I meet a problem.
When I run the code:
`dataset = load_dataset('glue','sst2')`
There is an issue raised:
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py
I don't know why; it is ok when I use Google Chrome to view this url.
Thanks for your... | 127 | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py
Hi, I meet a problem.
When I run the code:
`dataset = load_dataset('glue','sst2')`
There is an issue raised:
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2... | [
-0.1783030629,
-0.2639328241,
-0.0785378516,
0.255351305,
0.4212648869,
-0.1310216933,
0.0855867043,
0.1723617762,
0.1153524145,
0.0744840801,
-0.1830375046,
-0.2903265059,
0.2351995707,
0.2608192265,
0.0308783352,
-0.0604974516,
-0.2182005495,
-0.0651673451,
-0.2639884651,
0.2... |
https://github.com/huggingface/datasets/issues/4051 | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py | > Are you able to access the URL in your web browser?
Yes, with or without a VPN, we (people in China) can access the URL. And we can even use wget to download these files. We can download the pretrained language model automatically with the code.
However, we CANNOT access glue.py & metric.py automatically. Every t... | Hi, I meet a problem.
When I run the code:
`dataset = load_dataset('glue','sst2')`
There is an issue raised:
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py
I don't know why; it is ok when I use Google Chrome to view this url.
Thanks for your... | 92 | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py
Hi, I meet a problem.
When I run the code:
`dataset = load_dataset('glue','sst2')`
There is an issue raised:
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2... | [
-0.0887089148,
-0.0862539336,
-0.0573623367,
0.1572923809,
0.4484336078,
-0.1821688861,
0.1712555885,
0.0600959547,
0.0482329875,
0.31027475,
-0.3809802532,
-0.1139330268,
0.2884299755,
0.0781208426,
0.2631626725,
-0.2650811374,
-0.0775150433,
-0.1534373313,
-0.0863932297,
0.10... |
https://github.com/huggingface/datasets/issues/4051 | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py | > ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py
>
> I don't know why; it is ok when I use
If you would query the question `ConnectionError: Couldn't reach` in www.baidu.com (Chinese Google, Google is banned and some people cannot access it), you... | Hi, I meet a problem.
When I run the code:
`dataset = load_dataset('glue','sst2')`
There is an issue raised:
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py
I don't know why; it is ok when I use Google Chrome to view this url.
Thanks for your... | 71 | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py
Hi, I meet a problem.
When I run the code:
`dataset = load_dataset('glue','sst2')`
There is an issue raised:
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2... | [
-0.1205978841,
-0.1314036846,
-0.0964849815,
0.1441070139,
0.3036913276,
-0.2433213741,
0.3088020384,
0.0240532774,
0.0838955417,
0.2139024884,
-0.1504199207,
-0.4797526002,
0.3455486298,
0.156710878,
0.2045820206,
-0.1082189977,
-0.0511952899,
-0.2484922856,
-0.0435467549,
0.2... |
https://github.com/huggingface/datasets/issues/4048 | Split size error on `amazon_us_reviews` / `PC_v1_00` dataset | Follow-up: I have confirmed there are no duplicate lines via `sort amazon_reviews_us_PC_v1_00.tsv | uniq -cd` after extracting the raw file. | ## Describe the bug
When downloading this subset as of 3-28-2022 you will encounter a split size error after the dataset is extracted. The extracted dataset has roughly ~6m rows while the split expects <1m.
Upon digging a little deeper, I downloaded the raw files from `https://s3.amazonaws.com/amazon-reviews-pds/t... | 20 | Split size error on `amazon_us_reviews` / `PC_v1_00` dataset
## Describe the bug
When downloading this subset as of 3-28-2022 you will encounter a split size error after the dataset is extracted. The extracted dataset has roughly ~6m rows while the split expects <1m.
Upon digging a little deeper, I downloaded th... | [
-0.2346806824,
-0.2431967854,
-0.0588007942,
0.3447958529,
0.1111236513,
0.1701889783,
0.2422357649,
0.3307168484,
-0.1298684776,
-0.1044707671,
-0.0315656923,
-0.0911954492,
0.0494157672,
0.3862031996,
-0.1741270274,
-0.2562993765,
-0.033623632,
-0.0654815286,
0.0936895758,
0.... |
https://github.com/huggingface/datasets/issues/4048 | Split size error on `amazon_us_reviews` / `PC_v1_00` dataset | Hi @trentonstrong, thanks for reporting!
I confirm that loading this dataset configuration throws a `NonMatchingSplitsSizesError`:
```
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=350242049, num_examples=785730, dataset_name='amazon_us_reviews'), 'recorded': SplitInfo(name='train', ... | ## Describe the bug
When downloading this subset as of 3-28-2022 you will encounter a split size error after the dataset is extracted. The extracted dataset has roughly ~6m rows while the split expects <1m.
Upon digging a little deeper, I downloaded the raw files from `https://s3.amazonaws.com/amazon-reviews-pds/t... | 80 | Split size error on `amazon_us_reviews` / `PC_v1_00` dataset
## Describe the bug
When downloading this subset as of 3-28-2022 you will encounter a split size error after the dataset is extracted. The extracted dataset has roughly ~6m rows while the split expects <1m.
Upon digging a little deeper, I downloaded th... | [
-0.2926377356,
-0.205703944,
-0.0327636302,
0.3231838048,
0.1312947273,
0.1685381085,
0.2297412753,
0.326946497,
-0.1544476748,
-0.0512351878,
-0.0219043437,
-0.0902590007,
-0.0226249397,
0.3637925088,
-0.2130923122,
-0.2291255593,
-0.0529080331,
-0.0600396022,
0.0281890929,
0.... |
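The `sort file.tsv | uniq -cd` check used in these rows has a direct pure-Python equivalent, handy when the file is large or the coreutils pipeline is unavailable:

```python
from collections import Counter

# Count every line and keep only those appearing more than once --
# the same answer `sort | uniq -cd` gives, without needing a sort.
lines = ["a\t1", "b\t2", "a\t1", "c\t3"]
dupes = {line: n for line, n in Counter(lines).items() if n > 1}
print(dupes)
```

An empty `dupes` dict corresponds to `uniq -cd` printing nothing, i.e. no duplicate rows.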
https://github.com/huggingface/datasets/issues/4047 | Dataset.unique(column: str) -> ArrowNotImplementedError | Hi @orkenstein, thanks for reporting.
Please note that for this case, our `datasets` library uses under the hood the Apache Arrow `unique` function: https://arrow.apache.org/docs/python/generated/pyarrow.compute.unique.html#pyarrow.compute.unique
And currently the Apache Arrow `unique` function is only implemente... | ## Describe the bug
I'm trying to use `unique()` function, but it fails
## Steps to reproduce the bug
1. Get dataset
2. Call `unique`
3. Error
# Sample code to reproduce the bug
```python
!pip show datasets
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en')
dataset['train'].col... | 77 | Dataset.unique(column: str) -> ArrowNotImplementedError
## Describe the bug
I'm trying to use `unique()` function, but it fails
## Steps to reproduce the bug
1. Get dataset
2. Call `unique`
3. Error
# Sample code to reproduce the bug
```python
!pip show datasets
from datasets import load_dataset
datas... | [
0.0132888332,
-0.3006761074,
0.0046368022,
0.2457811236,
0.4343053401,
-0.0023811236,
0.4016603827,
0.0470401198,
-0.309460938,
0.2648711205,
0.143573314,
0.8505437374,
-0.2576568127,
-0.2999766469,
0.2220703661,
-0.0687537193,
-0.014586973,
0.3260400891,
-0.0370167755,
-0.2393... |
https://github.com/huggingface/datasets/issues/4047 | Dataset.unique(column: str) -> ArrowNotImplementedError | As a workaround solution you can use pandas:
```python
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en', split='train')
df = dataset.to_pandas()
unique_df = df[~df.tokens.apply(tuple).duplicated()] # from https://stackoverflow.com/a/46958336/17517845
```
Note that pandas loads the da... | ## Describe the bug
I'm trying to use `unique()` function, but it fails
## Steps to reproduce the bug
1. Get dataset
2. Call `unique`
3. Error
# Sample code to reproduce the bug
```python
!pip show datasets
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en')
dataset['train'].col... | 43 | Dataset.unique(column: str) -> ArrowNotImplementedError
## Describe the bug
I'm trying to use `unique()` function, but it fails
## Steps to reproduce the bug
1. Get dataset
2. Call `unique`
3. Error
# Sample code to reproduce the bug
```python
!pip show datasets
from datasets import load_dataset
datas... | [
0.0132888332,
-0.3006761074,
0.0046368022,
0.2457811236,
0.4343053401,
-0.0023811236,
0.4016603827,
0.0470401198,
-0.309460938,
0.2648711205,
0.143573314,
0.8505437374,
-0.2576568127,
-0.2999766469,
0.2220703661,
-0.0687537193,
-0.014586973,
0.3260400891,
-0.0370167755,
-0.2393... |
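Besides the pandas workaround quoted above, a dependency-free fallback for Arrow's missing list-type `unique` kernel is to hash each row as a tuple and keep first occurrences. Sketched on toy token lists, not the real `wikiann` column:

```python
# Order-preserving de-duplication of a list-valued column.
rows = [["New", "York"], ["Paris"], ["New", "York"], ["Tokyo"]]

seen = set()
unique_rows = []
for tokens in rows:
    key = tuple(tokens)          # lists aren't hashable; tuples are
    if key not in seen:
        seen.add(key)
        unique_rows.append(tokens)

print(unique_rows)
```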
https://github.com/huggingface/datasets/issues/4041 | Add support for IIIF in datasets | Hi! Thanks for the detailed analysis of adding IIIF support. I like the idea of "using IIIF through datasets scripts" due to its ease of use. Another approach that I like is yielding image ids and using the `piffle` library (which offers a bit more flexibility) + `map` to download + cache images. We can handle bad URLs... | This is a feature request for support for IIIF in `datasets`. Apologies for the long issue. I have also used a different format to the usual feature request since I think that makes more sense but happy to use the standard template if preferred.
## What is [IIIF](https://iiif.io/)?
IIIF (International Image Inte... | 145 | Add support for IIIF in datasets
This is a feature request for support for IIIF in `datasets`. Apologies for the long issue. I have also used a different format to the usual feature request since I think that makes more sense but happy to use the standard template if preferred.
## What is [IIIF](https://iiif.io/... | [
-0.1007219777,
0.1306869388,
-0.2485406399,
0.0732913092,
0.0615220107,
-0.0613504313,
0.3199918866,
0.249089241,
-0.0972456187,
0.2104076147,
0.2102556974,
0.3343870044,
-0.0445597731,
0.1891069412,
-0.2587096095,
-0.1272345036,
0.0565985963,
0.2181202173,
0.2407312393,
0.1161... |
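For context on the IIIF request described above: the IIIF Image API addresses images through a templated URL of the form `{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}`. A minimal stdlib sketch of building such URLs (the base endpoint below is a placeholder, not a real server):

```python
def iiif_image_url(base, identifier, region="full", size="max",
                   rotation=0, quality="default", fmt="jpg"):
    """Build a IIIF Image API request URL from its path components."""
    return f"{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"
```

Changing `size` (e.g. `"!512,512"`) is how a loading script could request thumbnails instead of full-resolution files.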
https://github.com/huggingface/datasets/issues/4040 | Calling existing metrics from other metrics | That's definitely the way to go to avoid implementation bugs and making sure we can fix issues globally when we detect them in a metric. Thanks for reporting! | There are several cases of metrics calling other metrics, e.g. [Wiki Split](https://huggingface.co/metrics/wiki_split) which calls [BLEU](https://huggingface.co/metrics/bleu) and [SARI](https://huggingface.co/metrics/sari). These are all currently re-implemented each time (often with external code).
A potentially mo... | 28 | Calling existing metrics from other metrics
There are several cases of metrics calling other metrics, e.g. [Wiki Split](https://huggingface.co/metrics/wiki_split) which calls [BLEU](https://huggingface.co/metrics/bleu) and [SARI](https://huggingface.co/metrics/sari). These are all currently re-implemented each time (... | [
-0.0992335007,
-0.3176507056,
-0.0027762309,
0.2402402461,
0.1573642194,
-0.1997049898,
0.2155405879,
0.3818409741,
0.3093086183,
0.3482067585,
-0.2740235031,
0.152377069,
0.1712296903,
0.1119225174,
0.0648053437,
0.0045856629,
-0.1488883495,
-0.0911815464,
-0.0938948095,
0.190... |
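The composition pattern discussed here is plain delegation: the wrapper metric calls shared implementations rather than re-implementing them. A toy sketch — the two inner metrics are stand-ins for BLEU and SARI, not the real ones:

```python
def exact_match(preds, refs):
    """Toy stand-in for one shared metric implementation."""
    return sum(p == r for p, r in zip(preds, refs)) / len(preds)

def mean_length_ratio(preds, refs):
    """Toy stand-in for a second shared metric."""
    return sum(len(p) / max(len(r), 1) for p, r in zip(preds, refs)) / len(preds)

def composite_metric(preds, refs):
    """A wiki_split-style metric that delegates to shared metrics
    instead of carrying its own copies of them."""
    return {
        "exact_match": exact_match(preds, refs),
        "length_ratio": mean_length_ratio(preds, refs),
    }
```

A fix to either inner metric then propagates to every composite that uses it.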
https://github.com/huggingface/datasets/issues/4037 | Error while building documentation | After some investigation, maybe the bug is in `doc-builder`.
I've opened an issue there:
- huggingface/doc-builder#160 | ## Describe the bug
Documentation building is failing:
- https://github.com/huggingface/datasets/runs/5716300989?check_suite_focus=true
```
ValueError: There was an error when converting ../datasets/docs/source/package_reference/main_classes.mdx to the MDX format.
Unable to find datasets.filesystems.S3FileSystem... | 16 | Error while building documentation
## Describe the bug
Documentation building is failing:
- https://github.com/huggingface/datasets/runs/5716300989?check_suite_focus=true
```
ValueError: There was an error when converting ../datasets/docs/source/package_reference/main_classes.mdx to the MDX format.
Unable to f... | [
-0.2004220933,
-0.2878024876,
0.044751592,
0.3931261599,
0.2458143383,
0.2145320177,
0.0947938785,
0.3291164339,
-0.2940780222,
0.1139430776,
-0.0294580385,
0.3712480068,
0.0195837785,
0.1452521086,
0.222070381,
0.0811928436,
0.1843611747,
0.2558033168,
-0.1485099941,
0.0294798... |
https://github.com/huggingface/datasets/issues/4031 | Cannot load the dataset conll2012_ontonotesv5 | Hi @cathyxl, thanks for reporting.
Indeed, we have recently updated the loading script of that dataset (and fixed that bug as well):
- #4002
That fix will be available in our next `datasets` library release. In the meantime, you can incorporate that fix by:
- installing `datasets` from our GitHub repo:
```bash... | ## Describe the bug
Cannot load the dataset conll2012_ontonotesv5
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
dataset = load_dataset('conll2012_ontonotesv5', 'english_v4', split="test")
print(dataset)
```
## Expected results
The datasets s... | 82 | Cannot load the dataset conll2012_ontonotesv5
## Describe the bug
Cannot load the dataset conll2012_ontonotesv5
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
dataset = load_dataset('conll2012_ontonotesv5', 'english_v4', split="test")
print(datase... | [
-0.36119169,
0.2047729641,
-0.1006570682,
0.3793435097,
0.3481520414,
0.1764991432,
0.1972295791,
0.2790656388,
0.0689822882,
0.0948170722,
-0.0786364004,
0.1520210207,
0.087703757,
0.0457675606,
-0.1818695366,
0.2106245905,
-0.1176593527,
-0.0729893297,
-0.2436235398,
-0.00738... |
https://github.com/huggingface/datasets/issues/4029 | Add FAISS .range_search() method for retrieving all texts from dataset above similarity threshold | Hi ! You can access the faiss index with
```python
faiss_index = my_dataset.get_index("my_index_name").faiss_index
```
and then do whatever you want with it, e.g. query it using range_search:
```python
threshold = 0.95
limits, distances, indices = faiss_index.range_search(x=xq, thresh=threshold)
texts = datas... | **Is your feature request related to a problem? Please describe.**
I would like to retrieve all texts from a dataset, which are semantically similar to a specific input text (query), above a certain (cosine) similarity threshold. My dataset is very large (Wikipedia), so I need to use Datasets and FAISS for this. I wou... | 41 | Add FAISS .range_search() method for retrieving all texts from dataset above similarity threshold
**Is your feature request related to a problem? Please describe.**
I would like to retrieve all texts from a dataset, which are semantically similar to a specific input text (query), above a certain (cosine) similarity ... | [
-0.2661004364,
-0.4004480243,
-0.0754508525,
-0.1700696051,
-0.1645520478,
-0.1373969018,
-0.1056541055,
0.41735816,
0.2868964672,
0.103979528,
-0.6091675758,
-0.2352726609,
0.0454713143,
-0.0635443032,
-0.0842217878,
-0.2311537415,
0.0668498427,
-0.0359133035,
0.0118583888,
0.... |
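Conceptually, `range_search` returns every stored vector whose score against the query clears the threshold. A pure-Python sketch of that behavior (inner-product scoring, no faiss dependency — faiss adds the index acceleration on top of exactly this):

```python
def range_search(query, vectors, threshold):
    """Return (index, score) pairs for all vectors scoring >= threshold
    against the query, best match first."""
    hits = []
    for i, vec in enumerate(vectors):
        score = sum(q * v for q, v in zip(query, vec))  # inner product
        if score >= threshold:
            hits.append((i, score))
    return sorted(hits, key=lambda pair: -pair[1])
```

With normalized vectors the inner product equals cosine similarity, so the threshold can be read as a cosine cutoff.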
https://github.com/huggingface/datasets/issues/4029 | Add FAISS .range_search() method for retrieving all texts from dataset above similarity threshold | wow, that's great, thank you for the explanation. (if that's not already in the documentation, could be worth adding it)
which type of faiss index is Datasets using? I looked into faiss recently and I understand that there are several different types of indexes and the choice is important, e.g. regarding which dista... | **Is your feature request related to a problem? Please describe.**
I would like to retrieve all texts from a dataset, which are semantically similar to a specific input text (query), above a certain (cosine) similarity threshold. My dataset is very large (Wikipedia), so I need to use Datasets and FAISS for this. I wou... | 76 | Add FAISS .range_search() method for retrieving all texts from dataset above similarity threshold
**Is your feature request related to a problem? Please describe.**
I would like to retrieve all texts from a dataset, which are semantically similar to a specific input text (query), above a certain (cosine) similarity ... | [
-0.2390260547,
-0.3951448202,
-0.0858207047,
-0.1125378683,
-0.2033657581,
-0.1890951246,
-0.0322773196,
0.2989536226,
0.3635897338,
0.1370907128,
-0.6294348836,
-0.3008572459,
0.1010479778,
-0.0439846553,
-0.1321897656,
-0.1272775978,
0.1106223017,
-0.0324691646,
-0.0548350662,
... |
https://github.com/huggingface/datasets/issues/4029 | Add FAISS .range_search() method for retrieving all texts from dataset above similarity threshold | `Dataset.add_faiss_index` has a `string_factory` parameter, used to set the type of index (see the faiss documentation about [index factory](https://github.com/facebookresearch/faiss/wiki/The-index-factory)). Alternatively, you can pass an index you've defined yourself using faiss with the `custom_index` parameter of `... | **Is your feature request related to a problem? Please describe.**
I would like to retrieve all texts from a dataset, which are semantically similar to a specific input text (query), above a certain (cosine) similarity threshold. My dataset is very large (Wikipedia), so I need to use Datasets and FAISS for this. I wou... | 44 | Add FAISS .range_search() method for retrieving all texts from dataset above similarity threshold
**Is your feature request related to a problem? Please describe.**
I would like to retrieve all texts from a dataset, which are semantically similar to a specific input text (query), above a certain (cosine) similarity ... | [
-0.2393752635,
-0.429771781,
-0.055745814,
-0.1359849721,
-0.190990597,
-0.1674889177,
-0.0928243846,
0.3763890862,
0.2726443112,
0.1312080473,
-0.6072084308,
-0.281306386,
0.0541279502,
0.0495217778,
-0.0667902008,
-0.2007020563,
0.0892190859,
-0.0314913839,
0.0219883062,
0.07... |
https://github.com/huggingface/datasets/issues/4027 | ElasticSearch Indexing example: TypeError: __init__() missing 1 required positional argument: 'scheme' | Hi, @MoritzLaurer, thanks for reporting.
Normally this is due to a mismatch between the versions of your Elasticsearch client and server:
- your ES client is passing only keyword arguments to your ES server
- whereas your ES server expects a positional argument called 'scheme'
In order to fix this, you should a... | ## Describe the bug
I am following the example in the documentation for elastic search step by step (on google colab): https://huggingface.co/docs/datasets/faiss_es#elasticsearch
```
from datasets import load_dataset
squad = load_dataset('crime_and_punish', split='train[:1000]')
```
When I run the line:
`sq... | 93 | ElasticSearch Indexing example: TypeError: __init__() missing 1 required positional argument: 'scheme'
## Describe the bug
I am following the example in the documentation for elastic search step by step (on google colab): https://huggingface.co/docs/datasets/faiss_es#elasticsearch
```
from datasets import load_d... | [
-0.0140874581,
-0.2049568743,
-0.0280129109,
-0.215163976,
-0.0470537134,
0.2034882158,
0.3370672762,
0.0452060737,
-0.0271646194,
0.2955159247,
0.1846259385,
0.3122886717,
0.0821779594,
-0.294043541,
0.1780700982,
-0.1691921651,
0.0989522785,
0.0922932029,
0.3542940319,
-0.004... |
https://github.com/huggingface/datasets/issues/4027 | ElasticSearch Indexing example: TypeError: __init__() missing 1 required positional argument: 'scheme' | 1. Check elasticsearch version
```
import elasticsearch
print(elasticsearch.__version__)
```
Ex: 7.9.1
2. Uninstall current elasticsearch package
`pip uninstall elasticsearch`
3. Install elasticsearch 7.9.1 package
`pip install elasticsearch==7.9.1` | ## Describe the bug
I am following the example in the documentation for elastic search step by step (on google colab): https://huggingface.co/docs/datasets/faiss_es#elasticsearch
```
from datasets import load_dataset
squad = load_dataset('crime_and_punish', split='train[:1000]')
```
When I run the line:
`sq... | 27 | ElasticSearch Indexing example: TypeError: __init__() missing 1 required positional argument: 'scheme'
## Describe the bug
I am following the example in the documentation for elastic search step by step (on google colab): https://huggingface.co/docs/datasets/faiss_es#elasticsearch
```
from datasets import load_d... | [
-0.0109289438,
-0.2084149718,
-0.0303450655,
-0.2152194828,
-0.0493917428,
0.1944043636,
0.3211393356,
0.0536367297,
-0.03062631,
0.2950673997,
0.1684849858,
0.3123598099,
0.0790885463,
-0.2982064486,
0.1783048511,
-0.1695177555,
0.0953667462,
0.0893834084,
0.3597412705,
-0.000... |
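The mismatch behind this error can be caught up front by comparing the major versions of client and server before indexing — a hypothetical guard, assuming dotted version strings like `"7.9.1"`:

```python
def major_version(version_str):
    """Extract the major version from a dotted version string like '7.9.1'."""
    return int(version_str.split(".")[0])

def client_matches_server(client_version, server_version):
    """Elasticsearch clients are only guaranteed compatible with servers
    of the same major version."""
    return major_version(client_version) == major_version(server_version)
```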
https://github.com/huggingface/datasets/issues/4015 | Can not correctly parse the classes with imagefolder | I found that the problem arises because the image files in my folder are actually symbolic links (for my own reasons). After modifications, the classes can now be correctly parsed. Therefore, I close this issue. | ## Describe the bug
I try to load my own image dataset with imagefolder, but the parsing of classes is incorrect.
## Steps to reproduce the bug
I organized my dataset (ImageNet) in the following structure:
```
- imagenet/
- train/
- n01440764/
- ILSVRC2012_val_00000293.jpg
... | 35 | Can not correctly parse the classes with imagefolder
## Describe the bug
I try to load my own image dataset with imagefolder, but the parsing of classes is incorrect.
## Steps to reproduce the bug
I organized my dataset (ImageNet) in the following structure:
```
- imagenet/
- train/
- n0144... | [
-0.1322387606,
-0.1414094269,
0.0466167293,
0.8640832305,
0.3122998774,
-0.0405159071,
0.2753280699,
-0.0128311384,
0.1333018541,
0.0721253455,
-0.1745733321,
0.1838635951,
-0.3539470434,
0.004239677,
-0.140884757,
-0.3193480074,
-0.0576342233,
-0.0626386032,
-0.3752850294,
0.0... |
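If copying the real files is not an option, the symlinks can at least be resolved to their targets before handing paths to any loader — a small stdlib sketch:

```python
from pathlib import Path

def resolved_image_paths(root):
    """Collect every file under root with symlinks resolved to real paths,
    so downstream tooling sees the actual files."""
    return sorted(str(p.resolve()) for p in Path(root).rglob("*") if p.is_file())
```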
https://github.com/huggingface/datasets/issues/4013 | Cannot preview "hazal/Turkish-Biomedical-corpus-trM" | Hi @hazalturkmen, thanks for reporting.
Note that your dataset repository does not contain any loading script; it only contains a data file named `tr_article_2`.
When there is no loading script but only data files, the `datasets` library tries to infer how to load the data by looking at the data file extensions. ... | ## Dataset viewer issue for '*hazal/Turkish-Biomedical-corpus-trM'
**Link:** *https://huggingface.co/datasets/hazal/Turkish-Biomedical-corpus-trM*
*I cannot see the dataset preview.*
```
Server Error
Status code: 400
Exception: HTTPError
Message: 403 Client Error: Forbidden for url: https://h... | 151 | Cannot preview "hazal/Turkish-Biomedical-corpus-trM"
## Dataset viewer issue for '*hazal/Turkish-Biomedical-corpus-trM'
**Link:** *https://huggingface.co/datasets/hazal/Turkish-Biomedical-corpus-trM*
*I cannot see the dataset preview.*
```
Server Error
Status code: 400
Exception: HTTPError
Messag... | [
-0.1714381129,
-0.4351093769,
-0.0200963765,
0.1386701763,
-0.0835818425,
0.2694072127,
-0.1579447091,
0.5032237172,
-0.0876431242,
0.0863246843,
-0.3543643653,
0.0361363962,
0.1225201637,
-0.1530092061,
0.1078694612,
-0.1199042127,
0.0038383873,
0.0361451618,
-0.139961645,
0.0... |
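The extension-driven inference described in that reply can be sketched as a simple lookup — the mapping below is illustrative, not the library's actual table:

```python
EXTENSION_TO_BUILDER = {
    ".csv": "csv",
    ".tsv": "csv",
    ".json": "json",
    ".jsonl": "json",
    ".txt": "text",
    ".parquet": "parquet",
}

def infer_builder(filename):
    """Guess a loader from the file extension; None means there is no way
    to tell how to parse the file (as with an extensionless 'tr_article_2')."""
    for ext, builder in EXTENSION_TO_BUILDER.items():
        if filename.endswith(ext):
            return builder
    return None
```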
https://github.com/huggingface/datasets/issues/4007 | set_format does not work with multi dimension tensor | Hi! Use the `ArrayXD` feature type (where X is the number of dimensions) to get correctly formated tensors. So in your case, define the dataset as follows :
```python
ds = Dataset.from_dict({"A": [torch.rand((2, 2))]}, features=Features({"A": Array2D(shape=(2, 2), dtype="float32")}))
```
| ## Describe the bug
set_format only transforms the last dimension of a multi-dimension list to tensor
## Steps to reproduce the bug
```python
import torch
from datasets import Dataset
ds = Dataset.from_dict({"A": [torch.rand((2, 2))]})
# ds = Dataset.from_dict({"A": [np.random.rand(2, 2)]}) # => same result... | 39 | set_format does not work with multi dimension tensor
## Describe the bug
set_format only transforms the last dimension of a multi-dimension list to tensor
## Steps to reproduce the bug
```python
import torch
from datasets import Dataset
ds = Dataset.from_dict({"A": [torch.rand((2, 2))]})
# ds = Dataset.fro... | [
-0.3587706685,
-0.5875425935,
-0.1062819958,
0.1135559827,
0.3834376633,
0.0190233551,
0.7371442318,
0.4463867545,
-0.1085590795,
0.1774856895,
-0.0060109273,
0.3397403657,
-0.0223507583,
0.0364833735,
0.0363337211,
-0.410631597,
-0.010092766,
0.20786874,
0.034042947,
-0.098432... |
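`ArrayXD` features need a fixed shape up front. Before picking `Array2D(shape=...)`, the shape of the data can be inferred and validated with a small stdlib helper (hypothetical, not part of `datasets`):

```python
def infer_fixed_shape(nested):
    """Return the rectangular shape of a nested list, or raise ValueError
    if it is ragged (ArrayXD features require a fixed shape)."""
    if not isinstance(nested, list):
        return ()  # scalar leaf
    inner_shapes = {infer_fixed_shape(item) for item in nested}
    if len(inner_shapes) > 1:
        raise ValueError(f"ragged nesting: {sorted(inner_shapes)}")
    inner = inner_shapes.pop() if inner_shapes else ()
    return (len(nested),) + inner
```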
https://github.com/huggingface/datasets/issues/4007 | set_format does not work with multi dimension tensor | Hi @mariosasko I'm facing the same issue and the only work around I've found so far is to convert my `DatasetDict` to a dictionary and then create new objects with `Dataset.from_dict`.
```
dataset = load_dataset("my_dataset.py")
dataset = dataset.map(lambda example: blabla(example))
dict_dataset_test = dataset["tes... | ## Describe the bug
set_format only transforms the last dimension of a multi-dimension list to tensor
## Steps to reproduce the bug
```python
import torch
from datasets import Dataset
ds = Dataset.from_dict({"A": [torch.rand((2, 2))]})
# ds = Dataset.from_dict({"A": [np.random.rand(2, 2)]}) # => same result... | 84 | set_format does not work with multi dimension tensor
## Describe the bug
set_format only transforms the last dimension of a multi-dimension list to tensor
## Steps to reproduce the bug
```python
import torch
from datasets import Dataset
ds = Dataset.from_dict({"A": [torch.rand((2, 2))]})
# ds = Dataset.fro... | [
-0.2998945117,
-0.3101813197,
-0.004203714,
0.1721643806,
0.3681077659,
0.1039255038,
0.7871420383,
0.3923230469,
0.0725022554,
-0.0425644591,
-0.0091565456,
0.4055368006,
-0.308324635,
0.2198695391,
0.1341205984,
-0.1996601969,
0.245597586,
0.1359011084,
-0.0783709288,
0.08726... |
https://github.com/huggingface/datasets/issues/4007 | set_format does not work with multi dimension tensor | Hi! Yes, you can directly pass the `Features` dictionary as `features` in `map` to cast the column to `Array2D`:
```python
dataset = dataset.map(lambda example: blabla(example), features=Features(features))
```
Or you can use `cast` after `map` to do that:
```python
dataset = dataset.map(lambda example: blabla(ex... | ## Describe the bug
set_format only transforms the last dimension of a multi-dimension list to tensor
## Steps to reproduce the bug
```python
import torch
from datasets import Dataset
ds = Dataset.from_dict({"A": [torch.rand((2, 2))]})
# ds = Dataset.from_dict({"A": [np.random.rand(2, 2)]}) # => same result... | 47 | set_format does not work with multi dimension tensor
## Describe the bug
set_format only transforms the last dimension of a multi-dimension list to tensor
## Steps to reproduce the bug
```python
import torch
from datasets import Dataset
ds = Dataset.from_dict({"A": [torch.rand((2, 2))]})
# ds = Dataset.fro... | [
-0.2981496155,
-0.5522936583,
-0.0851085931,
0.0653299466,
0.4656410217,
0.0859981328,
0.7949089408,
0.4860951304,
0.0225481708,
-0.0127239274,
-0.0288295541,
0.4424655735,
-0.1430835873,
0.211206153,
0.0279673189,
-0.3929628432,
0.1122941449,
0.1394115239,
-0.0601131022,
0.027... |
https://github.com/huggingface/datasets/issues/4005 | Yelp not working | I don't think it's an issue with the dataset-viewer. Maybe @lhoestq or @albertvillanova could confirm.
```python
>>> from datasets import load_dataset, DownloadMode
>>> import itertools
>>> # without streaming
>>> dataset = load_dataset("yelp_review_full", name="yelp_review_full", split="train", download_mode=Do... | ## Dataset viewer issue for '*name of the dataset*'
**Link:** https://huggingface.co/datasets/yelp_review_full/viewer/yelp_review_full/train
Doesn't work:
```
Server error
Status code: 400
Exception: Error
Message: line contains NULL
```
Am I the one who added this dataset ? No
A seemingly...
## Dataset viewer issue for '*name of the dataset*'
**Link:** https://huggingface.co/datasets/yelp_review_full/viewer/yelp_review_full/train
Doesn't work:
```
Server error
Status code: 400
Exception: Error
Message: line contains NULL
```
Am I the one who added this dataset ... | [
-0.2684922218,
-0.1000452191,
0.0092700925,
0.2753206789,
0.225215897,
0.2547465265,
0.2014655918,
0.1440787166,
0.0725047812,
-0.1681313366,
-0.0814419314,
0.0556117781,
-0.2537714541,
0.1602354795,
0.4227266312,
0.2531446517,
0.0108855283,
0.0468127578,
-0.1558508426,
-0.0012... |
https://github.com/huggingface/datasets/issues/4005 | Yelp not working | Yet another issue related to google drive not being nice. Most likely your IP has been banned from using their API programmatically. Do you know if we are allowed to host and redistribute the data ourselves ? | ## Dataset viewer issue for '*name of the dataset*'
**Link:** https://huggingface.co/datasets/yelp_review_full/viewer/yelp_review_full/train
Doesn't work:
```
Server error
Status code: 400
Exception: Error
Message: line contains NULL
```
Am I the one who added this dataset ? No
A seemingly...
## Dataset viewer issue for '*name of the dataset*'
**Link:** https://huggingface.co/datasets/yelp_review_full/viewer/yelp_review_full/train
Doesn't work:
```
Server error
Status code: 400
Exception: Error
Message: line contains NULL
```
Am I the one who added this dataset ... | [
-0.1030829251,
0.0789569914,
-0.0290377196,
0.2069857717,
0.2367415577,
0.256686002,
0.1672261208,
0.090243727,
0.2104639411,
-0.1844935864,
-0.1545828283,
-0.0403038822,
-0.141902402,
0.4606811106,
0.2723676562,
0.214605391,
0.0647458956,
-0.1103528962,
-0.0088406717,
-0.05134... |
https://github.com/huggingface/datasets/issues/4005 | Yelp not working | Hi,
Facing the same issue while loading the dataset:
`Error: {NonMatchingChecksumError}Checksums didn't match for dataset source files`
Thanks | ## Dataset viewer issue for '*name of the dataset*'
**Link:** https://huggingface.co/datasets/yelp_review_full/viewer/yelp_review_full/train
Doesn't work:
```
Server error
Status code: 400
Exception: Error
Message: line contains NULL
```
Am I the one who added this dataset ? No
A seemingly...
## Dataset viewer issue for '*name of the dataset*'
**Link:** https://huggingface.co/datasets/yelp_review_full/viewer/yelp_review_full/train
Doesn't work:
```
Server error
Status code: 400
Exception: Error
Message: line contains NULL
```
Am I the one who added this dataset ... | [
-0.2433613688,
-0.0627518445,
-0.0243318845,
0.3065802753,
0.1636055112,
0.2314516455,
0.0817174464,
0.1403423846,
0.2367408723,
-0.1264958382,
-0.1559065133,
0.0483694412,
-0.1122050509,
0.3317652345,
0.322280705,
0.3135497272,
0.1267696321,
0.1067256033,
-0.2009281069,
-0.087... |
https://github.com/huggingface/datasets/issues/4005 | Yelp not working | > Facing the same issue while loading the dataset:
>
> Error: {NonMatchingChecksumError}Checksums didn't match for dataset source files
Thanks for reporting. I think this is the same issue. Feel free to try again later, once Google Drive stopped blocking you. You can retry by passing `download_mode="force_redownl... | ## Dataset viewer issue for '*name of the dataset*'
**Link:** https://huggingface.co/datasets/yelp_review_full/viewer/yelp_review_full/train
Doesn't work:
```
Server error
Status code: 400
Exception: Error
Message: line contains NULL
```
Am I the one who added this dataset ? No
A seemingly...
## Dataset viewer issue for '*name of the dataset*'
**Link:** https://huggingface.co/datasets/yelp_review_full/viewer/yelp_review_full/train
Doesn't work:
```
Server error
Status code: 400
Exception: Error
Message: line contains NULL
```
Am I the one who added this dataset ... | [
-0.2601876557,
0.0929835513,
-0.0132896332,
0.223574087,
0.1756577641,
0.3243186474,
0.2636046112,
0.0949091241,
0.2453200966,
-0.1416160613,
-0.0435501635,
-0.0566096306,
-0.1322851032,
0.4521706998,
0.2629017234,
0.2652128339,
0.0626221374,
0.0157334041,
0.0147341508,
-0.0080... |
https://github.com/huggingface/datasets/issues/4005 | Yelp not working | I noticed that FastAI hosts the Yelp dataset at https://s3.amazonaws.com/fast-ai-nlp/yelp_review_full_csv.tgz (from their catalog [here](https://course.fast.ai/datasets))
Let's update the yelp dataset script to download from there instead of Google Drive | ## Dataset viewer issue for '*name of the dataset*'
**Link:** https://huggingface.co/datasets/yelp_review_full/viewer/yelp_review_full/train
Doesn't work:
```
Server error
Status code: 400
Exception: Error
Message: line contains NULL
```
Am I the one who added this dataset ? No
A seemingly...
## Dataset viewer issue for '*name of the dataset*'
**Link:** https://huggingface.co/datasets/yelp_review_full/viewer/yelp_review_full/train
Doesn't work:
```
Server error
Status code: 400
Exception: Error
Message: line contains NULL
```
Am I the one who added this dataset ... | [
-0.160453096,
0.0351268873,
-0.0071308794,
0.2098716348,
0.116348125,
0.2942898571,
0.3275651932,
0.1411800832,
0.1628912389,
-0.1722296476,
-0.1306029707,
0.0065327794,
-0.0765037984,
0.2644135952,
0.4251793921,
0.3969155848,
0.1644589901,
0.0061269891,
0.0145501494,
-0.038482... |
https://github.com/huggingface/datasets/issues/4005 | Yelp not working | I updated the link to not use Google Drive anymore, we will do a release early next week with the updated download url of the dataset :) | ## Dataset viewer issue for '*name of the dataset*'
**Link:** https://huggingface.co/datasets/yelp_review_full/viewer/yelp_review_full/train
Doesn't work:
```
Server error
Status code: 400
Exception: Error
Message: line contains NULL
```
Am I the one who added this dataset ? No
A seemingly...
## Dataset viewer issue for '*name of the dataset*'
**Link:** https://huggingface.co/datasets/yelp_review_full/viewer/yelp_review_full/train
Doesn't work:
```
Server error
Status code: 400
Exception: Error
Message: line contains NULL
```
Am I the one who added this dataset ... | [
-0.157910496,
-0.0615004785,
-0.0020499565,
0.2565931678,
0.158556357,
0.2892256081,
0.248630926,
0.0601966418,
0.1381704211,
-0.1495863348,
-0.071196951,
0.0036801726,
-0.2479333878,
0.3354057968,
0.4319044352,
0.2149279118,
0.1020882651,
-0.0000929914,
-0.0839474052,
-0.03827... |
https://github.com/huggingface/datasets/issues/4003 | ASSIN2 dataset checksum bug | Using latest code, I am still facing the issue.
```python
(base) vimos@vimosmu ➜ ~ ipython
Python 3.6.7 | packaged by conda-forge | (default, Nov 6 2019, 16:19:42)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.11.1 -- An enhanced Interactive Python. Type '?' for help.
In [1]: from... | ## Describe the bug
Checksum error after trying to load the [ASSIN 2 dataset](https://huggingface.co/datasets/assin2).
`NonMatchingChecksumError` triggered by calling `load_dataset("assin2")`.
Similar to #3952 , #3942 , #3941 , etc.
```
----------------------------------------------------------------------... | 264 | ASSIN2 dataset checksum bug
## Describe the bug
Checksum error after trying to load the [ASSIN 2 dataset](https://huggingface.co/datasets/assin2).
`NonMatchingChecksumError` triggered by calling `load_dataset("assin2")`.
Similar to #3952 , #3942 , #3941 , etc.
```
----------------------------------------... | [
-0.1604346335,
0.1361487955,
-0.044581797,
0.3485568464,
0.1233015954,
0.1475777626,
0.2357688695,
0.3987540305,
0.2274781168,
0.1705752313,
-0.0864149258,
-0.0482718199,
0.1566106677,
-0.0062123756,
-0.0911366418,
0.2126840055,
-0.0068694004,
-0.0687911958,
-0.3707773387,
0.14... |
https://github.com/huggingface/datasets/issues/4003 | ASSIN2 dataset checksum bug | That's true. Steps to reproduce the bug on Google Colab:
```
git clone https://github.com/huggingface/datasets.git
cd datasets
pip install -e .
python -c "from datasets import load_dataset; print(load_dataset('assin2')['train'][0])"
```
However the dataset will load without any problems if you just install v... | ## Describe the bug
Checksum error after trying to load the [ASSIN 2 dataset](https://huggingface.co/datasets/assin2).
`NonMatchingChecksumError` triggered by calling `load_dataset("assin2")`.
Similar to #3952 , #3942 , #3941 , etc.
```
----------------------------------------------------------------------... | 58 | ASSIN2 dataset checksum bug
## Describe the bug
Checksum error after trying to load the [ASSIN 2 dataset](https://huggingface.co/datasets/assin2).
`NonMatchingChecksumError` triggered by calling `load_dataset("assin2")`.
Similar to #3952 , #3942 , #3941 , etc.
```
----------------------------------------... | [
-0.1604346335,
0.1361487955,
-0.044581797,
0.3485568464,
0.1233015954,
0.1475777626,
0.2357688695,
0.3987540305,
0.2274781168,
0.1705752313,
-0.0864149258,
-0.0482718199,
0.1566106677,
-0.0062123756,
-0.0911366418,
0.2126840055,
-0.0068694004,
-0.0687911958,
-0.3707773387,
0.14... |
https://github.com/huggingface/datasets/issues/4003 | ASSIN2 dataset checksum bug | Right indeed ! Let me open a PR to fix this.
The dataset_infos.json file that stores some metadata about the dataset to download (and is used to verify it was correctly downloaded) hasn't been updated correctly | ## Describe the bug
Checksum error after trying to load the [ASSIN 2 dataset](https://huggingface.co/datasets/assin2).
`NonMatchingChecksumError` triggered by calling `load_dataset("assin2")`.
Similar to #3952 , #3942 , #3941 , etc.
```
----------------------------------------------------------------------... | 36 | ASSIN2 dataset checksum bug
## Describe the bug
Checksum error after trying to load the [ASSIN 2 dataset](https://huggingface.co/datasets/assin2).
`NonMatchingChecksumError` triggered by calling `load_dataset("assin2")`.
Similar to #3952 , #3942 , #3941 , etc.
```
----------------------------------------... | [
-0.1604346335,
0.1361487955,
-0.044581797,
0.3485568464,
0.1233015954,
0.1475777626,
0.2357688695,
0.3987540305,
0.2274781168,
0.1705752313,
-0.0864149258,
-0.0482718199,
0.1566106677,
-0.0062123756,
-0.0911366418,
0.2126840055,
-0.0068694004,
-0.0687911958,
-0.3707773387,
0.14... |
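The verification that fails with `NonMatchingChecksumError` is, at bottom, a recorded-vs-computed checksum comparison. A stdlib sketch of the check (sha256 used for illustration):

```python
import hashlib

def file_sha256(path):
    """Stream a file through sha256 in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def checksums_match(path, recorded_hexdigest):
    """A checksum error corresponds to this returning False: the
    downloaded bytes no longer hash to the recorded value, e.g. after
    the host starts serving different content."""
    return file_sha256(path) == recorded_hexdigest
```

This is why updating a dataset's download URL without regenerating the recorded metadata keeps the error alive.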
https://github.com/huggingface/datasets/issues/4003 | ASSIN2 dataset checksum bug | Not sure what the status of this is, but personally I am still getting this error, with glue. | ## Describe the bug
Checksum error after trying to load the [ASSIN 2 dataset](https://huggingface.co/datasets/assin2).
`NonMatchingChecksumError` triggered by calling `load_dataset("assin2")`.
Similar to #3952 , #3942 , #3941 , etc.
```
----------------------------------------------------------------------... | 18 | ASSIN2 dataset checksum bug
## Describe the bug
Checksum error after trying to load the [ASSIN 2 dataset](https://huggingface.co/datasets/assin2).
`NonMatchingChecksumError` triggered by calling `load_dataset("assin2")`.
Similar to #3952 , #3942 , #3941 , etc.
```
----------------------------------------... | [
-0.1604346335,
0.1361487955,
-0.044581797,
0.3485568464,
0.1233015954,
0.1475777626,
0.2357688695,
0.3987540305,
0.2274781168,
0.1705752313,
-0.0864149258,
-0.0482718199,
0.1566106677,
-0.0062123756,
-0.0911366418,
0.2126840055,
-0.0068694004,
-0.0687911958,
-0.3707773387,
0.14... |
https://github.com/huggingface/datasets/issues/4001 | How to use generate this multitask dataset for SQUAD? I am getting a value error. | Hi! Replacing `nlp.<obj>` with `datasets.<obj>` in the script should fix the problem. `nlp` has been renamed to `datasets` more than a year ago, so please use `datasets` instead to avoid weird issues. | ## Dataset viewer issue for 'squad_multitask*'
**Link:** https://huggingface.co/datasets/vershasaxena91/squad_multitask
*short description of the issue*
I am trying to generate the multitask dataset for squad dataset. However, gives the error in dataset explorer as well as my local machine.
I tried the comma... | 32 | How to use generate this multitask dataset for SQUAD? I am getting a value error.
## Dataset viewer issue for 'squad_multitask*'
**Link:** https://huggingface.co/datasets/vershasaxena91/squad_multitask
*short description of the issue*
I am trying to generate the multitask dataset for squad dataset. However,... | [
-0.2936679125,
-0.2275516689,
0.0001496938,
0.3350782692,
0.1465720385,
0.0890369639,
0.4746434093,
0.0026763659,
0.0098888371,
0.3609529436,
0.0395857804,
0.7048646212,
-0.127891928,
0.1743522584,
-0.1156685278,
-0.0938021094,
-0.0134771653,
-0.0328665636,
0.0233489629,
0.0834... |
https://github.com/huggingface/datasets/issues/4001 | How to use generate this multitask dataset for SQUAD? I am getting a value error. | But I request you to please fix the same in the dataset hub explorer as well... | ## Dataset viewer issue for 'squad_multitask*'
**Link:** https://huggingface.co/datasets/vershasaxena91/squad_multitask
*short description of the issue*
I am trying to generate the multitask dataset for squad dataset. However, gives the error in dataset explorer as well as my local machine.
I tried the comma... | 16 | How to use generate this multitask dataset for SQUAD? I am getting a value error.
## Dataset viewer issue for 'squad_multitask*'
**Link:** https://huggingface.co/datasets/vershasaxena91/squad_multitask
*short description of the issue*
I am trying to generate the multitask dataset for squad dataset. However,... | [
-0.329690069,
-0.3230187893,
-0.0332358442,
0.4044612944,
0.1531959772,
0.1048138514,
0.4619649351,
-0.0753525868,
0.0021969171,
0.3927633166,
-0.0693045408,
0.6919897199,
-0.0519510061,
0.2268344313,
-0.2223974615,
-0.1197615638,
-0.0032792459,
-0.0445733778,
0.0295023844,
0.0... |