html_url stringlengths 48 51 | title stringlengths 5 268 | comments stringlengths 63 51.8k | body stringlengths 0 36.2k β | comment_length int64 16 1.52k | text stringlengths 164 54.1k | embeddings list |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/4291 | Dataset Viewer issue for strombergnlp/ipm_nel : preview is empty, no error message | Hi @leondz, thanks for reporting.
Indeed, the dataset viewer relies on the dataset being streamable (passing `streaming=True` to `load_dataset`). While most of the datasets are streamable out of the box (thanks to our implementation of streaming), there are still some exceptions.
In particular, in your case, th... | ### Link
https://huggingface.co/datasets/strombergnlp/ipm_nel/viewer/ipm_nel/train
### Description
The viewer is blank. I tried my best to emulate a dataset with a working viewer, but this one just doesn't seem to want to come up. What did I miss?
### Owner
Yes | 103 | Dataset Viewer issue for strombergnlp/ipm_nel : preview is empty, no error message
### Link
https://huggingface.co/datasets/strombergnlp/ipm_nel/viewer/ipm_nel/train
### Description
The viewer is blank. I tried my best to emulate a dataset with a working viewer, but this one just doesn't seem to want to come up. W... | [
-0.5941502452,
0.0154772494,
0.1116633341,
0.2606789768,
-0.152930513,
-0.1338221431,
0.2904717922,
0.2681366503,
0.0844315961,
0.2371055633,
0.0127624674,
0.1334278733,
-0.3677476943,
0.0229738876,
0.0893161371,
-0.2069136202,
-0.0901445225,
0.4387164414,
-0.1709370613,
-0.035... |
https://github.com/huggingface/datasets/issues/4291 | Dataset Viewer issue for strombergnlp/ipm_nel : preview is empty, no error message | Ah, right! The preview is working now, but this explanation is good to know, thank you. I'll prefer formats with random file access supported in datasets.utils.extract in future, and try out this fix for the tarfiles :) | ### Link
https://huggingface.co/datasets/strombergnlp/ipm_nel/viewer/ipm_nel/train
### Description
The viewer is blank. I tried my best to emulate a dataset with a working viewer, but this one just doesn't seem to want to come up. What did I miss?
### Owner
Yes | 37 | Dataset Viewer issue for strombergnlp/ipm_nel : preview is empty, no error message
### Link
https://huggingface.co/datasets/strombergnlp/ipm_nel/viewer/ipm_nel/train
### Description
The viewer is blank. I tried my best to emulate a dataset with a working viewer, but this one just doesn't seem to want to come up. W... | [
-0.6053087711,
-0.0000201275,
0.0799933597,
0.2253192961,
-0.1501957178,
-0.1735897362,
0.2938115001,
0.2592690885,
0.0822659805,
0.301726222,
0.0461583324,
0.022501966,
-0.3796801269,
0.0506925099,
0.03596,
-0.1937340349,
-0.0915672556,
0.3536639512,
-0.1370633394,
-0.04723620... |
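The resolution above mentions preferring "formats with random file access" over tarfiles for the viewer. A minimal stdlib sketch of the underlying difference (using toy in-memory archives, not the actual dataset files): a zip's central directory lets a reader jump straight to a member, whereas a tar must be scanned sequentially to find one.

```python
import io
import tarfile
import zipfile

# Build a tiny tar and zip archive in memory, each holding the same member.
payload = b"token\tlabel\n"

tar_buf = io.BytesIO()
with tarfile.open(fileobj=tar_buf, mode="w") as tar:
    info = tarfile.TarInfo(name="train.tsv")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))
tar_buf.seek(0)

zip_buf = io.BytesIO()
with zipfile.ZipFile(zip_buf, "w") as zf:
    zf.writestr("train.tsv", payload)
zip_buf.seek(0)

# A zip's central directory supports random access to any member...
with zipfile.ZipFile(zip_buf) as zf:
    zip_member = zf.read("train.tsv")

# ...whereas a tar is read sequentially until the member is located.
with tarfile.open(fileobj=tar_buf) as tar:
    tar_member = tar.extractfile("train.tsv").read()

assert zip_member == tar_member == payload
```

This is only an illustration of the access-pattern difference; how the viewer's streaming backend actually handles each format is internal to `datasets`.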
https://github.com/huggingface/datasets/issues/4287 | "NameError: name 'faiss' is not defined" on `.add_faiss_index` when `device` is not None | So I managed to solve this by adding a missing `import faiss` in the `@staticmethod` defined in https://github.com/huggingface/datasets/blob/f51b6994db27ea69261ef919fb7775928f9ec10b/src/datasets/search.py#L305, triggered from https://github.com/huggingface/datasets/blob/f51b6994db27ea69261ef919fb7775928f9ec10b/src/data... | ## Describe the bug
When using `datasets` to calculate the FAISS indices of a dataset, the exception `NameError: name 'faiss' is not defined` is triggered when trying to calculate those on a device (GPU), so `.add_faiss_index(..., device=0)` fails with that exception.
All that assuming that `datasets` is properly... | 102 | "NameError: name 'faiss' is not defined" on `.add_faiss_index` when `device` is not None
## Describe the bug
When using `datasets` to calculate the FAISS indices of a dataset, the exception `NameError: name 'faiss' is not defined` is triggered when trying to calculate those on a device (GPU), so `.add_faiss_index(... | [
-0.2677792013,
-0.1930617243,
-0.0503546819,
0.2116217315,
0.2999023795,
0.0705516487,
0.6779098511,
0.3354715109,
0.1389086396,
0.4872550368,
0.1502405405,
0.3312829435,
0.1982175112,
-0.3838970661,
-0.1113043576,
0.026361268,
0.2100861073,
0.2211518586,
0.0979964212,
0.011295... |
https://github.com/huggingface/datasets/issues/4287 | "NameError: name 'faiss' is not defined" on `.add_faiss_index` when `device` is not None | Adding here the complete error traceback!
```
Traceback (most recent call last):
File "/home/alvarobartt/lol.py", line 12, in <module>
ds_with_embeddings.add_faiss_index(column='embeddings', device=0) # default `device=None`
File "/home/alvarobartt/.local/lib/python3.9/site-packages/datasets/arrow_datase... | ## Describe the bug
When using `datasets` to calculate the FAISS indices of a dataset, the exception `NameError: name 'faiss' is not defined` is triggered when trying to calculate those on a device (GPU), so `.add_faiss_index(..., device=0)` fails with that exception.
All that assuming that `datasets` is properly... | 66 | "NameError: name 'faiss' is not defined" on `.add_faiss_index` when `device` is not None
## Describe the bug
When using `datasets` to calculate the FAISS indices of a dataset, the exception `NameError: name 'faiss' is not defined` is triggered when trying to calculate those on a device (GPU), so `.add_faiss_index(... | [
-0.2677792013,
-0.1930617243,
-0.0503546819,
0.2116217315,
0.2999023795,
0.0705516487,
0.6779098511,
0.3354715109,
0.1389086396,
0.4872550368,
0.1502405405,
0.3312829435,
0.1982175112,
-0.3838970661,
-0.1113043576,
0.026361268,
0.2100861073,
0.2211518586,
0.0979964212,
0.011295... |
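The fix described above — adding a missing `import faiss` inside the `@staticmethod` — is an instance of the deferred-import pattern used for optional dependencies. A hypothetical sketch of that pattern, with the stdlib `json` module standing in for `faiss` so it runs without the library installed:

```python
class Index:
    @staticmethod
    def _encode(payload):
        # Deferred import: the dependency is only loaded when this code
        # path actually executes. This mirrors the missing `import faiss`
        # fix; `json` is a stand-in here purely for illustration.
        import json
        return json.dumps(payload)

print(Index._encode({"device": 0}))  # → {"device": 0}
```

The bug arose because the module-level guard imported `faiss` in one code path but the GPU path ran inside a `@staticmethod` where the name was not bound; importing inside the method makes the name available wherever the method runs.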
https://github.com/huggingface/datasets/issues/4276 | OpenBookQA has missing and inconsistent field names | Thanks for reporting, @vblagoje.
Indeed, I noticed some of these issues while reviewing this PR:
- #4259
This is in my TODO list. | ## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'fact1': row['fact1'],
- 'humanScore': row['humanSc... | 23 | OpenBookQA has missing and inconsistent field names
## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'f... | [
-0.1009263843,
0.2100123912,
-0.0518254675,
0.2251940221,
-0.1097796932,
-0.0576233305,
0.2680732906,
0.3371189237,
-0.0792645589,
0.2063141167,
0.1244209558,
0.5698289275,
0.4415481985,
0.3652537167,
-0.0546431914,
-0.1742747575,
0.3567141891,
0.0840137601,
0.1305603534,
-0.09... |
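Step 1 above (unflattening `question_stem` back into the original nested `question.stem` field) can be expressed as a small mapping function. A hedged sketch on a toy row — the field names come from the issue, but the exact loader schema is an assumption:

```python
def unflatten(row):
    # Rebuild the original nested structure: move the flattened
    # `question_stem` value back under `question["stem"]`.
    row = dict(row)
    row["question"] = {"stem": row.pop("question_stem")}
    return row

flat = {"id": "7-980", "question_stem": "The sun is responsible for", "answerKey": "D"}
nested = unflatten(flat)
assert nested["question"]["stem"] == "The sun is responsible for"
assert "question_stem" not in nested
```

With the `datasets` library, such a function would typically be applied via `dataset.map(unflatten)`, though the actual fix belongs in the dataset script itself.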
https://github.com/huggingface/datasets/issues/4276 | OpenBookQA has missing and inconsistent field names | On the other hand, I am not sure if we should always preserve the original nested structure. I think we should also consider other factors such as convenience or consistency.
For example, other datasets also flatten "question.stem" into "question":
- ai2_arc:
```python
question = data["question"]["stem"]
choice... | ## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'fact1': row['fact1'],
- 'humanScore': row['humanSc... | 132 | OpenBookQA has missing and inconsistent field names
## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'f... | [
-0.1097743586,
0.1326161772,
-0.0441396423,
0.251501888,
-0.2379210591,
-0.0238394011,
0.2634883523,
0.3610416353,
0.006400961,
0.1764746308,
0.096433647,
0.4849393666,
0.4549894035,
0.4911120832,
-0.148104012,
-0.160447374,
0.3868280947,
0.0276735201,
0.2539488673,
-0.03565153... |
https://github.com/huggingface/datasets/issues/4276 | OpenBookQA has missing and inconsistent field names | @albertvillanova I agree that we should be consistent. In the last month, I have come across tons of code that deals with OpenBookQA and CommonSenseQA and all of that code relies on the original data format structure. We can't expect users to adopt HF Datasets if we arbitrarily change the structure of the format just b... | ## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'fact1': row['fact1'],
- 'humanScore': row['humanSc... | 107 | OpenBookQA has missing and inconsistent field names
## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'f... | [
-0.0696031824,
0.2296478003,
-0.0100900801,
0.0402288064,
-0.1834874302,
-0.1042137891,
0.319383949,
0.3728416562,
-0.0084602879,
0.1532329619,
0.0812204257,
0.3126558065,
0.5082780123,
0.2665573657,
-0.1158585623,
-0.245972991,
0.3876905143,
0.1590896994,
0.0666596964,
-0.1644... |
https://github.com/huggingface/datasets/issues/4276 | OpenBookQA has missing and inconsistent field names | I'm opening a PR that adds the missing fields.
Let's agree on the feature structure: @lhoestq @mariosasko @polinaeterna | ## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'fact1': row['fact1'],
- 'humanScore': row['humanSc... | 18 | OpenBookQA has missing and inconsistent field names
## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'f... | [
-0.1227659434,
0.1294834018,
-0.0531327948,
0.2436132431,
-0.1115282178,
-0.0573373921,
0.1977688372,
0.2891437709,
-0.115660876,
0.2128254175,
0.1821909398,
0.4630895257,
0.37812379,
0.3454271257,
-0.0517650545,
-0.3028847873,
0.3833729029,
0.1199520379,
0.2903484404,
-0.00973... |
https://github.com/huggingface/datasets/issues/4276 | OpenBookQA has missing and inconsistent field names | IMO we should always try to preserve the original structure unless there is a good reason not to (and I don't see one in this case). | ## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'fact1': row['fact1'],
- 'humanScore': row['humanSc... | 26 | OpenBookQA has missing and inconsistent field names
## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'f... | [
-0.1161505356,
0.1762668341,
-0.0998141393,
0.2082637399,
-0.117515862,
-0.067944631,
0.2634468973,
0.3842664659,
-0.0682339072,
0.2268134803,
0.0829303637,
0.5433103442,
0.3696924746,
0.374489814,
-0.0723104253,
-0.1654941887,
0.2737470865,
0.0790645778,
0.0742814839,
-0.07375... |
https://github.com/huggingface/datasets/issues/4276 | OpenBookQA has missing and inconsistent field names | I agree with @mariosasko . The transition to the original format could be done in one PR for the next minor release, clearly documenting all dataset changes just as @albertvillanova outlined them above and perhaps even providing a per dataset util method to convert the new valid format to the old for backward compatibi... | ## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'fact1': row['fact1'],
- 'humanScore': row['humanSc... | 81 | OpenBookQA has missing and inconsistent field names
## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'f... | [
-0.1998205185,
0.2370988578,
-0.0784825236,
0.0383832976,
-0.0463750511,
-0.1121220514,
0.261579901,
0.4792057276,
-0.0928937942,
0.1985219419,
0.0985915437,
0.5298961401,
0.2929408848,
0.4118284881,
-0.169930324,
-0.23902376,
0.3220477998,
0.0963060707,
0.0373929888,
-0.029769... |
https://github.com/huggingface/datasets/issues/4276 | OpenBookQA has missing and inconsistent field names | I don't have a strong opinion on this, besides the fact that whatever decision we agree on, should be applied to all datasets.
There is always the tension between:
- preserving each dataset original structure (which has the advantage of not forcing users to learn other structure for the same dataset),
- and on th... | ## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'fact1': row['fact1'],
- 'humanScore': row['humanSc... | 161 | OpenBookQA has missing and inconsistent field names
## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'f... | [
-0.244081974,
0.2624767125,
-0.0215143748,
0.2153030038,
-0.2133324444,
-0.063490659,
0.3279465735,
0.3787684143,
0.0790012851,
0.2448695004,
0.0920588821,
0.5058540702,
0.2577792108,
0.5256162882,
-0.0657963902,
-0.0864537507,
0.3222070932,
-0.0016847457,
0.1259201318,
-0.0332... |
https://github.com/huggingface/datasets/issues/4276 | OpenBookQA has missing and inconsistent field names | @albertvillanova, thanks for the detailed answer and the new perspectives. I understand the friction for the best design approach much better now. Ultimately, it is essential to include all the missing fields and the correct data first. Whatever approach is determined to be optimal is important but not as crucial once... | ## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'fact1': row['fact1'],
- 'humanScore': row['humanSc... | 69 | OpenBookQA has missing and inconsistent field names
## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'f... | [
-0.0945750922,
0.164955914,
-0.1062555909,
0.1807774007,
-0.1605589986,
-0.1824282259,
0.2929793894,
0.3721559942,
-0.0281362962,
0.237318486,
0.0815383717,
0.4941643178,
0.3443228602,
0.3875853419,
-0.1219676882,
-0.1653728038,
0.1791916639,
0.1058904976,
0.0738654286,
-0.0472... |
https://github.com/huggingface/datasets/issues/4271 | A typo in docs of datasets.disable_progress_bar | Hi! Thanks for catching and reporting the typo, a PR has been opened to fix it :) | ## Describe the bug
In the docs of V2.1.0 datasets.disable_progress_bar, we should replace "enable" with "disable". | 17 | A typo in docs of datasets.disable_progress_bar
## Describe the bug
In the docs of V2.1.0 datasets.disable_progress_bar, we should replace "enable" with "disable".
Hi! Thanks for catching and reporting the typo, a PR has been opened to fix it :) | [
-0.2229197919,
0.1155907437,
-0.1957840174,
-0.2332064658,
0.1984051019,
-0.018780956,
0.2855718732,
0.2007148415,
-0.1886951476,
0.3785544336,
0.2363237292,
0.3476401567,
0.1750877202,
0.3231857717,
-0.1807082295,
0.05160008,
0.0662440285,
0.2353480756,
-0.1192478538,
0.097619... |
https://github.com/huggingface/datasets/issues/4268 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered | It would help a lot to be able to preview the dataset - I'd like to see if the pronunciations are in the dataset, e.g. for ["word"](https://en.wiktionary.org/wiki/word),
Pronunciation
([Received Pronunciation](https://en.wikipedia.org/wiki/Received_Pronunciation)) [IPA](https://en.wiktionary.org/wiki/Wiktionary:Inte... | ## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
## Expected results
A clear and concise description of the expected results... | 38 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered
## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
#... | [
-0.2923785746,
-0.0414035209,
-0.1572302878,
0.2814292014,
0.0499909259,
-0.0285517592,
0.1625797749,
0.5453563929,
0.2865177393,
0.0777813271,
-0.168789044,
0.2441957742,
-0.1849417686,
-0.0490182601,
-0.0428375825,
0.0211599749,
-0.0528010242,
-0.1682505608,
-0.2153728455,
-0... |
https://github.com/huggingface/datasets/issues/4268 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered | Hi @i-am-neo, thanks for reporting.
Normally this dataset should be private and not accessible for public use. @cakiki, @lvwerra, any reason why it is public? I see many other Wikimedia datasets are also public.
Also note that last commit "Add metadata" (https://huggingface.co/datasets/bigscience-catalogue-lm-dat... | ## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
## Expected results
A clear and concise description of the expected results... | 100 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered
## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
#... | [
-0.2923785746,
-0.0414035209,
-0.1572302878,
0.2814292014,
0.0499909259,
-0.0285517592,
0.1625797749,
0.5453563929,
0.2865177393,
0.0777813271,
-0.168789044,
0.2441957742,
-0.1849417686,
-0.0490182601,
-0.0428375825,
0.0211599749,
-0.0528010242,
-0.1682505608,
-0.2153728455,
-0... |
https://github.com/huggingface/datasets/issues/4268 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered | Ah this must be a bug introduced at creation time since the repos were created programmatically; I'll go ahead and make them private; sorry about that! | ## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
## Expected results
A clear and concise description of the expected results... | 26 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered
## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
#... | [
-0.2923785746,
-0.0414035209,
-0.1572302878,
0.2814292014,
0.0499909259,
-0.0285517592,
0.1625797749,
0.5453563929,
0.2865177393,
0.0777813271,
-0.168789044,
0.2441957742,
-0.1849417686,
-0.0490182601,
-0.0428375825,
0.0211599749,
-0.0528010242,
-0.1682505608,
-0.2153728455,
-0... |
https://github.com/huggingface/datasets/issues/4268 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered | All datasets are private now.
Re: that bug, I think we're currently avoiding it by skipping verifications (i.e. `ignore_verifications=True`). | ## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
## Expected results
A clear and concise description of the expected results... | 18 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered
## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
#... | [
-0.2923785746,
-0.0414035209,
-0.1572302878,
0.2814292014,
0.0499909259,
-0.0285517592,
0.1625797749,
0.5453563929,
0.2865177393,
0.0777813271,
-0.168789044,
0.2441957742,
-0.1849417686,
-0.0490182601,
-0.0428375825,
0.0211599749,
-0.0528010242,
-0.1682505608,
-0.2153728455,
-0... |
https://github.com/huggingface/datasets/issues/4268 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered | Thanks a lot, @cakiki.
@i-am-neo, I'm closing this issue for now because the dataset is not publicly available yet. Just stay tuned, as we will soon release all the BigScience open-license datasets. | ## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
## Expected results
A clear and concise description of the expected results... | 32 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered
## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
#... | [
-0.2923785746,
-0.0414035209,
-0.1572302878,
0.2814292014,
0.0499909259,
-0.0285517592,
0.1625797749,
0.5453563929,
0.2865177393,
0.0777813271,
-0.168789044,
0.2441957742,
-0.1849417686,
-0.0490182601,
-0.0428375825,
0.0211599749,
-0.0528010242,
-0.1682505608,
-0.2153728455,
-0... |
https://github.com/huggingface/datasets/issues/4268 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered | Thanks for letting me know, @albertvillanova @cakiki.
Any chance of having a subset alpha version in the meantime?
I only need two dicts out of Wiktionary: 1) phoneme (as key): word, and 2) word (as key): its phonemes.
Would like to use it for a mini-poc [Robust ASR](https://github.com/huggingface/transformers/issu... | ## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
## Expected results
A clear and concise description of the expected results... | 88 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered
## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
#... | [
-0.2923785746,
-0.0414035209,
-0.1572302878,
0.2814292014,
0.0499909259,
-0.0285517592,
0.1625797749,
0.5453563929,
0.2865177393,
0.0777813271,
-0.168789044,
0.2441957742,
-0.1849417686,
-0.0490182601,
-0.0428375825,
0.0211599749,
-0.0528010242,
-0.1682505608,
-0.2153728455,
-0... |
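The two dicts requested above (phoneme-to-words and word-to-phonemes) can be built in one pass over any word/IPA pairs. A hedged sketch on toy entries — the real structure of parsed Wiktionary pronunciations is an assumption here:

```python
from collections import defaultdict

# Toy (word, phonemes) pairs standing in for parsed Wiktionary entries.
entries = [
    ("word", ["w", "ɜː", "d"]),
    ("ward", ["w", "ɔː", "d"]),
]

word_to_phonemes = {}
phoneme_to_words = defaultdict(set)
for word, phonemes in entries:
    word_to_phonemes[word] = phonemes
    for p in phonemes:
        phoneme_to_words[p].add(word)

assert word_to_phonemes["word"] == ["w", "ɜː", "d"]
assert phoneme_to_words["w"] == {"word", "ward"}
```

Mapping phonemes to sets of words (rather than single words) handles the fact that many words share a phoneme; the Robust ASR use case would look words up in both directions.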
https://github.com/huggingface/datasets/issues/4268 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered | Hey @i-am-neo,
Cool to hear that you're working on Robust ASR! Feel free to drop me a mail :-) | ## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
## Expected results
A clear and concise description of the expected results... | 19 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered
## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
#... | [
-0.2923785746,
-0.0414035209,
-0.1572302878,
0.2814292014,
0.0499909259,
-0.0285517592,
0.1625797749,
0.5453563929,
0.2865177393,
0.0777813271,
-0.168789044,
0.2441957742,
-0.1849417686,
-0.0490182601,
-0.0428375825,
0.0211599749,
-0.0528010242,
-0.1682505608,
-0.2153728455,
-0... |
https://github.com/huggingface/datasets/issues/4268 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered | @i-am-neo This particular subset of the dataset was taken from the [CirrusSearch dumps](https://dumps.wikimedia.org/other/cirrussearch/current/)
You're specifically after the [enwiktionary-20220425-cirrussearch-content.json.gz](https://dumps.wikimedia.org/other/cirrussearch/current/enwiktionary-20220425-cirrussearch-c... | ## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
## Expected results
A clear and concise description of the expected results... | 19 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered
## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
#... | [
-0.2923785746,
-0.0414035209,
-0.1572302878,
0.2814292014,
0.0499909259,
-0.0285517592,
0.1625797749,
0.5453563929,
0.2865177393,
0.0777813271,
-0.168789044,
0.2441957742,
-0.1849417686,
-0.0490182601,
-0.0428375825,
0.0211599749,
-0.0528010242,
-0.1682505608,
-0.2153728455,
-0... |
https://github.com/huggingface/datasets/issues/4268 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered | thanks @cakiki ! <del>I could access the gz file yesterday (but neglected to tuck it away somewhere safe), and today the link is throwing a 404. Can you help? </del> Never mind, got it! | ## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
## Expected results
A clear and concise description of the expected results... | 34 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered
## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
#... | [
-0.2923785746,
-0.0414035209,
-0.1572302878,
0.2814292014,
0.0499909259,
-0.0285517592,
0.1625797749,
0.5453563929,
0.2865177393,
0.0777813271,
-0.168789044,
0.2441957742,
-0.1849417686,
-0.0490182601,
-0.0428375825,
0.0211599749,
-0.0528010242,
-0.1682505608,
-0.2153728455,
-0... |
https://github.com/huggingface/datasets/issues/4261 | data leakage in `webis/conclugen` dataset | Hi @xflashxx, thanks for reporting.
Please note that this dataset was generated and shared by Webis Group: https://huggingface.co/webis
We are contacting the dataset owners to inform them about the issue you found. We'll keep you updated of their reply. | ## Describe the bug
Some samples (argument-conclusion pairs) in the *training* split of the `webis/conclugen` dataset are present in both the *validation* and *test* splits, creating data leakage and distorting model results.
Furthermore, all splits contain duplicate samples.
## Steps to reproduce the bug
```pyth... | 39 | data leakage in `webis/conclugen` dataset
## Describe the bug
Some samples (argument-conclusion pairs) in the *training* split of the `webis/conclugen` dataset are present in both the *validation* and *test* splits, creating data leakage and distorting model results.
Furthermore, all splits contain duplicate sample... | [
-0.36332196,
-0.0242240075,
-0.1429641396,
0.402130723,
-0.3650673032,
-0.0415427275,
0.2132989019,
0.3561701775,
-0.1063962504,
-0.1708764583,
-0.1198273748,
0.3781647682,
0.1482660472,
-0.1364054382,
0.0034131473,
-0.0325714909,
0.0117843514,
0.0138980728,
-0.1497119665,
-0.1... |
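The leakage and duplication reported above can be detected by hashing each example on its argument–conclusion pair and intersecting the splits. A hedged sketch on toy splits — the field names are assumptions, not the dataset's actual schema:

```python
def pair_set(split):
    # Key each example by its (argument, conclusion) pair.
    return {(ex["argument"], ex["conclusion"]) for ex in split}

train = [
    {"argument": "a1", "conclusion": "c1"},
    {"argument": "a2", "conclusion": "c2"},
    {"argument": "a2", "conclusion": "c2"},  # duplicate within the split
]
test = [{"argument": "a2", "conclusion": "c2"}]

# Pairs shared across splits indicate train/test leakage.
leaked = pair_set(train) & pair_set(test)
# The gap between list length and set size counts within-split duplicates.
dupes_in_train = len(train) - len(pair_set(train))

assert leaked == {("a2", "c2")}
assert dupes_in_train == 1
```

As the maintainer's reply notes, near-duplicates that differ only in metadata (e.g. timestamps) would evade an exact-match check like this one, so keying on the text fields alone is the safer choice.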
https://github.com/huggingface/datasets/issues/4261 | data leakage in `webis/conclugen` dataset | Thanks for reporting this @xflashxx. I'll have a look and get back to you on this. | ## Describe the bug
Some samples (argument-conclusion pairs) in the *training* split of the `webis/conclugen` dataset are present in both the *validation* and *test* splits, creating data leakage and distorting model results.
Furthermore, all splits contain duplicate samples.
## Steps to reproduce the bug
```pyth... | 16 | data leakage in `webis/conclugen` dataset
## Describe the bug
Some samples (argument-conclusion pairs) in the *training* split of the `webis/conclugen` dataset are present in both the *validation* and *test* splits, creating data leakage and distorting model results.
Furthermore, all splits contain duplicate sample... | [
-0.36332196,
-0.0242240075,
-0.1429641396,
0.402130723,
-0.3650673032,
-0.0415427275,
0.2132989019,
0.3561701775,
-0.1063962504,
-0.1708764583,
-0.1198273748,
0.3781647682,
0.1482660472,
-0.1364054382,
0.0034131473,
-0.0325714909,
0.0117843514,
0.0138980728,
-0.1497119665,
-0.1... |
https://github.com/huggingface/datasets/issues/4261 | data leakage in `webis/conclugen` dataset | Hi @xflashxx and @albertvillanova,
I have updated the files with de-duplicated splits. Apparently the debate portals from which part of the examples were sourced had unique timestamps for some examples (up to 6%; updated counts in the README) without any actual content updates, which led to "new" items. The length of...
Some samples (argument-conclusion pairs) in the *training* split of the `webis/conclugen` dataset are present in both the *validation* and *test* splits, creating data leakage and distorting model results.
Furthermore, all splits contain duplicate samples.
## Steps to reproduce the bug
```pyth... | 164 | data leakage in `webis/conclugen` dataset
## Describe the bug
Some samples (argument-conclusion pairs) in the *training* split of the `webis/conclugen` dataset are present in both the *validation* and *test* splits, creating data leakage and distorting model results.
Furthermore, all splits contain duplicate sample... | [
-0.36332196,
-0.0242240075,
-0.1429641396,
0.402130723,
-0.3650673032,
-0.0415427275,
0.2132989019,
0.3561701775,
-0.1063962504,
-0.1708764583,
-0.1198273748,
0.3781647682,
0.1482660472,
-0.1364054382,
0.0034131473,
-0.0325714909,
0.0117843514,
0.0138980728,
-0.1497119665,
-0.1... |
https://github.com/huggingface/datasets/issues/4261 | data leakage in `webis/conclugen` dataset | Thanks @shahbazsyed for your fast fix.
As a side note:
- Your email appearing as Point of Contact in the dataset README has a typo: @uni.leipzig.de instead of @uni-leipzig.de
- Your commits on the Hub are not linked to your profile on the Hub: this is because we use the email address to make this link; the email a... | ## Describe the bug
Some samples (argument-conclusion pairs) in the *training* split of the `webis/conclugen` dataset are present in both the *validation* and *test* splits, creating data leakage and distorting model results.
Furthermore, all splits contain duplicate samples.
## Steps to reproduce the bug
```pyth... | 74 | data leakage in `webis/conclugen` dataset
## Describe the bug
Some samples (argument-conclusion pairs) in the *training* split of the `webis/conclugen` dataset are present in both the *validation* and *test* splits, creating data leakage and distorting model results.
Furthermore, all splits contain duplicate sample... | [
-0.36332196,
-0.0242240075,
-0.1429641396,
0.402130723,
-0.3650673032,
-0.0415427275,
0.2132989019,
0.3561701775,
-0.1063962504,
-0.1708764583,
-0.1198273748,
0.3781647682,
0.1482660472,
-0.1364054382,
0.0034131473,
-0.0325714909,
0.0117843514,
0.0138980728,
-0.1497119665,
-0.1... |
https://github.com/huggingface/datasets/issues/4248 | conll2003 dataset loads original data. | Thanks for reporting @sue99.
Unfortunately. I'm not able to reproduce your problem:
```python
In [1]: import datasets
...: from datasets import load_dataset
...: dataset = load_dataset("conll2003")
In [2]: dataset
Out[2]:
DatasetDict({
train: Dataset({
features: ['id', 'tokens', 'pos_ta... | ## Describe the bug
I load `conll2003` dataset to use refined data like [this](https://huggingface.co/datasets/conll2003/viewer/conll2003/train) preview, but it is original data that contains `'-DOCSTART- -X- -X- O'` text.
Is this a bug or should I use another dataset_name like `lhoestq/conll2003` ?
## Steps to... | 158 | conll2003 dataset loads original data.
## Describe the bug
I load `conll2003` dataset to use refined data like [this](https://huggingface.co/datasets/conll2003/viewer/conll2003/train) preview, but it is original data that contains `'-DOCSTART- -X- -X- O'` text.
Is this a bug or should I use another dataset_name ... | [
0.0448263213,
0.2318243235,
0.0067998567,
0.4537388086,
0.0293568652,
0.0392917953,
0.3254444003,
0.3471554816,
-0.4062146246,
-0.0317827091,
-0.1551049054,
0.396732986,
0.0610170625,
0.2233855128,
0.008579554,
0.1582965702,
0.2671114206,
0.3820191026,
-0.0271611772,
-0.1606874... |
https://github.com/huggingface/datasets/issues/4247 | The data preview of XGLUE | Thanks for reporting @czq1999.
Note that the dataset viewer uses the dataset in streaming mode and that not all datasets support streaming yet.
That is the case for XGLUE dataset (as the error message points out): this must be refactored to support streaming. | It seems that something is wrong with the data preview of XGLUE | 43 | The data preview of XGLUE
It seems that something is wrong with the data preview of XGLUE
Thanks for reporting @czq1999.
Note that the dataset viewer uses the dataset in streaming mode and that not all datasets support streaming yet.
That is the case for XGLUE dataset (as the error message points out): this mus... | [
-0.5521460772,
-0.2582035363,
-0.0797492862,
0.0360034965,
0.12222258,
-0.0657666773,
0.1930060834,
0.3570640087,
-0.1430572718,
0.2119454741,
0.055040326,
0.1502947807,
-0.0410023369,
0.2117510885,
-0.1211941168,
-0.1035284176,
-0.0353827253,
0.1561267525,
-0.0141941961,
-0.03... |
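The comment above explains that the dataset viewer depends on streaming support. The core idea of streaming — examples produced lazily instead of being downloaded and materialized up front — can be sketched without the library (a simplification of `datasets.IterableDataset`; the sample rows below are invented):

```python
# Simplified picture of streaming mode: rows are yielded lazily from the
# source, so nothing is downloaded or materialized ahead of time.
def stream_examples(lines):
    for line in lines:  # in the real case, rows read from a remote file
        label, _, text = line.partition("\t")
        yield {"label": label, "text": text}

source = ["pos\tgreat movie", "neg\tboring plot"]
examples = stream_examples(source)  # no work done yet (generator)
first = next(examples)
print(first)  # {'label': 'pos', 'text': 'great movie'}
```

A dataset script that only supports random file access (e.g. seeking inside an archive) cannot be consumed this way, which is why such datasets need refactoring before the viewer can display them.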
https://github.com/huggingface/datasets/issues/4241 | NonMatchingChecksumError when attempting to download GLUE | Hi :)
I think your issue may be related to the older `nlp` library. I was able to download `glue` with the latest version of `datasets`. Can you try updating with:
```py
pip install -U datasets
```
Then you can download:
```py
from datasets import load_dataset
ds = load_dataset("glue", "rte")
``` | ## Describe the bug
I am trying to download the GLUE dataset from the NLP module but get an error (see below).
## Steps to reproduce the bug
```python
import nlp
nlp.__version__ # '0.2.0'
nlp.load_dataset('glue', name="rte", download_mode="force_redownload")
```
## Expected results
I expect the dataset to ... | 51 | NonMatchingChecksumError when attempting to download GLUE
## Describe the bug
I am trying to download the GLUE dataset from the NLP module but get an error (see below).
## Steps to reproduce the bug
```python
import nlp
nlp.__version__ # '0.2.0'
nlp.load_dataset('glue', name="rte", download_mode="force_redown... | [
0.101099439,
-0.0903691947,
0.0413370356,
0.3583336473,
0.1278837472,
0.0999461636,
-0.1866522282,
0.334582448,
0.4952836633,
-0.1053451374,
-0.1394158453,
0.1311296523,
-0.0658304244,
-0.1457225382,
0.0314150602,
0.1970958412,
-0.0847944096,
0.1571736187,
-0.1785757393,
0.0489... |
https://github.com/huggingface/datasets/issues/4241 | NonMatchingChecksumError when attempting to download GLUE | This appears to work. Thank you!
On Wed, Apr 27, 2022, 1:18 PM Steven Liu ***@***.***> wrote:
> Hi :)
>
> I think your issue may be related to the older nlp library. I was able to
> download glue with the latest version of datasets. Can you try updating
> with:
>
> pip install -U datasets
>
> Then you can download:
>... | ## Describe the bug
I am trying to download the GLUE dataset from the NLP module but get an error (see below).
## Steps to reproduce the bug
```python
import nlp
nlp.__version__ # '0.2.0'
nlp.load_dataset('glue', name="rte", download_mode="force_redownload")
```
## Expected results
I expect the dataset to ... | 110 | NonMatchingChecksumError when attempting to download GLUE
## Describe the bug
I am trying to download the GLUE dataset from the NLP module but get an error (see below).
## Steps to reproduce the bug
```python
import nlp
nlp.__version__ # '0.2.0'
nlp.load_dataset('glue', name="rte", download_mode="force_redown... | [
0.101099439,
-0.0903691947,
0.0413370356,
0.3583336473,
0.1278837472,
0.0999461636,
-0.1866522282,
0.334582448,
0.4952836633,
-0.1053451374,
-0.1394158453,
0.1311296523,
-0.0658304244,
-0.1457225382,
0.0314150602,
0.1970958412,
-0.0847944096,
0.1571736187,
-0.1785757393,
0.0489... |
https://github.com/huggingface/datasets/issues/4238 | Dataset caching policy | Hi @loretoparisi, thanks for reporting.
There is an option to force the redownload of the data files (and thus not using previously download and cached data files): `load_dataset(..., download_mode="force_redownload")`.
Please, let me know if this fixes your problem.
I can confirm you that your dataset loads w... | ## Describe the bug
I cannot clean cache of my datasets files, despite I have updated the `csv` files on the repository [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences). The original file had a line with bad characters, causing the following error
```
[/usr/local/lib/python3.7/dist-packages/d... | 87 | Dataset caching policy
## Describe the bug
I cannot clean cache of my datasets files, despite I have updated the `csv` files on the repository [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences). The original file had a line with bad characters, causing the following error
```
[/usr/local/lib/... | [
0.0123936096,
0.2971666455,
-0.0013778415,
0.2417485565,
0.2187064886,
0.223115772,
0.2011682987,
0.3845502436,
0.047587432,
-0.114898473,
-0.0990488455,
-0.2024639547,
-0.043204993,
-0.1892879307,
-0.1385836452,
0.0513460636,
0.111731194,
-0.0098836049,
0.2588098049,
0.0954153... |
https://github.com/huggingface/datasets/issues/4238 | Dataset caching policy | @albertvillanova thank you, it seems it still does not work using:
```python
sentences = load_dataset(
"loretoparisi/tatoeba-sentences",
data_files=data_files,
delimiter='\t',
column_names=['label', 'text'],
download_mode="force_redownload"
)
```
[This](https://colab.research.googl... | ## Describe the bug
I cannot clean cache of my datasets files, despite I have updated the `csv` files on the repository [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences). The original file had a line with bad characters, causing the following error
```
[/usr/local/lib/python3.7/dist-packages/d... | 125 | Dataset caching policy
## Describe the bug
I cannot clean cache of my datasets files, despite I have updated the `csv` files on the repository [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences). The original file had a line with bad characters, causing the following error
```
[/usr/local/lib/... | [
0.0123936096,
0.2971666455,
-0.0013778415,
0.2417485565,
0.2187064886,
0.223115772,
0.2011682987,
0.3845502436,
0.047587432,
-0.114898473,
-0.0990488455,
-0.2024639547,
-0.043204993,
-0.1892879307,
-0.1385836452,
0.0513460636,
0.111731194,
-0.0098836049,
0.2588098049,
0.0954153... |
https://github.com/huggingface/datasets/issues/4238 | Dataset caching policy | SOLVED! The problem was the with the file itself, using caching parameter helped indeed.
Thanks for helping! | ## Describe the bug
I cannot clean cache of my datasets files, despite I have updated the `csv` files on the repository [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences). The original file had a line with bad characters, causing the following error
```
[/usr/local/lib/python3.7/dist-packages/d... | 17 | Dataset caching policy
## Describe the bug
I cannot clean cache of my datasets files, despite I have updated the `csv` files on the repository [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences). The original file had a line with bad characters, causing the following error
```
[/usr/local/lib/... | [
0.0123936096,
0.2971666455,
-0.0013778415,
0.2417485565,
0.2187064886,
0.223115772,
0.2011682987,
0.3845502436,
0.047587432,
-0.114898473,
-0.0990488455,
-0.2024639547,
-0.043204993,
-0.1892879307,
-0.1385836452,
0.0513460636,
0.111731194,
-0.0098836049,
0.2588098049,
0.0954153... |
https://github.com/huggingface/datasets/issues/4237 | Common Voice 8 doesn't show datasets viewer | Thanks for reporting. I understand it's an error in the dataset script. To reproduce:
```python
>>> import datasets as ds
>>> split_names = ds.get_dataset_split_names("mozilla-foundation/common_voice_8_0", use_auth_token="**********")
Downloading builder script: 100%|ββββββββββββββββββββββββββββββββββββββββββββββ... | https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0 | 151 | Common Voice 8 doesn't show datasets viewer
https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
Thanks for reporting. I understand it's an error in the dataset script. To reproduce:
```python
>>> import datasets as ds
>>> split_names = ds.get_dataset_split_names("mozilla-foundation/common_voice... | [
-0.6561425328,
-0.004003874,
0.0054231901,
0.1310704499,
0.2854442596,
0.2682386339,
0.4165343642,
0.3768597543,
0.1568085253,
0.109484069,
-0.2961753607,
-0.0419526771,
-0.2016284615,
-0.134046182,
0.0574726276,
0.144657135,
-0.1379270107,
0.2300461233,
0.4713993371,
-0.108759... |
https://github.com/huggingface/datasets/issues/4237 | Common Voice 8 doesn't show datasets viewer | Thanks for reporting @patrickvonplaten and thanks for the investigation @severo.
Unfortunately I'm not able to reproduce the error.
I think the error has to do with authentication with `huggingface_hub`, because the exception is thrown from these code lines: https://huggingface.co/datasets/mozilla-foundation/comm... | https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0 | 70 | Common Voice 8 doesn't show datasets viewer
https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
Thanks for reporting @patrickvonplaten and thanks for the investigation @severo.
Unfortunately I'm not able to reproduce the error.
I think the error has to do with authentication with `huggingface_... | [
-0.3818594217,
-0.2081790715,
0.0576239452,
0.2595281303,
0.2959937453,
0.3319201767,
0.3500491679,
0.1816910803,
0.2192064673,
0.0894685835,
-0.5308890939,
-0.1895972043,
-0.136817351,
-0.2363358289,
0.1168588027,
0.0489676781,
-0.0462487526,
0.1060479879,
0.5292467475,
-0.134... |
https://github.com/huggingface/datasets/issues/4237 | Common Voice 8 doesn't show datasets viewer | OK, thanks for digging a bit into it. Indeed, the error occurs with the dataset-viewer, but not with a normal user token, because we use an app token, and it does not have a related email!
```python
>>> from huggingface_hub import HfApi, HfFolder
>>> auth_token = "hf_app_******"
>>> t = HfApi().whoami(auth_token)... | https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0 | 105 | Common Voice 8 doesn't show datasets viewer
https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
OK, thanks for digging a bit into it. Indeed, the error occurs with the dataset-viewer, but not with a normal user token, because we use an app token, and it does not have a related email!
```python
>... | [
-0.2868961394,
-0.1114272773,
0.054076951,
0.0521808229,
0.2599704862,
0.1541742533,
0.5229559541,
0.2172734588,
0.2576426268,
0.1800563484,
-0.5013438463,
-0.2839031816,
-0.0835964233,
-0.018772684,
0.026273394,
0.0892328024,
-0.0147990091,
0.0623915866,
0.4439668953,
-0.20272... |
https://github.com/huggingface/datasets/issues/4237 | Common Voice 8 doesn't show datasets viewer | We can workaround this with
```python
email = HfApi().whoami(auth_token).get("email", "system@huggingface.co")
```
in the common voice scripts | https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0 | 16 | Common Voice 8 doesn't show datasets viewer
https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
We can workaround this with
```python
email = HfApi().whoami(auth_token).get("email", "system@huggingface.co")
```
in the common voice scripts | [
-0.4095231295,
0.0020364104,
0.0243135188,
0.146247983,
0.2652965188,
0.2654664814,
0.4983243048,
0.3220642507,
0.3277715743,
0.2375538051,
-0.5881981254,
-0.2214309275,
0.0245003197,
0.0251563713,
0.2471105903,
0.0379921906,
-0.1972731203,
0.1432468742,
0.3389151096,
-0.202468... |
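The workaround in the comment above hinges on `dict.get` with a default: `whoami` returns a dict that contains an `email` key for user tokens but not for app tokens. A minimal sketch of the two cases (the token payloads here are invented examples, not real API responses):

```python
# whoami() returns a dict describing the token's owner. A user token
# carries an "email" key; an app token does not, which is what broke
# the dataset viewer. The workaround falls back to a default address.
user_info = {"name": "some-user", "email": "user@example.com"}
app_info = {"name": "some-app"}  # app token: no "email" key

DEFAULT_EMAIL = "system@huggingface.co"

def email_for(info):
    return info.get("email", DEFAULT_EMAIL)

print(email_for(user_info))  # user@example.com
print(email_for(app_info))   # system@huggingface.co
```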
https://github.com/huggingface/datasets/issues/4237 | Common Voice 8 doesn't show datasets viewer | Hmmm, does this mean that any person who downloads the common voice dataset will be logged as "system@huggingface.co"? If so, it would defeat the purpose of sending the user's email to the commonvoice API, right? | https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0 | 35 | Common Voice 8 doesn't show datasets viewer
https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
Hmmm, does this mean that any person who downloads the common voice dataset will be logged as "system@huggingface.co"? If so, it would defeat the purpose of sending the user's email to the commonvoice API... | [
-0.1696701944,
0.1444740295,
0.0457174219,
0.2471178919,
-0.0200229455,
0.3168714345,
0.5003870726,
0.0581495129,
0.3065966666,
-0.0658347681,
-0.356238693,
-0.1942665726,
-0.0795760378,
-0.109813571,
0.2577853203,
0.2352219075,
-0.012772751,
0.1207207665,
0.3894254565,
-0.3368... |
https://github.com/huggingface/datasets/issues/4237 | Common Voice 8 doesn't show datasets viewer | I agree with @severo: we cannot set our system email as default, allowing anybody not authenticated to by-pass the Common Voice usage policy.
Additionally, looking at the code, I think we should implement a more robust way to send user email to Common Voice: currently anybody can tweak the script and send somebody e... | https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0 | 61 | Common Voice 8 doesn't show datasets viewer
https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
I agree with @severo: we cannot set our system email as default, allowing anybody not authenticated to by-pass the Common Voice usage policy.
Additionally, looking at the code, I think we should implem... | [
-0.4233382344,
0.2953563631,
0.0905707181,
-0.0621563978,
0.099763453,
0.1812495142,
0.7674459815,
0.2327231616,
0.2935930789,
0.1703319103,
-0.0979548171,
0.0021814017,
-0.0064492924,
-0.1200801954,
-0.0111923395,
0.2903728783,
-0.2454335988,
0.1893628985,
0.1547878832,
-0.187... |
https://github.com/huggingface/datasets/issues/4237 | Common Voice 8 doesn't show datasets viewer | Hmm I don't agree here.
Anybody can always just bypass the system by setting whatever email. As soon as someone has access to the downloading script it's trivial to tweak the code to not send the "correct" email but to just whatever and it would work.
Note that someone only has visibility on the code after havin... | https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0 | 111 | Common Voice 8 doesn't show datasets viewer
https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
Hmm I don't agree here.
Anybody can always just bypass the system by setting whatever email. As soon as someone has access to the downloading script it's trivial to tweak the code to not send the "cor... | [
-0.3276700377,
0.2826412618,
0.0286960993,
0.1016934812,
-0.0353756994,
0.0222149193,
0.6439489722,
0.076558128,
0.3533987701,
0.2049578875,
-0.250772357,
-0.1054484919,
-0.0896840245,
-0.0457500294,
0.1518868357,
0.3686565459,
-0.2056715786,
0.1514365524,
0.1459309459,
-0.1047... |
https://github.com/huggingface/datasets/issues/4237 | Common Voice 8 doesn't show datasets viewer | > Additionally, looking at the code, I think we should implement a more robust way to send user email to Common Voice: currently anybody can tweak the script and send somebody else email instead.
Yes, I agree we can forget about this @patrickvonplaten. After having had a look at Common Voice website, I've seen they ... | https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0 | 136 | Common Voice 8 doesn't show datasets viewer
https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
> Additionally, looking at the code, I think we should implement a more robust way to send user email to Common Voice: currently anybody can tweak the script and send somebody else email instead.
Yes, ... | [
-0.4378187358,
0.2820081115,
-0.0171392243,
-0.0312947929,
0.0072664716,
0.0896093771,
0.5135827661,
0.3128056526,
0.1830434948,
0.1410139501,
-0.3661122322,
-0.1130832881,
-0.072842598,
-0.0658885613,
0.1129050553,
0.0829916894,
-0.1138803065,
0.2530531585,
0.3841428459,
-0.25... |
https://github.com/huggingface/datasets/issues/4230 | Why the `conll2003` dataset on huggingface only contains the `en` subset? Where is the German data? | Thanks for reporting @beyondguo.
Indeed, we generate this dataset from this raw data file URL: https://data.deepai.org/conll2003.zip
And that URL only contains the English version. | 
But on huggingface datasets:

Where is the German data? | 24 | Why the `conll2003` dataset on huggingface only contains the `en` subset? Where is the German data?

But on huggingface datasets:
,
"key2_in_dict": datasets.Value("int32"),
...
}
],
```
Feel free to re-open thi... | Hi, I'm trying to create the loading script for a dataset in which one feature is a list of dictionaries, which afaik doesn't fit very well the values and structures supported by Value and Sequence. Is there any suggested workaround, am I missing something?
Thank you in advance. | 48 | Dictionary Feature
Hi, I'm trying to create the loading script for a dataset in which one feature is a list of dictionaries, which afaik doesn't fit very well the values and structures supported by Value and Sequence. Is there any suggested workaround, am I missing something?
Thank you in advance.
Hi @jordiae,
... | [
-0.046249602,
-0.4512825608,
-0.170710057,
0.1658608615,
0.093263872,
-0.0161834396,
0.1318405122,
0.1520017833,
0.4049992263,
0.1299023777,
0.1432440877,
0.223382771,
-0.070746623,
0.7212549448,
-0.1806515455,
-0.2812748551,
-0.0481466874,
0.1059780419,
0.0737303942,
0.0739523... |
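An example record matching the list-of-dict feature declaration quoted above would be shaped like this (plain Python for illustration; `key1_in_dict`/`key2_in_dict` are the placeholder names from the comment, and a real loading script declares the types with `datasets.Value` objects):

```python
# Shape of the feature declaration from the comment, with the types
# written as plain strings for illustration (datasets.Value in a script).
features = {
    "list_of_dict_feature": [
        {"key1_in_dict": "string", "key2_in_dict": "int32"},
    ],
}

# A matching example: the field holds a list of dicts with those keys.
example = {
    "list_of_dict_feature": [
        {"key1_in_dict": "first", "key2_in_dict": 1},
        {"key1_in_dict": "second", "key2_in_dict": 2},
    ],
}

declared_keys = set(features["list_of_dict_feature"][0])
assert all(set(d) == declared_keys for d in example["list_of_dict_feature"])
```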
https://github.com/huggingface/datasets/issues/4221 | Dictionary Feature | > Hi @jordiae,
>
> Instead of the `Sequence` feature, you can use just a regular list: put the dict between `[` and `]`:
>
> ```python
> "list_of_dict_feature": [
> {
> "key1_in_dict": datasets.Value("string"),
> "key2_in_dict": datasets.Value("int32"),
> ...
> }
> ],
> ```... | Hi, I'm trying to create the loading script for a dataset in which one feature is a list of dictionaries, which afaik doesn't fit very well the values and structures supported by Value and Sequence. Is there any suggested workaround, am I missing something?
Thank you in advance. | 65 | Dictionary Feature
Hi, I'm trying to create the loading script for a dataset in which one feature is a list of dictionaries, which afaik doesn't fit very well the values and structures supported by Value and Sequence. Is there any suggested workaround, am I missing something?
Thank you in advance.
> Hi @jordiae,... | [
-0.0429693162,
-0.4704377353,
-0.1678056717,
0.1911447197,
0.0661601573,
-0.0045956159,
0.1355632991,
0.1641891599,
0.4212374985,
0.1347170472,
0.1320910752,
0.2042677552,
-0.085127905,
0.7377755642,
-0.1715188473,
-0.2838612199,
-0.0400621369,
0.0985410661,
0.0720809624,
0.065... |
https://github.com/huggingface/datasets/issues/4217 | Big_Patent dataset broken | Thanks for reporting. The issue seems not to be directly related to the dataset viewer or the `datasets` library, but instead to it being hosted on Google Drive.
See related issues: https://github.com/huggingface/datasets/issues?q=is%3Aissue+is%3Aopen+drive.google.com
To quote [@lhoestq](https://github.com/huggin... | ## Dataset viewer issue for '*big_patent*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/big_patent/viewer/all/train)*
*Unable to view because it says FileNotFound, also cannot download it through the python API*
Am I the one who added this dataset ? No
| 62 | Big_Patent dataset broken
## Dataset viewer issue for '*big_patent*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/big_patent/viewer/all/train)*
*Unable to view because it says FileNotFound, also cannot download it through the python API*
Am I the one who added this dataset ? N... | [
-0.4834668934,
0.104641214,
-0.0131414868,
0.3420283496,
0.1577983648,
0.0383427329,
0.1688533425,
0.2354457974,
0.2180176079,
-0.1323463321,
-0.0315063484,
-0.1041555107,
-0.2722391188,
0.4202748537,
0.2981956005,
0.1469314247,
0.1277819872,
0.017385602,
-0.0924532115,
0.14070... |
https://github.com/huggingface/datasets/issues/4217 | Big_Patent dataset broken | We should find out if the dataset license allows redistribution and contact the data owners to propose them to host their data on our Hub. | ## Dataset viewer issue for '*big_patent*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/big_patent/viewer/all/train)*
*Unable to view because it says FileNotFound, also cannot download it through the python API*
Am I the one who added this dataset ? No
| 25 | Big_Patent dataset broken
## Dataset viewer issue for '*big_patent*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/big_patent/viewer/all/train)*
*Unable to view because it says FileNotFound, also cannot download it through the python API*
Am I the one who added this dataset ? N... | [
-0.2477789819,
0.0562162139,
-0.0191622525,
0.4259560406,
0.0408797935,
-0.0157361049,
0.1593547463,
0.2846546471,
0.2502851188,
-0.0345466584,
-0.1826789826,
0.0505646504,
-0.3296605051,
0.306931138,
0.270731926,
0.2027589083,
0.1753144562,
-0.0609313212,
-0.0583923906,
-0.033... |
https://github.com/huggingface/datasets/issues/4211 | DatasetDict containing Datasets with different features when pushed to hub gets remapped features | Hi @pietrolesci, thanks for reporting.
Please note that this is a design purpose: a `DatasetDict` has the same features for all its datasets. Normally, a `DatasetDict` is composed of several sub-datasets each corresponding to a different **split**.
To handle sub-datasets with different features, we use another ap... | Hi there,
I am trying to load a dataset to the Hub. This dataset is a `DatasetDict` composed of various splits. Some splits have a different `Feature` mapping. Locally, the DatasetDict preserves the individual features but if I `push_to_hub` and then `load_dataset`, the features are all the same.
Dataset and code... | 69 | DatasetDict containing Datasets with different features when pushed to hub gets remapped features
Hi there,
I am trying to load a dataset to the Hub. This dataset is a `DatasetDict` composed of various splits. Some splits have a different `Feature` mapping. Locally, the DatasetDict preserves the individual feature... | [
0.0950680673,
-0.5818799138,
-0.0149743548,
0.3464848101,
-0.0062353625,
-0.086014092,
0.3061930239,
0.1616925895,
0.3734932244,
-0.0174942464,
-0.2923354506,
0.6108020544,
0.1393585652,
0.3625233173,
0.0080397101,
0.1103166118,
0.2680215836,
0.1151718944,
-0.0468163006,
-0.322... |
https://github.com/huggingface/datasets/issues/4211 | DatasetDict containing Datasets with different features when pushed to hub gets remapped features | Hi @albertvillanova,
Thanks a lot for your reply! I got it now. The strange thing for me was to have it correctly working (i.e., DatasetDict with different features in some datasets) locally and not on the Hub. It would be great to have configuration supported by `push_to_hub`. Personally, this latter functionality ... | Hi there,
I am trying to load a dataset to the Hub. This dataset is a `DatasetDict` composed of various splits. Some splits have a different `Feature` mapping. Locally, the DatasetDict preserves the individual features but if I `push_to_hub` and then `load_dataset`, the features are all the same.
Dataset and code... | 68 | DatasetDict containing Datasets with different features when pushed to hub gets remapped features
Hi there,
I am trying to load a dataset to the Hub. This dataset is a `DatasetDict` composed of various splits. Some splits have a different `Feature` mapping. Locally, the DatasetDict preserves the individual feature... | [
0.0950680673,
-0.5818799138,
-0.0149743548,
0.3464848101,
-0.0062353625,
-0.086014092,
0.3061930239,
0.1616925895,
0.3734932244,
-0.0174942464,
-0.2923354506,
0.6108020544,
0.1393585652,
0.3625233173,
0.0080397101,
0.1103166118,
0.2680215836,
0.1151718944,
-0.0468163006,
-0.322... |
https://github.com/huggingface/datasets/issues/4211 | DatasetDict containing Datasets with different features when pushed to hub gets remapped features | Hi! Yes, we should override `DatasetDict.__setitem__` and throw an error if features dictionaries are different. `DatasetDict` is a subclass of `dict`, so `DatasetDict.{update/setdefault}` need to be overridden as well. We could avoid this by subclassing `UserDict`, but then we would get the name collision - `DatasetDi... | Hi there,
I am trying to load a dataset to the Hub. This dataset is a `DatasetDict` composed of various splits. Some splits have a different `Feature` mapping. Locally, the DatasetDict preserves the individual features but if I `push_to_hub` and then `load_dataset`, the features are all the same.
Dataset and code... | 102 | DatasetDict containing Datasets with different features when pushed to hub gets remapped features
Hi there,
I am trying to load a dataset to the Hub. This dataset is a `DatasetDict` composed of various splits. Some splits have a different `Feature` mapping. Locally, the DatasetDict preserves the individual feature... | [
0.0950680673,
-0.5818799138,
-0.0149743548,
0.3464848101,
-0.0062353625,
-0.086014092,
0.3061930239,
0.1616925895,
0.3734932244,
-0.0174942464,
-0.2923354506,
0.6108020544,
0.1393585652,
0.3625233173,
0.0080397101,
0.1103166118,
0.2680215836,
0.1151718944,
-0.0468163006,
-0.322... |
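The `__setitem__` guard proposed in the comment above can be sketched in plain Python, with `datasets.Features` simplified to an ordinary dict (the class name and structure here are illustrative, not the library's actual implementation):

```python
# Sketch of the proposed check: a dict subclass that rejects a split
# whose features differ from the splits already stored.
class CheckedDatasetDict(dict):
    def __setitem__(self, key, dataset):
        for existing in self.values():
            if existing["features"] != dataset["features"]:
                raise ValueError(
                    f"features of split {key!r} do not match the other splits"
                )
        super().__setitem__(key, dataset)

dd = CheckedDatasetDict()
dd["train"] = {"features": {"text": "string", "label": "int64"}}
dd["test"] = {"features": {"text": "string", "label": "int64"}}  # accepted

try:
    dd["extra"] = {"features": {"text": "string"}}  # mismatched features
except ValueError as err:
    print(err)
```

As the comment points out, `update` and `setdefault` would need the same treatment, since plain `dict` does not route those methods through `__setitem__`.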
https://github.com/huggingface/datasets/issues/4211 | DatasetDict containing Datasets with different features when pushed to hub gets remapped features | I would keep things simple and keep subclassing dict. Regarding the features check, I guess this can be done only for `push_to_hub` right ? It is the only function right now that requires the underlying datasets to be splits (e.g. train/test) and have the same features.
Note that later you will be able to push datas... | Hi there,
I am trying to load a dataset to the Hub. This dataset is a `DatasetDict` composed of various splits. Some splits have a different `Feature` mapping. Locally, the DatasetDict preserves the individual features but if I `push_to_hub` and then `load_dataset`, the features are all the same.
Dataset and code... | 76 | DatasetDict containing Datasets with different features when pushed to hub gets remapped features
Hi there,
I am trying to load a dataset to the Hub. This dataset is a `DatasetDict` composed of various splits. Some splits have a different `Feature` mapping. Locally, the DatasetDict preserves the individual feature... | [
0.0950680673,
-0.5818799138,
-0.0149743548,
0.3464848101,
-0.0062353625,
-0.086014092,
0.3061930239,
0.1616925895,
0.3734932244,
-0.0174942464,
-0.2923354506,
0.6108020544,
0.1393585652,
0.3625233173,
0.0080397101,
0.1103166118,
0.2680215836,
0.1151718944,
-0.0468163006,
-0.322... |
https://github.com/huggingface/datasets/issues/4210 | TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe' | Hi! Casting class labels from strings is currently not supported in the CSV loader, but you can get the same result with an additional map as follows:
```python
from datasets import load_dataset,Features,Value,ClassLabel
class_names = ["cmn","deu","rus","fra","eng","jpn","spa","ita","kor","vie","nld","epo","por","tu... | ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed ... | 134 | TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe'
### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.0+cu111... | [
-0.2058740258,
-0.5463367105,
-0.1671127826,
0.1710505188,
0.5628308058,
-0.0438255742,
0.3648130894,
0.4633278251,
0.3307608366,
0.1103612408,
-0.0379637405,
0.1857534945,
-0.1733210981,
0.0352142192,
-0.091168195,
-0.2792181075,
-0.0311877429,
0.0663471445,
-0.4036820829,
-0.... |
https://github.com/huggingface/datasets/issues/4210 | TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe' | @albertvillanova @mariosasko thank you, with that change now I get
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-9-eeb68eeb9bec>](https://localhost:8080/#) in <module>()
11 )
12 ... | ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed ... | 187 | TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe'
### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.0+cu111... | [
-0.2058740258,
-0.5463367105,
-0.1671127826,
0.1710505188,
0.5628308058,
-0.0438255742,
0.3648130894,
0.4633278251,
0.3307608366,
0.1103612408,
-0.0379637405,
0.1857534945,
-0.1733210981,
0.0352142192,
-0.091168195,
-0.2792181075,
-0.0311877429,
0.0663471445,
-0.4036820829,
-0.... |
https://github.com/huggingface/datasets/issues/4210 | TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe' | @mariosasko changed it like
```python
sentences = sentences.map(lambda ex: {"label" : features["label"].str2int(ex["label"]) if ex["label"] is not None else None}, features=features)
```
to avoid the above error. | ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed ... | 26 | TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe'
### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.0+cu111... | [
-0.2058740258,
-0.5463367105,
-0.1671127826,
0.1710505188,
0.5628308058,
-0.0438255742,
0.3648130894,
0.4633278251,
0.3307608366,
0.1103612408,
-0.0379637405,
0.1857534945,
-0.1733210981,
0.0352142192,
-0.091168195,
-0.2792181075,
-0.0311877429,
0.0663471445,
-0.4036820829,
-0.... |
https://github.com/huggingface/datasets/issues/4210 | TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe' | Any update on this? Is this correct ?
> @mariosasko changed it like
>
> ```python
> sentences = sentences.map(lambda ex: {"label" : features["label"].str2int(ex["label"]) if ex["label"] is not None else None}, features=features)
> ```
>
> to avoid the above error.
| ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed ... | 41 | TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe'
### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.0+cu111... | [
-0.2058740258,
-0.5463367105,
-0.1671127826,
0.1710505188,
0.5628308058,
-0.0438255742,
0.3648130894,
0.4633278251,
0.3307608366,
0.1103612408,
-0.0379637405,
0.1857534945,
-0.1733210981,
0.0352142192,
-0.091168195,
-0.2792181075,
-0.0311877429,
0.0663471445,
-0.4036820829,
-0.... |
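The `str2int` workaround quoted in this thread maps string labels to class ids while letting `None` labels pass through. A stdlib-only sketch of that logic (`ClassLabelStub` is a hypothetical stand-in for `datasets.ClassLabel`, and `map_labels` for `Dataset.map`):

```python
# Hypothetical stand-in for datasets.ClassLabel: maps label names to ids.
class ClassLabelStub:
    def __init__(self, names):
        self._name2id = {name: i for i, name in enumerate(names)}

    def str2int(self, name):
        return self._name2id[name]


def map_labels(examples, label_feature):
    # Mirrors the quoted lambda: cast the label only when it is not None,
    # so unlabeled rows survive instead of raising a cast error.
    return [
        {
            **ex,
            "label": label_feature.str2int(ex["label"])
            if ex["label"] is not None
            else None,
        }
        for ex in examples
    ]


label = ClassLabelStub(["negative", "positive"])
rows = [{"label": "positive"}, {"label": None}, {"label": "negative"}]
print(map_labels(rows, label))  # [{'label': 1}, {'label': None}, {'label': 0}]
```

The key point is the `if ex["label"] is not None` guard: casting a `None` label directly is what triggers the cast error reported in this issue.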
https://github.com/huggingface/datasets/issues/4199 | Cache miss during reload for datasets using image fetch utilities through map | Hi ! Maybe one of the objects in the function is not deterministic across sessions ? You can read more about it and how to investigate here: https://huggingface.co/docs/datasets/about_cache | ## Describe the bug
It looks like the result of a `.map` operation on a dataset is missing the cache when you reload the script and always runs from scratch. In the same interpreter session, it is able to find the cache and reload it. But, when you exit the interpreter and reload it, the downloading starts from scratch.
... | 28 | Cache miss during reload for datasets using image fetch utilities through map
## Describe the bug
It looks like the result of a `.map` operation on a dataset is missing the cache when you reload the script and always runs from scratch. In the same interpreter session, it is able to find the cache and reload it. But, whe... | [
-0.5388091803,
-0.1422931403,
-0.0625115633,
0.2576912344,
0.0660191029,
0.0496222414,
0.1246499345,
-0.0102835521,
0.485123843,
0.1197426617,
-0.0498226732,
0.4438686967,
0.3680506647,
-0.1765505075,
-0.1030043662,
0.1202055439,
-0.026852401,
0.1637635529,
-0.182844162,
-0.160... |
https://github.com/huggingface/datasets/issues/4199 | Cache miss during reload for datasets using image fetch utilities through map | Hi @apsdehal! Can you verify that replacing
```python
def fetch_single_image(image_url, timeout=None, retries=0):
for _ in range(retries + 1):
try:
request = urllib.request.Request(
image_url,
data=None,
headers={"user-agent": get_datasets_... | ## Describe the bug
It looks like the result of a `.map` operation on a dataset is missing the cache when you reload the script and always runs from scratch. In the same interpreter session, it is able to find the cache and reload it. But, when you exit the interpreter and reload it, the downloading starts from scratch.
... | 88 | Cache miss during reload for datasets using image fetch utilities through map
## Describe the bug
It looks like the result of a `.map` operation on a dataset is missing the cache when you reload the script and always runs from scratch. In the same interpreter session, it is able to find the cache and reload it. But, whe... | [
-0.5388091803,
-0.1422931403,
-0.0625115633,
0.2576912344,
0.0660191029,
0.0496222414,
0.1246499345,
-0.0102835521,
0.485123843,
0.1197426617,
-0.0498226732,
0.4438686967,
0.3680506647,
-0.1765505075,
-0.1030043662,
0.1202055439,
-0.026852401,
0.1637635529,
-0.182844162,
-0.160... |
https://github.com/huggingface/datasets/issues/4199 | Cache miss during reload for datasets using image fetch utilities through map | Thanks @mariosasko. That does fix the issue. In general, I think these image downloading utilities since they are being used by a lot of image dataset should be provided as a part of `datasets` library right to keep the logic consistent and READMEs smaller? If they already exists, that is also great, please point me to... | ## Describe the bug
It looks like the result of a `.map` operation on a dataset is missing the cache when you reload the script and always runs from scratch. In the same interpreter session, it is able to find the cache and reload it. But, when you exit the interpreter and reload it, the downloading starts from scratch.
... | 63 | Cache miss during reload for datasets using image fetch utilities through map
## Describe the bug
It looks like the result of a `.map` operation on a dataset is missing the cache when you reload the script and always runs from scratch. In the same interpreter session, it is able to find the cache and reload it. But, whe... | [
-0.5388091803,
-0.1422931403,
-0.0625115633,
0.2576912344,
0.0660191029,
0.0496222414,
0.1246499345,
-0.0102835521,
0.485123843,
0.1197426617,
-0.0498226732,
0.4438686967,
0.3680506647,
-0.1765505075,
-0.1030043662,
0.1202055439,
-0.026852401,
0.1637635529,
-0.182844162,
-0.160... |
https://github.com/huggingface/datasets/issues/4199 | Cache miss during reload for datasets using image fetch utilities through map | You can find my rationale (and a proposed solution) for why these utilities are not a part of `datasets` here: https://github.com/huggingface/datasets/pull/4100#issuecomment-1097994003. | ## Describe the bug
It looks like the result of a `.map` operation on a dataset is missing the cache when you reload the script and always runs from scratch. In the same interpreter session, it is able to find the cache and reload it. But, when you exit the interpreter and reload it, the downloading starts from scratch.
... | 21 | Cache miss during reload for datasets using image fetch utilities through map
## Describe the bug
It looks like the result of a `.map` operation on a dataset is missing the cache when you reload the script and always runs from scratch. In the same interpreter session, it is able to find the cache and reload it. But, whe... | [
-0.5388091803,
-0.1422931403,
-0.0625115633,
0.2576912344,
0.0660191029,
0.0496222414,
0.1246499345,
-0.0102835521,
0.485123843,
0.1197426617,
-0.0498226732,
0.4438686967,
0.3680506647,
-0.1765505075,
-0.1030043662,
0.1202055439,
-0.026852401,
0.1637635529,
-0.182844162,
-0.160... |
https://github.com/huggingface/datasets/issues/4199 | Cache miss during reload for datasets using image fetch utilities through map | Makes sense. But, I think as the number of image datasets as grow, more people are copying pasting original code from docs to work as it is while we make fixes to them later. I think we do need a central place for these to avoid that confusion as well as more easier access to image datasets. Should we restart that disc... | ## Describe the bug
It looks like the result of a `.map` operation on a dataset is missing the cache when you reload the script and always runs from scratch. In the same interpreter session, it is able to find the cache and reload it. But, when you exit the interpreter and reload it, the downloading starts from scratch.
... | 65 | Cache miss during reload for datasets using image fetch utilities through map
## Describe the bug
It looks like the result of a `.map` operation on a dataset is missing the cache when you reload the script and always runs from scratch. In the same interpreter session, it is able to find the cache and reload it. But, whe... | [
-0.5388091803,
-0.1422931403,
-0.0625115633,
0.2576912344,
0.0660191029,
0.0496222414,
0.1246499345,
-0.0102835521,
0.485123843,
0.1197426617,
-0.0498226732,
0.4438686967,
0.3680506647,
-0.1765505075,
-0.1030043662,
0.1202055439,
-0.026852401,
0.1637635529,
-0.182844162,
-0.160... |
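The caching advice in this thread ("one of the objects in the function is not deterministic across sessions") can be illustrated with a toy fingerprint. This is a hedged stdlib sketch, not the actual pickle-based `Hasher` that `datasets` uses; `fetch`, `fetch_unstable`, and `session_token` are made-up names:

```python
import hashlib
import time


def fingerprint(fn, *captured):
    # Toy stand-in for the pickle-based hashing `datasets` uses to name
    # cache files: hash the function's bytecode plus any captured values.
    h = hashlib.sha256(fn.__code__.co_code)
    for value in captured:
        h.update(repr(value).encode())
    return h.hexdigest()


USER_AGENT = "datasets/2.1.0"  # module-level constant: same every session


def fetch(url):
    return (url, USER_AGENT)


session_token = time.time()  # different on every run


def fetch_unstable(url):
    return (url, session_token)


stable = fingerprint(fetch, USER_AGENT)
unstable = fingerprint(fetch_unstable, session_token)
# `stable` is reproducible across interpreter sessions, so a cached `.map`
# result can be found again; `unstable` changes each run -> cache miss.
```

This is why the suggested fix (passing the user agent in deterministically instead of computing session-dependent state inside the mapped function) makes the cache hit across reloads.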
https://github.com/huggingface/datasets/issues/4192 | load_dataset can't load local dataset,Unable to find ... | Hi! :)
I believe that should work unless `dataset_infos.json` isn't actually a dataset. For Hugging Face datasets, there is usually a file named `dataset_infos.json` which contains metadata about the dataset (eg. the dataset citation, license, description, etc). Can you double-check that `dataset_infos.json` isn't j... |
Traceback (most recent call last):
File "/home/gs603/ahf/pretrained/model.py", line 48, in <module>
dataset = load_dataset("json",data_files="dataset/dataset_infos.json")
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1675, in load_dataset
**config_kwa... | 46 | load_dataset can't load local dataset,Unable to find ...
Traceback (most recent call last):
File "/home/gs603/ahf/pretrained/model.py", line 48, in <module>
dataset = load_dataset("json",data_files="dataset/dataset_infos.json")
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/data... | [
-0.1754118204,
0.0892572477,
-0.1238672063,
0.1317975819,
0.3044528067,
0.059617009,
0.4345732033,
0.4015061557,
0.2219167948,
0.1424890012,
0.0361839049,
0.2007317394,
-0.0687808245,
0.2588721216,
0.009246666,
0.0185479969,
-0.1723025739,
0.2523019314,
0.073971346,
-0.17420156... |
https://github.com/huggingface/datasets/issues/4192 | load_dataset can't load local dataset,Unable to find ... | Hi @ahf876828330,
As @stevhliu pointed out, the proper way to load a dataset is not trying to load its metadata file.
In your case, as the dataset script is local, you should better point to your local loading script:
```python
dataset = load_dataset("dataset/opus_books.py")
```
Please, feel free to re-ope... |
Traceback (most recent call last):
File "/home/gs603/ahf/pretrained/model.py", line 48, in <module>
dataset = load_dataset("json",data_files="dataset/dataset_infos.json")
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1675, in load_dataset
**config_kwa... | 61 | load_dataset can't load local dataset,Unable to find ...
Traceback (most recent call last):
File "/home/gs603/ahf/pretrained/model.py", line 48, in <module>
dataset = load_dataset("json",data_files="dataset/dataset_infos.json")
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/data... | [
-0.1754118204,
0.0892572477,
-0.1238672063,
0.1317975819,
0.3044528067,
0.059617009,
0.4345732033,
0.4015061557,
0.2219167948,
0.1424890012,
0.0361839049,
0.2007317394,
-0.0687808245,
0.2588721216,
0.009246666,
0.0185479969,
-0.1723025739,
0.2523019314,
0.073971346,
-0.17420156... |
https://github.com/huggingface/datasets/issues/4192 | load_dataset can't load local dataset,Unable to find ... | > Hi! :)
>
> I believe that should work unless `dataset_infos.json` isn't actually a dataset. For Hugging Face datasets, there is usually a file named `dataset_infos.json` which contains metadata about the dataset (eg. the dataset citation, license, description, etc). Can you double-check that `dataset_infos.json` i... |
Traceback (most recent call last):
File "/home/gs603/ahf/pretrained/model.py", line 48, in <module>
dataset = load_dataset("json",data_files="dataset/dataset_infos.json")
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1675, in load_dataset
**config_kwa... | 77 | load_dataset can't load local dataset,Unable to find ...
Traceback (most recent call last):
File "/home/gs603/ahf/pretrained/model.py", line 48, in <module>
dataset = load_dataset("json",data_files="dataset/dataset_infos.json")
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/data... | [
-0.1754118204,
0.0892572477,
-0.1238672063,
0.1317975819,
0.3044528067,
0.059617009,
0.4345732033,
0.4015061557,
0.2219167948,
0.1424890012,
0.0361839049,
0.2007317394,
-0.0687808245,
0.2588721216,
0.009246666,
0.0185479969,
-0.1723025739,
0.2523019314,
0.073971346,
-0.17420156... |
https://github.com/huggingface/datasets/issues/4192 | load_dataset can't load local dataset,Unable to find ... | The metadata file isn't a dataset so you can't turn it into one. You should try @albertvillanova's code snippet above (now merged in the docs [here](https://huggingface.co/docs/datasets/master/en/loading#local-loading-script)), which uses your local loading script `opus_books.py` to:
1. Download the actual dataset. ... |
Traceback (most recent call last):
File "/home/gs603/ahf/pretrained/model.py", line 48, in <module>
dataset = load_dataset("json",data_files="dataset/dataset_infos.json")
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1675, in load_dataset
**config_kwa... | 51 | load_dataset can't load local dataset,Unable to find ...
Traceback (most recent call last):
File "/home/gs603/ahf/pretrained/model.py", line 48, in <module>
dataset = load_dataset("json",data_files="dataset/dataset_infos.json")
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/data... | [
-0.1754118204,
0.0892572477,
-0.1238672063,
0.1317975819,
0.3044528067,
0.059617009,
0.4345732033,
0.4015061557,
0.2219167948,
0.1424890012,
0.0361839049,
0.2007317394,
-0.0687808245,
0.2588721216,
0.009246666,
0.0185479969,
-0.1723025739,
0.2523019314,
0.073971346,
-0.17420156... |
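The confusion in this thread, pointing `load_dataset` at `dataset_infos.json`, comes from mistaking a metadata file for data. A stdlib-only sketch of telling the two apart (the keys and numbers are illustrative, not the exact `dataset_infos.json` schema):

```python
import json

# Illustrative shape of a dataset_infos.json: one metadata record per
# config, describing the dataset rather than containing its rows.
metadata = json.loads(
    '{"en-fr": {"description": "OPUS Books parallel corpus",'
    ' "splits": {"train": {"num_examples": 1000}}}}'
)


def looks_like_metadata(obj):
    # Heuristic: every top-level value carries dataset-level info such as
    # a description or split sizes, rather than example fields.
    if not isinstance(obj, dict) or not obj:
        return False
    return all(
        isinstance(v, dict) and ("description" in v or "splits" in v)
        for v in obj.values()
    )


data_rows = [{"en": "Hello", "fr": "Bonjour"}]  # what actual data looks like
print(looks_like_metadata(metadata), looks_like_metadata(data_rows))
```

If the file describes configs and splits rather than holding examples, the fix is the one given in the thread: load the local script (`load_dataset("dataset/opus_books.py")`) and let it download the real data.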
https://github.com/huggingface/datasets/issues/4291 | Dataset Viewer issue for strombergnlp/ipm_nel : preview is empty, no error message | Hi @leondz, thanks for reporting.
Indeed, the dataset viewer relies on the dataset being streamable (passing `streaming=True` to `load_dataset`). Whereas most of the datasets are streamable out of the box (thanks to our implementation of streaming), there are still some exceptions.
In particular, in your case, th... | ### Link
https://huggingface.co/datasets/strombergnlp/ipm_nel/viewer/ipm_nel/train
### Description
The viewer is blank. I tried my best to emulate a dataset with a working viewer, but this one just doesn't seem to want to come up. What did I miss?
### Owner
Yes | 103 | Dataset Viewer issue for strombergnlp/ipm_nel : preview is empty, no error message
### Link
https://huggingface.co/datasets/strombergnlp/ipm_nel/viewer/ipm_nel/train
### Description
The viewer is blank. I tried my best to emulate a dataset with a working viewer, but this one just doesn't seem to want to come up. W... | [
-0.5941502452,
0.0154772494,
0.1116633341,
0.2606789768,
-0.152930513,
-0.1338221431,
0.2904717922,
0.2681366503,
0.0844315961,
0.2371055633,
0.0127624674,
0.1334278733,
-0.3677476943,
0.0229738876,
0.0893161371,
-0.2069136202,
-0.0901445225,
0.4387164414,
-0.1709370613,
-0.035... |
https://github.com/huggingface/datasets/issues/4291 | Dataset Viewer issue for strombergnlp/ipm_nel : preview is empty, no error message | Ah, right! The preview is working now, but this explanation is good to know, thank you. I'll prefer formats with random file access supported in datasets.utils.extract in future, and try out this fix for the tarfiles :) | ### Link
https://huggingface.co/datasets/strombergnlp/ipm_nel/viewer/ipm_nel/train
### Description
The viewer is blank. I tried my best to emulate a dataset with a working viewer, but this one just doesn't seem to want to come up. What did I miss?
### Owner
Yes | 37 | Dataset Viewer issue for strombergnlp/ipm_nel : preview is empty, no error message
### Link
https://huggingface.co/datasets/strombergnlp/ipm_nel/viewer/ipm_nel/train
### Description
The viewer is blank. I tried my best to emulate a dataset with a working viewer, but this one just doesn't seem to want to come up. W... | [
-0.6053087711,
-0.0000201275,
0.0799933597,
0.2253192961,
-0.1501957178,
-0.1735897362,
0.2938115001,
0.2592690885,
0.0822659805,
0.301726222,
0.0461583324,
0.022501966,
-0.3796801269,
0.0506925099,
0.03596,
-0.1937340349,
-0.0915672556,
0.3536639512,
-0.1370633394,
-0.04723620... |
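The streamability point in this thread comes down to random file access: a TAR archive is a sequential stream of members, while a ZIP carries a central directory that lets any member be opened directly. A stdlib sketch of the difference:

```python
import io
import tarfile
import zipfile

files = {"a.txt": b"alpha", "b.txt": b"beta"}

# Write the same two members into a TAR and a ZIP, both in memory.
tar_buf = io.BytesIO()
with tarfile.open(fileobj=tar_buf, mode="w") as tar:
    for name, data in files.items():
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

zip_buf = io.BytesIO()
with zipfile.ZipFile(zip_buf, "w") as zf:
    for name, data in files.items():
        zf.writestr(name, data)

# A tar is read member by member, in order (streaming-friendly, no seeks):
tar_buf.seek(0)
with tarfile.open(fileobj=tar_buf) as tar:
    names_in_order = [member.name for member in tar]

# A zip has a central directory, so any member can be opened directly:
zip_buf.seek(0)
with zipfile.ZipFile(zip_buf) as zf:
    beta = zf.read("b.txt")

print(names_in_order, beta)  # ['a.txt', 'b.txt'] b'beta'
```

This is why formats with random access tend to work in the viewer out of the box, while tarball-based loaders may need the streaming-specific handling mentioned above.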
https://github.com/huggingface/datasets/issues/4287 | "NameError: name 'faiss' is not defined" on `.add_faiss_index` when `device` is not None | So I managed to solve this by adding a missing `import faiss` in the `@staticmethod` defined in https://github.com/huggingface/datasets/blob/f51b6994db27ea69261ef919fb7775928f9ec10b/src/datasets/search.py#L305, triggered from https://github.com/huggingface/datasets/blob/f51b6994db27ea69261ef919fb7775928f9ec10b/src/data... | ## Describe the bug
When using `datasets` to calculate the FAISS indices of a dataset, the exception `NameError: name 'faiss' is not defined` is triggered when trying to calculate those on a device (GPU), so `.add_faiss_index(..., device=0)` fails with that exception.
All that assuming that `datasets` is properly... | 102 | "NameError: name 'faiss' is not defined" on `.add_faiss_index` when `device` is not None
## Describe the bug
When using `datasets` to calculate the FAISS indices of a dataset, the exception `NameError: name 'faiss' is not defined` is triggered when trying to calculate those on a device (GPU), so `.add_faiss_index(... | [
-0.2677792013,
-0.1930617243,
-0.0503546819,
0.2116217315,
0.2999023795,
0.0705516487,
0.6779098511,
0.3354715109,
0.1389086396,
0.4872550368,
0.1502405405,
0.3312829435,
0.1982175112,
-0.3838970661,
-0.1113043576,
0.026361268,
0.2100861073,
0.2211518586,
0.0979964212,
0.011295... |
https://github.com/huggingface/datasets/issues/4287 | "NameError: name 'faiss' is not defined" on `.add_faiss_index` when `device` is not None | Adding here the complete error traceback!
```
Traceback (most recent call last):
File "/home/alvarobartt/lol.py", line 12, in <module>
ds_with_embeddings.add_faiss_index(column='embeddings', device=0) # default `device=None`
File "/home/alvarobartt/.local/lib/python3.9/site-packages/datasets/arrow_datase... | ## Describe the bug
When using `datasets` to calculate the FAISS indices of a dataset, the exception `NameError: name 'faiss' is not defined` is triggered when trying to calculate those on a device (GPU), so `.add_faiss_index(..., device=0)` fails with that exception.
All that assuming that `datasets` is properly... | 66 | "NameError: name 'faiss' is not defined" on `.add_faiss_index` when `device` is not None
## Describe the bug
When using `datasets` to calculate the FAISS indices of a dataset, the exception `NameError: name 'faiss' is not defined` is triggered when trying to calculate those on a device (GPU), so `.add_faiss_index(... | [
-0.2677792013,
-0.1930617243,
-0.0503546819,
0.2116217315,
0.2999023795,
0.0705516487,
0.6779098511,
0.3354715109,
0.1389086396,
0.4872550368,
0.1502405405,
0.3312829435,
0.1982175112,
-0.3838970661,
-0.1113043576,
0.026361268,
0.2100861073,
0.2211518586,
0.0979964212,
0.011295... |
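The `NameError` fix described in this thread (a missing `import faiss` inside a `@staticmethod`) follows a general pattern: a lazy import bound to a name in one method is local to that method and invisible elsewhere in the class. A runnable toy reproduction, with `json` standing in for `faiss` so the sketch runs without the library:

```python
# Toy reproduction of the bug pattern only; `engine`/`Index` are made-up
# names, and `json` stands in for `faiss`.
class Index:
    def build_cpu(self):
        import json as engine  # lazy import, local to this method
        return engine.dumps({"ok": True})

    @staticmethod
    def build_gpu_buggy():
        # BUG: `engine` was never imported here -> NameError at call time.
        return engine.dumps({"ok": True})

    @staticmethod
    def build_gpu_fixed():
        import json as engine  # the fix: repeat the lazy import
        return engine.dumps({"ok": True})


print(Index().build_cpu())  # {"ok": true}
try:
    Index.build_gpu_buggy()
except NameError as err:
    print("NameError:", err)
print(Index.build_gpu_fixed())  # {"ok": true}
```

The error only appears on the GPU code path because that is the only path going through the method that lacked its own import, which matches the `device=0`-only failure reported here.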
https://github.com/huggingface/datasets/issues/4276 | OpenBookQA has missing and inconsistent field names | Thanks for reporting, @vblagoje.
Indeed, I noticed some of these issues while reviewing this PR:
- #4259
This is in my TODO list. | ## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'fact1': row['fact1'],
- 'humanScore': row['humanSc... | 23 | OpenBookQA has missing and inconsistent field names
## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'f... | [
-0.1009263843,
0.2100123912,
-0.0518254675,
0.2251940221,
-0.1097796932,
-0.0576233305,
0.2680732906,
0.3371189237,
-0.0792645589,
0.2063141167,
0.1244209558,
0.5698289275,
0.4415481985,
0.3652537167,
-0.0546431914,
-0.1742747575,
0.3567141891,
0.0840137601,
0.1305603534,
-0.09... |
https://github.com/huggingface/datasets/issues/4276 | OpenBookQA has missing and inconsistent field names | On the other hand, I am not sure if we should always preserve the original nested structure. I think we should also consider other factors as convenience or consistency.
For example, other datasets also flatten "question.stem" into "question":
- ai2_arc:
```python
question = data["question"]["stem"]
choice... | ## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'fact1': row['fact1'],
- 'humanScore': row['humanSc... | 132 | OpenBookQA has missing and inconsistent field names
## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'f... | [
-0.1097743586,
0.1326161772,
-0.0441396423,
0.251501888,
-0.2379210591,
-0.0238394011,
0.2634883523,
0.3610416353,
0.006400961,
0.1764746308,
0.096433647,
0.4849393666,
0.4549894035,
0.4911120832,
-0.148104012,
-0.160447374,
0.3868280947,
0.0276735201,
0.2539488673,
-0.03565153... |
https://github.com/huggingface/datasets/issues/4276 | OpenBookQA has missing and inconsistent field names | @albertvillanova I agree that we should be consistent. In the last month, I have come across tons of code that deals with OpenBookQA and CommonSenseQA and all of that code relies on the original data format structure. We can't expect users to adopt HF Datasets if we arbitrarily change the structure of the format just b... | ## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'fact1': row['fact1'],
- 'humanScore': row['humanSc... | 107 | OpenBookQA has missing and inconsistent field names
## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'f... | [
-0.0696031824,
0.2296478003,
-0.0100900801,
0.0402288064,
-0.1834874302,
-0.1042137891,
0.319383949,
0.3728416562,
-0.0084602879,
0.1532329619,
0.0812204257,
0.3126558065,
0.5082780123,
0.2665573657,
-0.1158585623,
-0.245972991,
0.3876905143,
0.1590896994,
0.0666596964,
-0.1644... |
https://github.com/huggingface/datasets/issues/4276 | OpenBookQA has missing and inconsistent field names | I'm opening a PR that adds the missing fields.
Let's agree on the feature structure: @lhoestq @mariosasko @polinaeterna | ## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'fact1': row['fact1'],
- 'humanScore': row['humanSc... | 18 | OpenBookQA has missing and inconsistent field names
## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'f... | [
-0.1227659434,
0.1294834018,
-0.0531327948,
0.2436132431,
-0.1115282178,
-0.0573373921,
0.1977688372,
0.2891437709,
-0.115660876,
0.2128254175,
0.1821909398,
0.4630895257,
0.37812379,
0.3454271257,
-0.0517650545,
-0.3028847873,
0.3833729029,
0.1199520379,
0.2903484404,
-0.00973... |
https://github.com/huggingface/datasets/issues/4276 | OpenBookQA has missing and inconsistent field names | IMO we should always try to preserve the original structure unless there is a good reason not to (and I don't see one in this case). | ## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'fact1': row['fact1'],
- 'humanScore': row['humanSc... | 26 | OpenBookQA has missing and inconsistent field names
## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'f... | [
-0.1161505356,
0.1762668341,
-0.0998141393,
0.2082637399,
-0.117515862,
-0.067944631,
0.2634468973,
0.3842664659,
-0.0682339072,
0.2268134803,
0.0829303637,
0.5433103442,
0.3696924746,
0.374489814,
-0.0723104253,
-0.1654941887,
0.2737470865,
0.0790645778,
0.0742814839,
-0.07375... |
https://github.com/huggingface/datasets/issues/4276 | OpenBookQA has missing and inconsistent field names | I agree with @mariosasko . The transition to the original format could be done in one PR for the next minor release, clearly documenting all dataset changes just as @albertvillanova outlined them above and perhaps even providing a per dataset util method to convert the new valid format to the old for backward compatibi... | ## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'fact1': row['fact1'],
- 'humanScore': row['humanSc... | 81 | OpenBookQA has missing and inconsistent field names
## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'f... | [
-0.1998205185,
0.2370988578,
-0.0784825236,
0.0383832976,
-0.0463750511,
-0.1121220514,
0.261579901,
0.4792057276,
-0.0928937942,
0.1985219419,
0.0985915437,
0.5298961401,
0.2929408848,
0.4118284881,
-0.169930324,
-0.23902376,
0.3220477998,
0.0963060707,
0.0373929888,
-0.029769... |
https://github.com/huggingface/datasets/issues/4276 | OpenBookQA has missing and inconsistent field names | I don't have a strong opinion on this, besides the fact that whatever decision we agree on, should be applied to all datasets.
There is always the tension between:
- preserving each dataset original structure (which has the advantage of not forcing users to learn other structure for the same dataset),
- and on th... | ## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'fact1': row['fact1'],
- 'humanScore': row['humanSc... | 161 | OpenBookQA has missing and inconsistent field names
## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'f... | [
-0.244081974,
0.2624767125,
-0.0215143748,
0.2153030038,
-0.2133324444,
-0.063490659,
0.3279465735,
0.3787684143,
0.0790012851,
0.2448695004,
0.0920588821,
0.5058540702,
0.2577792108,
0.5256162882,
-0.0657963902,
-0.0864537507,
0.3222070932,
-0.0016847457,
0.1259201318,
-0.0332... |
https://github.com/huggingface/datasets/issues/4276 | OpenBookQA has missing and inconsistent field names | @albertvillanova, thanks for the detailed answer and the new perspectives. I understand the friction for the best design approach much better now. Ultimately, it is essential to include all the missing fields and the correct data first. Whatever approach is determined to be optimal is important but not as crucial once... | ## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'fact1': row['fact1'],
- 'humanScore': row['humanSc... | 69 | OpenBookQA has missing and inconsistent field names
## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'f... | [
-0.0945750922,
0.164955914,
-0.1062555909,
0.1807774007,
-0.1605589986,
-0.1824282259,
0.2929793894,
0.3721559942,
-0.0281362962,
0.237318486,
0.0815383717,
0.4941643178,
0.3443228602,
0.3875853419,
-0.1219676882,
-0.1653728038,
0.1791916639,
0.1058904976,
0.0738654286,
-0.0472... |
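The flattening debated in this thread (`question.stem` becoming `question_stem`) and its inverse can be sketched in a few lines of plain Python; `flatten`/`unflatten` are illustrative helpers, not `datasets` API:

```python
# Illustrative helpers, not datasets API: flatten one level of nesting
# ("question.stem" -> "question_stem") and restore the nested structure.
def flatten(example, sep="_"):
    out = {}
    for key, value in example.items():
        if isinstance(value, dict):
            for sub_key, sub_value in value.items():
                out[f"{key}{sep}{sub_key}"] = sub_value
        else:
            out[key] = value
    return out


def unflatten(example, parent="question", sep="_"):
    out, nested = {}, {}
    prefix = parent + sep
    for key, value in example.items():
        if key.startswith(prefix):
            nested[key[len(prefix):]] = value
        else:
            out[key] = value
    if nested:
        out[parent] = nested
    return out


row = {"question": {"stem": "Which gas do plants absorb?"}, "answerKey": "B"}
flat = flatten(row)
print(flat)
print(unflatten(flat) == row)  # True
```

A round-trip helper like this is essentially the backward-compatibility shim suggested in the discussion: whichever feature structure is chosen, the other can be recovered mechanically.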
https://github.com/huggingface/datasets/issues/4271 | A typo in docs of datasets.disable_progress_bar | Hi! Thanks for catching and reporting the typo, a PR has been opened to fix it :) | ## Describe the bug
in the docs of V2.1.0 datasets.disable_progress_bar, we should replace "enable" with "disable". | 17 | A typo in docs of datasets.disable_progress_bar
## Describe the bug
in the docs of V2.1.0 datasets.disable_progress_bar, we should replace "enable" with "disable".
Hi! Thanks for catching and reporting the typo, a PR has been opened to fix it :) | [
-0.2229197919,
0.1155907437,
-0.1957840174,
-0.2332064658,
0.1984051019,
-0.018780956,
0.2855718732,
0.2007148415,
-0.1886951476,
0.3785544336,
0.2363237292,
0.3476401567,
0.1750877202,
0.3231857717,
-0.1807082295,
0.05160008,
0.0662440285,
0.2353480756,
-0.1192478538,
0.097619... |
https://github.com/huggingface/datasets/issues/4268 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered | It would help a lot to be able to preview the dataset - I'd like to see if the pronunciations are in the dataset, eg. for ["word"](https://en.wiktionary.org/wiki/word),
Pronunciation
([Received Pronunciation](https://en.wikipedia.org/wiki/Received_Pronunciation)) [IPA](https://en.wiktionary.org/wiki/Wiktionary:Inte... | ## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
## Expected results
A clear and concise description of the expected results... | 38 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered
## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
#... | [
-0.2923785746,
-0.0414035209,
-0.1572302878,
0.2814292014,
0.0499909259,
-0.0285517592,
0.1625797749,
0.5453563929,
0.2865177393,
0.0777813271,
-0.168789044,
0.2441957742,
-0.1849417686,
-0.0490182601,
-0.0428375825,
0.0211599749,
-0.0528010242,
-0.1682505608,
-0.2153728455,
-0... |
https://github.com/huggingface/datasets/issues/4268 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered | Hi @i-am-neo, thanks for reporting.
Normally this dataset should be private and not accessible for public use. @cakiki, @lvwerra, any reason why it is public? I see many other Wikimedia datasets are also public.
Also note that last commit "Add metadata" (https://huggingface.co/datasets/bigscience-catalogue-lm-dat... | ## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
## Expected results
A clear and concise description of the expected results... | 100 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered
## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
#... | [
-0.2923785746,
-0.0414035209,
-0.1572302878,
0.2814292014,
0.0499909259,
-0.0285517592,
0.1625797749,
0.5453563929,
0.2865177393,
0.0777813271,
-0.168789044,
0.2441957742,
-0.1849417686,
-0.0490182601,
-0.0428375825,
0.0211599749,
-0.0528010242,
-0.1682505608,
-0.2153728455,
-0... |
https://github.com/huggingface/datasets/issues/4268 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered | Ah this must be a bug introduced at creation time since the repos were created programmatically; I'll go ahead and make them private; sorry about that! | ## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
## Expected results
A clear and concise description of the expected results... | 26 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered
## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
#... | [
-0.2923785746,
-0.0414035209,
-0.1572302878,
0.2814292014,
0.0499909259,
-0.0285517592,
0.1625797749,
0.5453563929,
0.2865177393,
0.0777813271,
-0.168789044,
0.2441957742,
-0.1849417686,
-0.0490182601,
-0.0428375825,
0.0211599749,
-0.0528010242,
-0.1682505608,
-0.2153728455,
-0... |
https://github.com/huggingface/datasets/issues/4268 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered | All datasets are private now.
Re: that bug, I think we're currently avoiding it by skipping verifications (i.e. `ignore_verifications=True`).
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
## Expected results
A clear and concise description of the expected results... | 18 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered
## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
#... | [
-0.2923785746,
-0.0414035209,
-0.1572302878,
0.2814292014,
0.0499909259,
-0.0285517592,
0.1625797749,
0.5453563929,
0.2865177393,
0.0777813271,
-0.168789044,
0.2441957742,
-0.1849417686,
-0.0490182601,
-0.0428375825,
0.0211599749,
-0.0528010242,
-0.1682505608,
-0.2153728455,
-0... |
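The `ignore_verifications=True` workaround mentioned above skips the recorded-checksum check during loading. A minimal stdlib sketch of that verification logic (a hypothetical helper, not the actual `datasets` internals):

```python
import hashlib

def verify_download(data: bytes, expected_sha256: str, ignore_verifications: bool = False) -> bool:
    """Compare a payload's checksum against the recorded one.

    Loosely mirrors what load_dataset does with recorded checksums:
    a mismatch raises unless verification is explicitly skipped.
    """
    actual = hashlib.sha256(data).hexdigest()
    if actual == expected_sha256:
        return True
    if ignore_verifications:
        return False  # mismatch noticed, but tolerated
    raise ValueError(f"NonMatchingChecksumError: expected {expected_sha256}, got {actual}")

payload = b"wiktionary dump"
good = hashlib.sha256(payload).hexdigest()
print(verify_download(payload, good))                                  # True
print(verify_download(payload, "0" * 64, ignore_verifications=True))   # False
```

This illustrates why stale recorded checksums break loading and why skipping verification is only a stopgap, not a fix.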
https://github.com/huggingface/datasets/issues/4268 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered | Thanks a lot, @cakiki.
@i-am-neo, I'm closing this issue for now because the dataset is not publicly available yet. Just stay tuned, as we will soon release all the BigScience open-license datasets. | ## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
## Expected results
A clear and concise description of the expected results... | 32 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered
## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
#... | [
-0.2923785746,
-0.0414035209,
-0.1572302878,
0.2814292014,
0.0499909259,
-0.0285517592,
0.1625797749,
0.5453563929,
0.2865177393,
0.0777813271,
-0.168789044,
0.2441957742,
-0.1849417686,
-0.0490182601,
-0.0428375825,
0.0211599749,
-0.0528010242,
-0.1682505608,
-0.2153728455,
-0... |
https://github.com/huggingface/datasets/issues/4268 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered | Thanks for letting me know, @albertvillanova @cakiki.
Any chance of having a subset alpha version in the meantime?
I only need two dicts out of wiktionary: 1) phoneme(as key): word, and 2) word(as key): its phonemes.
Would like to use it for a mini-poc [Robust ASR](https://github.com/huggingface/transformers/issu... | ## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
## Expected results
A clear and concise description of the expected results... | 88 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered
## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
#... | [
-0.2923785746,
-0.0414035209,
-0.1572302878,
0.2814292014,
0.0499909259,
-0.0285517592,
0.1625797749,
0.5453563929,
0.2865177393,
0.0777813271,
-0.168789044,
0.2441957742,
-0.1849417686,
-0.0490182601,
-0.0428375825,
0.0211599749,
-0.0528010242,
-0.1682505608,
-0.2153728455,
-0... |
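Building the two lookups requested above (phoneme → words and word → phonemes) needs only plain dictionaries once the entries are extracted; a sketch over hypothetical Wiktionary-style records (the field names are assumptions, not the dataset's real schema):

```python
from collections import defaultdict

# Hypothetical entries; the real Wiktionary-derived dataset's schema may differ.
entries = [
    {"word": "word", "ipa": ["w", "ɜː", "d"]},
    {"word": "ward", "ipa": ["w", "ɔː", "d"]},
]

word_to_phonemes = {}
phoneme_to_words = defaultdict(set)
for entry in entries:
    word_to_phonemes[entry["word"]] = entry["ipa"]
    for phoneme in entry["ipa"]:
        phoneme_to_words[phoneme].add(entry["word"])

print(word_to_phonemes["word"])       # ['w', 'ɜː', 'd']
print(sorted(phoneme_to_words["w"]))  # ['ward', 'word']
```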
https://github.com/huggingface/datasets/issues/4268 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered | Hey @i-am-neo,
Cool to hear that you're working on Robust ASR! Feel free to drop me a mail :-) | ## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
## Expected results
A clear and concise description of the expected results... | 19 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered
## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
#... | [
-0.2923785746,
-0.0414035209,
-0.1572302878,
0.2814292014,
0.0499909259,
-0.0285517592,
0.1625797749,
0.5453563929,
0.2865177393,
0.0777813271,
-0.168789044,
0.2441957742,
-0.1849417686,
-0.0490182601,
-0.0428375825,
0.0211599749,
-0.0528010242,
-0.1682505608,
-0.2153728455,
-0... |
https://github.com/huggingface/datasets/issues/4268 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered | @i-am-neo This particular subset of the dataset was taken from the [CirrusSearch dumps](https://dumps.wikimedia.org/other/cirrussearch/current/)
You're specifically after the [enwiktionary-20220425-cirrussearch-content.json.gz](https://dumps.wikimedia.org/other/cirrussearch/current/enwiktionary-20220425-cirrussearch-c... | ## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
## Expected results
A clear and concise description of the expected results... | 19 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered
## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
#... | [
-0.2923785746,
-0.0414035209,
-0.1572302878,
0.2814292014,
0.0499909259,
-0.0285517592,
0.1625797749,
0.5453563929,
0.2865177393,
0.0777813271,
-0.168789044,
0.2441957742,
-0.1849417686,
-0.0490182601,
-0.0428375825,
0.0211599749,
-0.0528010242,
-0.1682505608,
-0.2153728455,
-0... |
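The CirrusSearch dumps pointed to above are gzipped newline-delimited JSON in Elasticsearch bulk format, with action/metadata lines interleaved between content documents. A stdlib-only sketch of parsing such a file (the sample's field names are illustrative, not the exact dump schema):

```python
import gzip
import io
import json

# A tiny stand-in for a CirrusSearch dump: NDJSON where index-action lines
# alternate with content documents. Field names here are assumptions.
raw_lines = [
    {"index": {"_id": "1"}},
    {"title": "word", "text": "A unit of language ..."},
    {"index": {"_id": "2"}},
    {"title": "ward", "text": "A division of a city ..."},
]
buf = io.BytesIO()
with gzip.open(buf, "wt", encoding="utf-8") as f:
    for line in raw_lines:
        f.write(json.dumps(line) + "\n")

buf.seek(0)
docs = []
with gzip.open(buf, "rt", encoding="utf-8") as f:
    for line in f:
        obj = json.loads(line)
        if "index" not in obj:  # skip the action/metadata lines
            docs.append(obj)

print([d["title"] for d in docs])  # ['word', 'ward']
```

Reading line by line through `gzip.open` keeps memory flat even for the multi-gigabyte real dumps.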
https://github.com/huggingface/datasets/issues/4268 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered | thanks @cakiki ! <del>I could access the gz file yesterday (but neglected to tuck it away somewhere safe), and today the link is throwing a 404. Can you help? </del> Never mind, got it! | ## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
## Expected results
A clear and concise description of the expected results... | 34 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered
## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
#... | [
-0.2923785746,
-0.0414035209,
-0.1572302878,
0.2814292014,
0.0499909259,
-0.0285517592,
0.1625797749,
0.5453563929,
0.2865177393,
0.0777813271,
-0.168789044,
0.2441957742,
-0.1849417686,
-0.0490182601,
-0.0428375825,
0.0211599749,
-0.0528010242,
-0.1682505608,
-0.2153728455,
-0... |
https://github.com/huggingface/datasets/issues/4261 | data leakage in `webis/conclugen` dataset | Hi @xflashxx, thanks for reporting.
Please note that this dataset was generated and shared by Webis Group: https://huggingface.co/webis
We are contacting the dataset owners to inform them about the issue you found. We'll keep you updated on their reply. | ## Describe the bug
Some samples (argument-conclusion pairs) in the *training* split of the `webis/conclugen` dataset are present in both the *validation* and *test* splits, creating data leakage and distorting model results.
Furthermore, all splits contain duplicate samples.
## Steps to reproduce the bug
```pyth... | 39 | data leakage in `webis/conclugen` dataset
## Describe the bug
Some samples (argument-conclusion pairs) in the *training* split of the `webis/conclugen` dataset are present in both the *validation* and *test* splits, creating data leakage and distorting model results.
Furthermore, all splits contain duplicate sample... | [
-0.36332196,
-0.0242240075,
-0.1429641396,
0.402130723,
-0.3650673032,
-0.0415427275,
0.2132989019,
0.3561701775,
-0.1063962504,
-0.1708764583,
-0.1198273748,
0.3781647682,
0.1482660472,
-0.1364054382,
0.0034131473,
-0.0325714909,
0.0117843514,
0.0138980728,
-0.1497119665,
-0.1... |
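The leakage described above can be detected with a plain set intersection over (argument, conclusion) pairs; a sketch with hypothetical rows standing in for the real dataset:

```python
# Hypothetical rows standing in for argument–conclusion pairs; the check only
# needs hashable (argument, conclusion) keys, not the real dataset.
train = [("arg A", "concl A"), ("arg B", "concl B"), ("arg B", "concl B")]
validation = [("arg B", "concl B"), ("arg C", "concl C")]

train_set, valid_set = set(train), set(validation)
leaked = train_set & valid_set            # pairs present in both splits
dup_in_train = len(train) - len(train_set)  # duplicates within one split

print(len(leaked), dup_in_train)  # 1 1
```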
https://github.com/huggingface/datasets/issues/4261 | data leakage in `webis/conclugen` dataset | Thanks for reporting this @xflashxx. I'll have a look and get back to you on this. | ## Describe the bug
Some samples (argument-conclusion pairs) in the *training* split of the `webis/conclugen` dataset are present in both the *validation* and *test* splits, creating data leakage and distorting model results.
Furthermore, all splits contain duplicate samples.
## Steps to reproduce the bug
```pyth... | 16 | data leakage in `webis/conclugen` dataset
## Describe the bug
Some samples (argument-conclusion pairs) in the *training* split of the `webis/conclugen` dataset are present in both the *validation* and *test* splits, creating data leakage and distorting model results.
Furthermore, all splits contain duplicate sample... | [
-0.36332196,
-0.0242240075,
-0.1429641396,
0.402130723,
-0.3650673032,
-0.0415427275,
0.2132989019,
0.3561701775,
-0.1063962504,
-0.1708764583,
-0.1198273748,
0.3781647682,
0.1482660472,
-0.1364054382,
0.0034131473,
-0.0325714909,
0.0117843514,
0.0138980728,
-0.1497119665,
-0.1... |
https://github.com/huggingface/datasets/issues/4261 | data leakage in `webis/conclugen` dataset | Hi @xflashxx and @albertvillanova,
I have updated the files with de-duplicated splits. Apparently the debate portals from which part of the examples were sourced had unique timestamps for some examples (up to 6%; updated counts in the README) without any actual content updates that led to "new" items. The length of...
Some samples (argument-conclusion pairs) in the *training* split of the `webis/conclugen` dataset are present in both the *validation* and *test* splits, creating data leakage and distorting model results.
Furthermore, all splits contain duplicate samples.
## Steps to reproduce the bug
```pyth... | 164 | data leakage in `webis/conclugen` dataset
## Describe the bug
Some samples (argument-conclusion pairs) in the *training* split of the `webis/conclugen` dataset are present in both the *validation* and *test* splits, creating data leakage and distorting model results.
Furthermore, all splits contain duplicate sample... | [
-0.36332196,
-0.0242240075,
-0.1429641396,
0.402130723,
-0.3650673032,
-0.0415427275,
0.2132989019,
0.3561701775,
-0.1063962504,
-0.1708764583,
-0.1198273748,
0.3781647682,
0.1482660472,
-0.1364054382,
0.0034131473,
-0.0325714909,
0.0117843514,
0.0138980728,
-0.1497119665,
-0.1... |
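De-duplicating a split while keeping first-occurrence order, as in the fix above, can be sketched with `dict.fromkeys` (dict keys preserve insertion order in Python 3.7+); the rows are hypothetical tuples:

```python
# dict.fromkeys acts as an ordered set: first occurrence wins, order is kept.
rows = [("a1", "c1"), ("a2", "c2"), ("a1", "c1"), ("a3", "c3")]
deduped = list(dict.fromkeys(rows))
print(deduped)  # [('a1', 'c1'), ('a2', 'c2'), ('a3', 'c3')]
```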
https://github.com/huggingface/datasets/issues/4261 | data leakage in `webis/conclugen` dataset | Thanks @shahbazsyed for your fast fix.
As a side note:
- Your email appearing as Point of Contact in the dataset README has a typo: @uni.leipzig.de instead of @uni-leipzig.de
- Your commits on the Hub are not linked to your profile on the Hub: this is because we use the email address to make this link; the email a... | ## Describe the bug
Some samples (argument-conclusion pairs) in the *training* split of the `webis/conclugen` dataset are present in both the *validation* and *test* splits, creating data leakage and distorting model results.
Furthermore, all splits contain duplicate samples.
## Steps to reproduce the bug
```pyth... | 74 | data leakage in `webis/conclugen` dataset
## Describe the bug
Some samples (argument-conclusion pairs) in the *training* split of the `webis/conclugen` dataset are present in both the *validation* and *test* splits, creating data leakage and distorting model results.
Furthermore, all splits contain duplicate sample... | [
-0.36332196,
-0.0242240075,
-0.1429641396,
0.402130723,
-0.3650673032,
-0.0415427275,
0.2132989019,
0.3561701775,
-0.1063962504,
-0.1708764583,
-0.1198273748,
0.3781647682,
0.1482660472,
-0.1364054382,
0.0034131473,
-0.0325714909,
0.0117843514,
0.0138980728,
-0.1497119665,
-0.1... |
https://github.com/huggingface/datasets/issues/4248 | conll2003 dataset loads original data. | Thanks for reporting @sue99.
Unfortunately, I'm not able to reproduce your problem:
```python
In [1]: import datasets
...: from datasets import load_dataset
...: dataset = load_dataset("conll2003")
In [2]: dataset
Out[2]:
DatasetDict({
train: Dataset({
features: ['id', 'tokens', 'pos_ta... | ## Describe the bug
I load `conll2003` dataset to use refined data like [this](https://huggingface.co/datasets/conll2003/viewer/conll2003/train) preview, but it is original data that contains `'-DOCSTART- -X- -X- O'` text.
Is this a bug or should I use another dataset_name like `lhoestq/conll2003` ?
## Steps to... | 158 | conll2003 dataset loads original data.
## Describe the bug
I load `conll2003` dataset to use refined data like [this](https://huggingface.co/datasets/conll2003/viewer/conll2003/train) preview, but it is original data that contains `'-DOCSTART- -X- -X- O'` text.
Is this a bug or should I use another dataset_name ... | [
0.0448263213,
0.2318243235,
0.0067998567,
0.4537388086,
0.0293568652,
0.0392917953,
0.3254444003,
0.3471554816,
-0.4062146246,
-0.0317827091,
-0.1551049054,
0.396732986,
0.0610170625,
0.2233855128,
0.008579554,
0.1582965702,
0.2671114206,
0.3820191026,
-0.0271611772,
-0.1606874... |
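Filtering the `-DOCSTART-` document separators mentioned above out of raw CoNLL lines is a one-pass check; a sketch over a few sample lines:

```python
# Lines starting with "-DOCSTART-" are document markers, not tokens;
# blank lines separate sentences. Sample lines are from the raw CoNLL format.
raw = [
    "-DOCSTART- -X- -X- O",
    "EU NNP B-NP B-ORG",
    "rejects VBZ B-VP O",
    "",
    "-DOCSTART- -X- -X- O",
    "Peter NNP B-NP B-PER",
]
tokens = [line.split()[0] for line in raw
          if line and not line.startswith("-DOCSTART-")]
print(tokens)  # ['EU', 'rejects', 'Peter']
```

This is essentially the refinement the `conll2003` loading script applies, which is why the processed dataset should not contain marker rows.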
https://github.com/huggingface/datasets/issues/4247 | The data preview of XGLUE | Thanks for reporting @czq1999.
Note that the dataset viewer uses the dataset in streaming mode and that not all datasets support streaming yet.
That is the case for XGLUE dataset (as the error message points out): this must be refactored to support streaming. | It seems that something is wrong with the data preview of XGLUE | 43 | The data preview of XGLUE
It seems that something is wrong with the data preview of XGLUE
Thanks for reporting @czq1999.
Note that the dataset viewer uses the dataset in streaming mode and that not all datasets support streaming yet.
That is the case for XGLUE dataset (as the error message points out): this mus... | [
-0.5521460772,
-0.2582035363,
-0.0797492862,
0.0360034965,
0.12222258,
-0.0657666773,
0.1930060834,
0.3570640087,
-0.1430572718,
0.2119454741,
0.055040326,
0.1502947807,
-0.0410023369,
0.2117510885,
-0.1211941168,
-0.1035284176,
-0.0353827253,
0.1561267525,
-0.0141941961,
-0.03... |
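The streaming mode the viewer relies on amounts to pulling the first few rows from a lazy iterator instead of materializing the whole split; a toy sketch with a generator standing in for a streamed dataset:

```python
from itertools import islice

def rows():
    """Stand-in for a streamed dataset: yields rows lazily, so a preview
    never has to download or materialize the full split."""
    for i in range(10_000_000):
        yield {"id": i, "text": f"example {i}"}

preview = list(islice(rows(), 3))
print([r["id"] for r in preview])  # [0, 1, 2]
```

Datasets whose loading scripts cannot produce rows incrementally like this (e.g. formats needing the whole archive first) are the ones whose previews fail until the script is refactored.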
https://github.com/huggingface/datasets/issues/4241 | NonMatchingChecksumError when attempting to download GLUE | Hi :)
I think your issue may be related to the older `nlp` library. I was able to download `glue` with the latest version of `datasets`. Can you try updating with:
```py
pip install -U datasets
```
Then you can download:
```py
from datasets import load_dataset
ds = load_dataset("glue", "rte")
``` | ## Describe the bug
I am trying to download the GLUE dataset from the NLP module but get an error (see below).
## Steps to reproduce the bug
```python
import nlp
nlp.__version__ # '0.2.0'
nlp.load_dataset('glue', name="rte", download_mode="force_redownload")
```
## Expected results
I expect the dataset to ... | 51 | NonMatchingChecksumError when attempting to download GLUE
## Describe the bug
I am trying to download the GLUE dataset from the NLP module but get an error (see below).
## Steps to reproduce the bug
```python
import nlp
nlp.__version__ # '0.2.0'
nlp.load_dataset('glue', name="rte", download_mode="force_redown... | [
0.101099439,
-0.0903691947,
0.0413370356,
0.3583336473,
0.1278837472,
0.0999461636,
-0.1866522282,
0.334582448,
0.4952836633,
-0.1053451374,
-0.1394158453,
0.1311296523,
-0.0658304244,
-0.1457225382,
0.0314150602,
0.1970958412,
-0.0847944096,
0.1571736187,
-0.1785757393,
0.0489... |
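The fix above boils down to running a current `datasets` release instead of the old `nlp` 0.2.0. A quick way to sanity-check a plain "X.Y.Z" version string before loading (naive numeric comparison, not a full PEP 440 parser):

```python
def version_tuple(v: str) -> tuple:
    """Turn 'X.Y.Z' into a tuple so comparisons are numeric, not lexical."""
    return tuple(int(part) for part in v.split("."))

assert version_tuple("0.2.0") < version_tuple("2.1.0")
print(version_tuple("2.1.0"))  # (2, 1, 0)
```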
https://github.com/huggingface/datasets/issues/4241 | NonMatchingChecksumError when attempting to download GLUE | This appears to work. Thank you!
| ## Describe the bug
I am trying to download the GLUE dataset from the NLP module but get an error (see below).
## Steps to reproduce the bug
```python
import nlp
nlp.__version__ # '0.2.0'
nlp.load_dataset('glue', name="rte", download_mode="force_redownload")
```
## Expected results
I expect the dataset to ... | 110 | NonMatchingChecksumError when attempting to download GLUE
## Describe the bug
I am trying to download the GLUE dataset from the NLP module but get an error (see below).
## Steps to reproduce the bug
```python
import nlp
nlp.__version__ # '0.2.0'
nlp.load_dataset('glue', name="rte", download_mode="force_redown... | [
0.101099439,
-0.0903691947,
0.0413370356,
0.3583336473,
0.1278837472,
0.0999461636,
-0.1866522282,
0.334582448,
0.4952836633,
-0.1053451374,
-0.1394158453,
0.1311296523,
-0.0658304244,
-0.1457225382,
0.0314150602,
0.1970958412,
-0.0847944096,
0.1571736187,
-0.1785757393,
0.0489... |
https://github.com/huggingface/datasets/issues/4238 | Dataset caching policy | Hi @loretoparisi, thanks for reporting.
There is an option to force the redownload of the data files (and thus not using previously download and cached data files): `load_dataset(..., download_mode="force_redownload")`.
Please, let me know if this fixes your problem.
I can confirm you that your dataset loads w... | ## Describe the bug
I cannot clean cache of my datasets files, despite I have updated the `csv` files on the repository [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences). The original file had a line with bad characters, causing the following error
```
[/usr/local/lib/python3.7/dist-packages/d... | 87 | Dataset caching policy
## Describe the bug
I cannot clean cache of my datasets files, despite I have updated the `csv` files on the repository [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences). The original file had a line with bad characters, causing the following error
```
[/usr/local/lib/... | [
0.0123936096,
0.2971666455,
-0.0013778415,
0.2417485565,
0.2187064886,
0.223115772,
0.2011682987,
0.3845502436,
0.047587432,
-0.114898473,
-0.0990488455,
-0.2024639547,
-0.043204993,
-0.1892879307,
-0.1385836452,
0.0513460636,
0.111731194,
-0.0098836049,
0.2588098049,
0.0954153... |
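The caching policy under discussion can be pictured as a cached loader that refetches only when `download_mode="force_redownload"`; a toy sketch (names are illustrative, not the real `datasets` internals):

```python
_cache = {}

def load(url, fetch, download_mode="reuse_cache_if_exists"):
    """Return cached data unless a forced redownload is requested."""
    if download_mode != "force_redownload" and url in _cache:
        return _cache[url]
    _cache[url] = fetch(url)
    return _cache[url]

calls = []
def fetch(url):
    calls.append(url)          # record each real download
    return f"data from {url}"

load("tatoeba.csv", fetch)
load("tatoeba.csv", fetch)     # served from cache; no new fetch
load("tatoeba.csv", fetch, download_mode="force_redownload")
print(len(calls))  # 2
```

The key point for the issue above: fixing the file on the Hub is not enough on its own, because the cached copy keeps being reused until a redownload is forced.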
https://github.com/huggingface/datasets/issues/4238 | Dataset caching policy | @albertvillanova thank you, it seems it still does not work using:
```python
sentences = load_dataset(
"loretoparisi/tatoeba-sentences",
data_files=data_files,
delimiter='\t',
column_names=['label', 'text'],
download_mode="force_redownload"
)
```
[This](https://colab.research.googl... | ## Describe the bug
I cannot clean cache of my datasets files, despite I have updated the `csv` files on the repository [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences). The original file had a line with bad characters, causing the following error
```
[/usr/local/lib/python3.7/dist-packages/d... | 125 | Dataset caching policy
## Describe the bug
I cannot clean cache of my datasets files, despite I have updated the `csv` files on the repository [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences). The original file had a line with bad characters, causing the following error
```
[/usr/local/lib/... | [
0.0123936096,
0.2971666455,
-0.0013778415,
0.2417485565,
0.2187064886,
0.223115772,
0.2011682987,
0.3845502436,
0.047587432,
-0.114898473,
-0.0990488455,
-0.2024639547,
-0.043204993,
-0.1892879307,
-0.1385836452,
0.0513460636,
0.111731194,
-0.0098836049,
0.2588098049,
0.0954153... |
https://github.com/huggingface/datasets/issues/4238 | Dataset caching policy | SOLVED! The problem was with the file itself; using the caching parameter helped indeed.
Thanks for helping! | ## Describe the bug
I cannot clean cache of my datasets files, despite I have updated the `csv` files on the repository [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences). The original file had a line with bad characters, causing the following error
```
[/usr/local/lib/python3.7/dist-packages/d... | 17 | Dataset caching policy
## Describe the bug
I cannot clean cache of my datasets files, despite I have updated the `csv` files on the repository [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences). The original file had a line with bad characters, causing the following error
```
[/usr/local/lib/... | [
0.0123936096,
0.2971666455,
-0.0013778415,
0.2417485565,
0.2187064886,
0.223115772,
0.2011682987,
0.3845502436,
0.047587432,
-0.114898473,
-0.0990488455,
-0.2024639547,
-0.043204993,
-0.1892879307,
-0.1385836452,
0.0513460636,
0.111731194,
-0.0098836049,
0.2588098049,
0.0954153... |
https://github.com/huggingface/datasets/issues/4237 | Common Voice 8 doesn't show datasets viewer | Thanks for reporting. I understand it's an error in the dataset script. To reproduce:
```python
>>> import datasets as ds
>>> split_names = ds.get_dataset_split_names("mozilla-foundation/common_voice_8_0", use_auth_token="**********")
Downloading builder script: 100%|██████████████████████████████████████████████... | https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0 | 151 | Common Voice 8 doesn't show datasets viewer
https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
Thanks for reporting. I understand it's an error in the dataset script. To reproduce:
```python
>>> import datasets as ds
>>> split_names = ds.get_dataset_split_names("mozilla-foundation/common_voice... | [
-0.6561425328,
-0.004003874,
0.0054231901,
0.1310704499,
0.2854442596,
0.2682386339,
0.4165343642,
0.3768597543,
0.1568085253,
0.109484069,
-0.2961753607,
-0.0419526771,
-0.2016284615,
-0.134046182,
0.0574726276,
0.144657135,
-0.1379270107,
0.2300461233,
0.4713993371,
-0.108759... |
https://github.com/huggingface/datasets/issues/4237 | Common Voice 8 doesn't show datasets viewer | Thanks for reporting @patrickvonplaten and thanks for the investigation @severo.
Unfortunately I'm not able to reproduce the error.
I think the error has to do with authentication with `huggingface_hub`, because the exception is thrown from these code lines: https://huggingface.co/datasets/mozilla-foundation/comm... | https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0 | 70 | Common Voice 8 doesn't show datasets viewer
https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
Thanks for reporting @patrickvonplaten and thanks for the investigation @severo.
Unfortunately I'm not able to reproduce the error.
I think the error has to do with authentication with `huggingface_... | [
-0.3818594217,
-0.2081790715,
0.0576239452,
0.2595281303,
0.2959937453,
0.3319201767,
0.3500491679,
0.1816910803,
0.2192064673,
0.0894685835,
-0.5308890939,
-0.1895972043,
-0.136817351,
-0.2363358289,
0.1168588027,
0.0489676781,
-0.0462487526,
0.1060479879,
0.5292467475,
-0.134... |
https://github.com/huggingface/datasets/issues/4237 | Common Voice 8 doesn't show datasets viewer | OK, thanks for digging a bit into it. Indeed, the error occurs with the dataset-viewer, but not with a normal user token, because we use an app token, and it does not have a related email!
```python
>>> from huggingface_hub import HfApi, HfFolder
>>> auth_token = "hf_app_******"
>>> t = HfApi().whoami(auth_token)... | https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0 | 105 | Common Voice 8 doesn't show datasets viewer
https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
OK, thanks for digging a bit into it. Indeed, the error occurs with the dataset-viewer, but not with a normal user token, because we use an app token, and it does not have a related email!
```python
>... | [
-0.2868961394,
-0.1114272773,
0.054076951,
0.0521808229,
0.2599704862,
0.1541742533,
0.5229559541,
0.2172734588,
0.2576426268,
0.1800563484,
-0.5013438463,
-0.2839031816,
-0.0835964233,
-0.018772684,
0.026273394,
0.0892328024,
-0.0147990091,
0.0623915866,
0.4439668953,
-0.20272... |
https://github.com/huggingface/datasets/issues/4237 | Common Voice 8 doesn't show datasets viewer | We can workaround this with
```python
email = HfApi().whoami(auth_token).get("email", "system@huggingface.co")
```
in the common voice scripts | https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0 | 16 | Common Voice 8 doesn't show datasets viewer
https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
We can workaround this with
```python
email = HfApi().whoami(auth_token).get("email", "system@huggingface.co")
```
in the common voice scripts | [
-0.4095231295,
0.0020364104,
0.0243135188,
0.146247983,
0.2652965188,
0.2654664814,
0.4983243048,
0.3220642507,
0.3277715743,
0.2375538051,
-0.5881981254,
-0.2214309275,
0.0245003197,
0.0251563713,
0.2471105903,
0.0379921906,
-0.1972731203,
0.1432468742,
0.3389151096,
-0.202468... |
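The workaround above uses `.get()` because an app token's `whoami` payload carries no "email" key, so plain indexing raises `KeyError` while `.get()` falls back cleanly; a small demo with illustrative payloads shaped like `huggingface_hub`'s output:

```python
# Illustrative payloads only; real whoami responses contain more fields.
user_info = {"name": "some-user", "email": "user@example.com"}
app_info = {"name": "dataset-viewer"}  # app tokens have no email field

def contact_email(info: dict) -> str:
    """Fall back to a system address when the token has no email."""
    return info.get("email", "system@huggingface.co")

print(contact_email(user_info))  # user@example.com
print(contact_email(app_info))   # system@huggingface.co
```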