| html_url (string, 48–51 chars) | title (string, 5–268 chars) | comments (string, 63–51.8k chars) | body (string, 0–36.2k chars, nullable ⌀) | comment_length (int64, 16–1.52k) | text (string, 164–54.1k chars) | embeddings (list) |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/3859 | Unable to download big_patent (FileNotFoundError) | Hi @slvcsl, thanks for reporting.
Yesterday we just made a patch release of our `datasets` library that fixes this issue: version 1.18.4.
https://pypi.org/project/datasets/#history
Please, feel free to update `datasets` library to the latest version:
```shell
pip install -U datasets
```
And then you should ... | ## Describe the bug
I am trying to download some splits of the big_patent dataset, using the following code:
`ds = load_dataset("big_patent", "g", split="validation", download_mode="force_redownload")
`
However, this leads to a FileNotFoundError.
FileNotFoundError Traceback (most recent... | 115 | Unable to download big_patent (FileNotFoundError)
## Describe the bug
I am trying to download some splits of the big_patent dataset, using the following code:
`ds = load_dataset("big_patent", "g", split="validation", download_mode="force_redownload")
`
However, this leads to a FileNotFoundError.
FileNotFoundE... | [
-0.4091825485,
-0.1292832047,
0.0641916394,
0.3027555048,
0.05070775,
0.0250500049,
0.4245097637,
0.5200846791,
0.5669476986,
-0.1224020794,
-0.2795088589,
0.0148194069,
-0.0127642527,
0.0916133448,
-0.0751800314,
0.06534224,
0.0613416284,
-0.0415812433,
-0.1401766986,
-0.04314... |
https://github.com/huggingface/datasets/issues/3857 | Order of dataset changes due to glob.glob. | I agree using `glob.glob` alone is bad practice because it's not deterministic. Using `sorted` is a nice solution.
Note that the `xglob` function you are referring to in the `streaming_download_manager.py` code just extends `glob.glob` for URLs - we don't change its behavior. That's why it has no `sorted()` | ## Describe the bug
After discussion with @lhoestq, just want to mention here that `glob.glob(...)` should always be used in combination with `sorted(...)` to make sure the list of files returned by `glob.glob(...)` doesn't change depending on the operating system.
There are currently multiple datasets that use `glob.g... | 48 | Order of dataset changes due to glob.glob.
## Describe the bug
After discussion with @lhoestq, just want to mention here that `glob.glob(...)` should always be used in combination with `sorted(...)` to make sure the list of files returned by `glob.glob(...)` doesn't change depending on the operating system.
There are... | [
-0.0024274425,
-0.301030159,
-0.1189568639,
0.1030937284,
0.189602375,
-0.3035665154,
0.2473371625,
-0.0992215946,
0.0932720304,
0.1500161737,
-0.2698247135,
-0.0234251171,
0.0356517807,
0.0828588158,
0.2031215429,
0.0783792064,
0.0705652609,
0.1712586135,
-0.299810499,
-0.1218... |
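The `sorted(glob.glob(...))` fix discussed in the issue above can be sketched as follows (a minimal illustration, not the actual `datasets` code):

```python
import glob

def list_data_files(pattern):
    # glob.glob returns paths in arbitrary, platform-dependent order;
    # wrapping it in sorted() makes the file list deterministic
    # across operating systems.
    return sorted(glob.glob(pattern))
```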
https://github.com/huggingface/datasets/issues/3855 | Bad error message when loading private dataset | We raise the error “FileNotFoundError: can’t find the dataset” mainly to follow security best practices (otherwise users would be able to guess which private repositories users/orgs have)
We can indeed reformulate this and add the "If this is a private repository,..." part! | ## Describe the bug
A pretty common behavior of an interaction between the Hub and datasets is the following.
An organization adds a dataset in private mode and wants to load it afterward.
```python
from transformers import load_dataset
ds = load_dataset("NewT5/dummy_data", "dummy")
```
This command th... | 46 | Bad error message when loading private dataset
## Describe the bug
A pretty common behavior of an interaction between the Hub and datasets is the following.
An organization adds a dataset in private mode and wants to load it afterward.
```python
from transformers import load_dataset
ds = load_dataset("Ne... | [
-0.1339562535,
-0.027228307,
0.0709063485,
0.5178710818,
0.1427748352,
0.0169358999,
0.5494259,
0.1765478402,
0.2100434303,
0.1455811113,
-0.4457783699,
0.0675091892,
-0.030568175,
-0.0527621508,
-0.0051457593,
-0.1111859009,
-0.0238318685,
0.2474310398,
0.0074130311,
-0.230928... |
https://github.com/huggingface/datasets/issues/3854 | load only England English dataset from common voice english dataset | Hi @amanjaiswal777,
First note that the dataset you are trying to load is deprecated: it was the Common Voice dataset release as of Dec 2020.
Currently, Common Voice dataset releases are directly hosted on the Hub, under the Mozilla Foundation organization: https://huggingface.co/mozilla-foundation
For example... | training_data = load_dataset("common_voice", "en",split='train[:250]+validation[:250]')
testing_data = load_dataset("common_voice", "en", split="test[:200]")
I'm trying to load only 8% of the English common voice data with accent == "England English." Can somebody assist me with this?
**Typical Voice Accent Prop... | 172 | load only England English dataset from common voice english dataset
training_data = load_dataset("common_voice", "en",split='train[:250]+validation[:250]')
testing_data = load_dataset("common_voice", "en", split="test[:200]")
I'm trying to load only 8% of the English common voice data with accent == "England Engl... | [
-0.1626533717,
0.0066458895,
-0.0680129081,
-0.0724874213,
0.1917739064,
0.144611612,
0.5200453401,
0.2401534915,
0.0146083059,
0.086058341,
-0.4945827723,
-0.260288775,
0.2044369876,
-0.156159386,
-0.2082676888,
-0.0381301083,
-0.1385680586,
0.2314248234,
0.318788588,
-0.35091... |
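The accent filtering the question above asks about can be sketched with a toy predicate; on a real `Dataset` you would pass it to `dataset.filter(...)`. Note the `"accent"` column name is taken from the legacy `common_voice` schema and is an assumption here:

```python
# Hypothetical sketch: keep only examples whose accent matches.
def keep_england_english(example):
    return example.get("accent") == "England English"

# Toy rows standing in for the loaded split.
rows = [
    {"sentence": "hello", "accent": "England English"},
    {"sentence": "hi", "accent": "United States English"},
]
england = [r for r in rows if keep_england_english(r)]
```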
https://github.com/huggingface/datasets/issues/3851 | Load audio dataset error | Hi @lemoner20, thanks for reporting.
I'm sorry but I cannot reproduce your problem:
```python
In [1]: from datasets import load_dataset, load_metric, Audio
...: raw_datasets = load_dataset("superb", "ks", split="train")
...: print(raw_datasets[0]["audio"])
Downloading builder script: 30.2kB [00:00, 13.0MB... | ## Load audio dataset error
Hi, when I load audio dataset following https://huggingface.co/docs/datasets/audio_process and https://github.com/huggingface/datasets/tree/master/datasets/superb,
```
from datasets import load_dataset, load_metric, Audio
raw_datasets = load_dataset("superb", "ks", split="train")
prin... | 178 | Load audio dataset error
## Load audio dataset error
Hi, when I load audio dataset following https://huggingface.co/docs/datasets/audio_process and https://github.com/huggingface/datasets/tree/master/datasets/superb,
```
from datasets import load_dataset, load_metric, Audio
raw_datasets = load_dataset("superb",... | [
-0.3769887686,
-0.1154066622,
-0.0308203232,
0.3962250352,
0.4579704106,
0.0699847117,
0.2902419865,
0.3150895238,
-0.0530284159,
0.1748975515,
-0.535605073,
0.3172013462,
-0.2974061668,
0.0095627727,
-0.0179305263,
-0.4144314528,
-0.0624870807,
0.160766542,
-0.3674670458,
0.08... |
https://github.com/huggingface/datasets/issues/3851 | Load audio dataset error | @albertvillanova Thanks for your reply. The environment info below
## Environment info
- `datasets` version: 1.18.3
- Platform: Linux-4.19.91-007.ali4000.alios7.x86_64-x86_64-with-debian-buster-sid
- Python version: 3.6.12
- PyArrow version: 6.0.1 | ## Load audio dataset error
Hi, when I load audio dataset following https://huggingface.co/docs/datasets/audio_process and https://github.com/huggingface/datasets/tree/master/datasets/superb,
```
from datasets import load_dataset, load_metric, Audio
raw_datasets = load_dataset("superb", "ks", split="train")
prin... | 27 | Load audio dataset error
## Load audio dataset error
Hi, when I load audio dataset following https://huggingface.co/docs/datasets/audio_process and https://github.com/huggingface/datasets/tree/master/datasets/superb,
```
from datasets import load_dataset, load_metric, Audio
raw_datasets = load_dataset("superb",... | [
-0.3769887686,
-0.1154066622,
-0.0308203232,
0.3962250352,
0.4579704106,
0.0699847117,
0.2902419865,
0.3150895238,
-0.0530284159,
0.1748975515,
-0.535605073,
0.3172013462,
-0.2974061668,
0.0095627727,
-0.0179305263,
-0.4144314528,
-0.0624870807,
0.160766542,
-0.3674670458,
0.08... |
https://github.com/huggingface/datasets/issues/3851 | Load audio dataset error | Thanks @lemoner20,
I cannot reproduce your issue in datasets version 1.18.3 either.
Redownloading the data file may work if you had already cached this dataset previously. Could you please try passing "force_redownload"?
```python
raw_datasets = load_dataset("superb", "ks", split="train", download_mode="f... | ## Load audio dataset error
Hi, when I load audio dataset following https://huggingface.co/docs/datasets/audio_process and https://github.com/huggingface/datasets/tree/master/datasets/superb,
```
from datasets import load_dataset, load_metric, Audio
raw_datasets = load_dataset("superb", "ks", split="train")
prin... | 40 | Load audio dataset error
## Load audio dataset error
Hi, when I load audio dataset following https://huggingface.co/docs/datasets/audio_process and https://github.com/huggingface/datasets/tree/master/datasets/superb,
```
from datasets import load_dataset, load_metric, Audio
raw_datasets = load_dataset("superb",... | [
-0.3769887686,
-0.1154066622,
-0.0308203232,
0.3962250352,
0.4579704106,
0.0699847117,
0.2902419865,
0.3150895238,
-0.0530284159,
0.1748975515,
-0.535605073,
0.3172013462,
-0.2974061668,
0.0095627727,
-0.0179305263,
-0.4144314528,
-0.0624870807,
0.160766542,
-0.3674670458,
0.08... |
https://github.com/huggingface/datasets/issues/3848 | NonMatchingChecksumError when checksum is None | Hi @jxmorris12, thanks for reporting.
The objective of `verify_checksums` is to check that both checksums are equal. Therefore if one is None and the other is non-None, they are not equal, and the function accordingly raises a NonMatchingChecksumError. That behavior is expected.
The question is: how did you gener... | I ran into the following error when adding a new dataset:
```bash
expected_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': None, 'num_bytes': 40662}}
recorded_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': 'efb4cbd3aa4a87bfaffc310ae951981cc0a36c6c71c64... | 157 | NonMatchingChecksumError when checksum is None
I ran into the following error when adding a new dataset:
```bash
expected_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': None, 'num_bytes': 40662}}
recorded_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum... | [
-0.2320743799,
0.0051179919,
-0.0630755574,
0.0114171607,
0.0393702351,
-0.0018278478,
0.0637268722,
0.2139837444,
0.2239054292,
0.2407787144,
0.2089290619,
-0.171506241,
-0.1793844551,
-0.1686307192,
-0.3194901943,
0.5820994973,
0.0091048246,
0.1567642093,
-0.0063250153,
-0.11... |
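The comparison the maintainer describes above can be sketched as follows (a minimal illustration; the real `verify_checksums` in `datasets` differs in details):

```python
class NonMatchingChecksumError(Exception):
    pass

def verify_checksums(expected, recorded):
    # An expected checksum of None is just a value that fails to match
    # any recorded non-None checksum, so the error is raised as expected.
    bad = [
        url
        for url, info in expected.items()
        if info["checksum"] != recorded.get(url, {}).get("checksum")
    ]
    if bad:
        raise NonMatchingChecksumError(f"Checksums didn't match: {bad}")
```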
https://github.com/huggingface/datasets/issues/3848 | NonMatchingChecksumError when checksum is None | Thanks @albertvillanova!
That's fine. I did run that command when I was adding a new dataset. Maybe because the command crashed in the middle, the checksum wasn't stored properly. I don't know where the bug is happening. But either (i) `verify_checksums` should properly handle this edge case, where the passed checks... | I ran into the following error when adding a new dataset:
```bash
expected_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': None, 'num_bytes': 40662}}
recorded_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': 'efb4cbd3aa4a87bfaffc310ae951981cc0a36c6c71c64... | 133 | NonMatchingChecksumError when checksum is None
I ran into the following error when adding a new dataset:
```bash
expected_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': None, 'num_bytes': 40662}}
recorded_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum... | [
-0.2320743799,
0.0051179919,
-0.0630755574,
0.0114171607,
0.0393702351,
-0.0018278478,
0.0637268722,
0.2139837444,
0.2239054292,
0.2407787144,
0.2089290619,
-0.171506241,
-0.1793844551,
-0.1686307192,
-0.3194901943,
0.5820994973,
0.0091048246,
0.1567642093,
-0.0063250153,
-0.11... |
https://github.com/huggingface/datasets/issues/3848 | NonMatchingChecksumError when checksum is None | Hi @jxmorris12,
Definitely, your `dataset_infos.json` was corrupted (it wrongly contains an expected `None` checksum).
While we further investigate how this can happen and fix it, feel free to delete your `dataset_infos.json` file and recreate it with:
```shell
datasets-cli test <your-dataset-folder> --save_infos ... | I ran into the following error when adding a new dataset:
```bash
expected_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': None, 'num_bytes': 40662}}
recorded_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': 'efb4cbd3aa4a87bfaffc310ae951981cc0a36c6c71c64... | 77 | NonMatchingChecksumError when checksum is None
I ran into the following error when adding a new dataset:
```bash
expected_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': None, 'num_bytes': 40662}}
recorded_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum... | [
-0.2320743799,
0.0051179919,
-0.0630755574,
0.0114171607,
0.0393702351,
-0.0018278478,
0.0637268722,
0.2139837444,
0.2239054292,
0.2407787144,
0.2089290619,
-0.171506241,
-0.1793844551,
-0.1686307192,
-0.3194901943,
0.5820994973,
0.0091048246,
0.1567642093,
-0.0063250153,
-0.11... |
https://github.com/huggingface/datasets/issues/3848 | NonMatchingChecksumError when checksum is None | At a higher level, also note that we are preparing the release of `datasets` version 2.0, and some docs are being updated...
In order to add a dataset, I think the most updated instructions are in our official documentation pages: https://huggingface.co/docs/datasets/share | I ran into the following error when adding a new dataset:
```bash
expected_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': None, 'num_bytes': 40662}}
recorded_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': 'efb4cbd3aa4a87bfaffc310ae951981cc0a36c6c71c64... | 41 | NonMatchingChecksumError when checksum is None
I ran into the following error when adding a new dataset:
```bash
expected_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': None, 'num_bytes': 40662}}
recorded_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum... | [
-0.2320743799,
0.0051179919,
-0.0630755574,
0.0114171607,
0.0393702351,
-0.0018278478,
0.0637268722,
0.2139837444,
0.2239054292,
0.2407787144,
0.2089290619,
-0.171506241,
-0.1793844551,
-0.1686307192,
-0.3194901943,
0.5820994973,
0.0091048246,
0.1567642093,
-0.0063250153,
-0.11... |
https://github.com/huggingface/datasets/issues/3848 | NonMatchingChecksumError when checksum is None | Hi @jxmorris12, we have discovered the bug that caused `None` checksums to wrongly appear when generating the `dataset_infos.json` file:
- #3892
The fix will be available once this PR is merged. We are planning to do our 2.0 release today.
We are also working on updating all our docs for our release today. | I ran into the following error when adding a new dataset:
```bash
expected_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': None, 'num_bytes': 40662}}
recorded_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': 'efb4cbd3aa4a87bfaffc310ae951981cc0a36c6c71c64... | 51 | NonMatchingChecksumError when checksum is None
I ran into the following error when adding a new dataset:
```bash
expected_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': None, 'num_bytes': 40662}}
recorded_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum... | [
-0.2320743799,
0.0051179919,
-0.0630755574,
0.0114171607,
0.0393702351,
-0.0018278478,
0.0637268722,
0.2139837444,
0.2239054292,
0.2407787144,
0.2089290619,
-0.171506241,
-0.1793844551,
-0.1686307192,
-0.3194901943,
0.5820994973,
0.0091048246,
0.1567642093,
-0.0063250153,
-0.11... |
https://github.com/huggingface/datasets/issues/3847 | Datasets' cache not re-used | <s>I think this is because the tokenizer is stateful and because the order in which the splits are processed is not deterministic. Because of that, the hash of the tokenizer may change for certain splits, which causes issues with caching.
To fix this we can try making the order of the splits deterministic for map.</... | ## Describe the bug
For most tokenizers I have tested (e.g. the RoBERTa tokenizer), the data preprocessing cache are not fully reused in the first few runs, although their `.arrow` cache files are in the cache directory.
## Steps to reproduce the bug
Here is a reproducer. The GPT2 tokenizer works perfectly with ca... | 55 | Datasets' cache not re-used
## Describe the bug
For most tokenizers I have tested (e.g. the RoBERTa tokenizer), the data preprocessing cache are not fully reused in the first few runs, although their `.arrow` cache files are in the cache directory.
## Steps to reproduce the bug
Here is a reproducer. The GPT2 tok... | [
-0.1935835779,
0.2576530874,
0.0603295378,
0.1439300627,
0.0886533633,
-0.0120368982,
0.2372733504,
0.2307136357,
-0.1739624739,
-0.0923761576,
0.0944176763,
0.4197394252,
0.1473710686,
-0.4988313913,
-0.0980983377,
0.0087584257,
0.1556413919,
0.2962575853,
0.2818207443,
0.0167... |
https://github.com/huggingface/datasets/issues/3847 | Datasets' cache not re-used | Actually this is not because of the order of the splits, but most likely because the tokenizer used to process the second split is in a state that has been modified by the first split.
Therefore, after reloading the first split from the cache, the second split can't be reloaded since the tokenizer hasn't seen th... | ## Describe the bug
For most tokenizers I have tested (e.g. the RoBERTa tokenizer), the data preprocessing cache are not fully reused in the first few runs, although their `.arrow` cache files are in the cache directory.
## Steps to reproduce the bug
Here is a reproducer. The GPT2 tokenizer works perfectly with ca... | 81 | Datasets' cache not re-used
## Describe the bug
For most tokenizers I have tested (e.g. the RoBERTa tokenizer), the data preprocessing cache are not fully reused in the first few runs, although their `.arrow` cache files are in the cache directory.
## Steps to reproduce the bug
Here is a reproducer. The GPT2 tok... | [
-0.1986140907,
0.2466831207,
0.0669297799,
0.1183694527,
0.0800370649,
0.0124684758,
0.27197963,
0.2143706977,
-0.1935912222,
-0.1080537736,
0.0886221528,
0.4167428613,
0.1809748411,
-0.5377520919,
-0.0903876498,
-0.0427592769,
0.1412890553,
0.2917448282,
0.2959862947,
0.064042... |
https://github.com/huggingface/datasets/issues/3847 | Datasets' cache not re-used | Sorry didn't have the bandwidth to take care of this yet - will re-assign when I'm diving into it again ! | ## Describe the bug
For most tokenizers I have tested (e.g. the RoBERTa tokenizer), the data preprocessing cache are not fully reused in the first few runs, although their `.arrow` cache files are in the cache directory.
## Steps to reproduce the bug
Here is a reproducer. The GPT2 tokenizer works perfectly with ca... | 21 | Datasets' cache not re-used
## Describe the bug
For most tokenizers I have tested (e.g. the RoBERTa tokenizer), the data preprocessing cache are not fully reused in the first few runs, although their `.arrow` cache files are in the cache directory.
## Steps to reproduce the bug
Here is a reproducer. The GPT2 tok... | [
-0.1496466398,
0.314725399,
0.0716685429,
0.1290515065,
0.1573753804,
0.0476938263,
0.2401155531,
0.2392692864,
-0.1712928414,
-0.1187091321,
0.1149297059,
0.4237597883,
0.1328928918,
-0.5434716344,
-0.0387901217,
0.0273851324,
0.1781152636,
0.2702391148,
0.2457362562,
0.016713... |
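The caching failure discussed in the comments above can be illustrated with a toy fingerprint function (an assumption-laden sketch of the idea, not the actual `datasets` hashing code):

```python
import hashlib
import pickle

def fingerprint(obj):
    # datasets fingerprints map() arguments by hashing their serialized
    # state; a tokenizer mutated while processing an earlier split
    # therefore hashes differently and misses the cache.
    return hashlib.sha256(pickle.dumps(obj)).hexdigest()
```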
https://github.com/huggingface/datasets/issues/3832 | Making Hugging Face the place to go for Graph NNs datasets | It would indeed be really great to add support for GNN datasets. Big :+1: for this initiative. | Let's make Hugging Face Datasets the central hub for GNN datasets :)
**Motivation**. Datasets are currently quite scattered and an open-source central point such as the Hugging Face Hub would be ideal to support the growth of the GNN field.
What are some datasets worth integrating into the Hugging Face hub?
In... | 17 | Making Hugging Face the place to go for Graph NNs datasets
Let's make Hugging Face Datasets the central hub for GNN datasets :)
**Motivation**. Datasets are currently quite scattered and an open-source central point such as the Hugging Face Hub would be ideal to support the growth of the GNN field.
What are som... | [
-0.0234233961,
-0.2333473265,
-0.1094195098,
-0.0523622707,
-0.0944151431,
-0.0435832962,
-0.0281152762,
0.109921284,
0.3266025186,
0.1850886643,
-0.1312944889,
-0.0844398364,
-0.2320246398,
0.4511050284,
0.4168960452,
-0.1545025557,
0.3398024738,
0.110463649,
0.2192728966,
-0.... |
https://github.com/huggingface/datasets/issues/3832 | Making Hugging Face the place to go for Graph NNs datasets | @napoles-uach identifies the [TUDatasets](https://chrsmrrs.github.io/datasets/) (A collection of benchmark datasets for graph classification and regression).
Added to the Tasks in the initial issue. | Let's make Hugging Face Datasets the central hub for GNN datasets :)
**Motivation**. Datasets are currently quite scattered and an open-source central point such as the Hugging Face Hub would be ideal to support the growth of the GNN field.
What are some datasets worth integrating into the Hugging Face hub?
In... | 22 | Making Hugging Face the place to go for Graph NNs datasets
Let's make Hugging Face Datasets the central hub for GNN datasets :)
**Motivation**. Datasets are currently quite scattered and an open-source central point such as the Hugging Face Hub would be ideal to support the growth of the GNN field.
What are som... | [
-0.0233607776,
-0.2450852394,
-0.0979368985,
-0.0057547195,
-0.1070017889,
-0.0364056304,
-0.0293638613,
0.1181742325,
0.3275017738,
0.1441016197,
-0.1514695883,
-0.1023743451,
-0.2231216431,
0.4196002781,
0.441177547,
-0.1565524787,
0.3331486881,
0.0985798538,
0.161082238,
-0.... |
https://github.com/huggingface/datasets/issues/3832 | Making Hugging Face the place to go for Graph NNs datasets | Great initiative! Let's keep this issue for these 3 datasets, but moving forward maybe let's create a new issue per dataset :rocket: great work @napoles-uach and @omarespejel! | Let's make Hugging Face Datasets the central hub for GNN datasets :)
**Motivation**. Datasets are currently quite scattered and an open-source central point such as the Hugging Face Hub would be ideal to support the growth of the GNN field.
What are some datasets worth integrating into the Hugging Face hub?
In... | 27 | Making Hugging Face the place to go for Graph NNs datasets
Let's make Hugging Face Datasets the central hub for GNN datasets :)
**Motivation**. Datasets are currently quite scattered and an open-source central point such as the Hugging Face Hub would be ideal to support the growth of the GNN field.
What are som... | [
-0.0186396949,
-0.2302571982,
-0.0931067839,
-0.0285473689,
-0.1222266257,
-0.0637294203,
0.005651847,
0.1148463711,
0.3288173378,
0.1683222055,
-0.129839614,
-0.0805969685,
-0.2303717583,
0.4487098455,
0.445459038,
-0.1293425858,
0.3466871679,
0.1192114279,
0.2285960168,
0.003... |
https://github.com/huggingface/datasets/issues/3831 | when using to_tf_dataset with shuffle is true, not all completed batches are made | Hi @greenned, this is expected behaviour for `to_tf_dataset`. By default, we drop the smaller 'remainder' batch during training (i.e. when `shuffle=True`). If you really want to keep that batch, you can set `drop_remainder=False` when calling `to_tf_dataset()`. | ## Describe the bug
when converting a dataset to tf_dataset by using to_tf_dataset with shuffle true, the remainder is not converted to one batch
## Steps to reproduce the bug
this is the sample code below
https://colab.research.google.com/drive/1_oRXWsR38ElO1EYF9ayFoCU7Ou1AAej4?usp=sharing
## Expected resul... | 36 | when using to_tf_dataset with shuffle is true, not all completed batches are made
## Describe the bug
when converting a dataset to tf_dataset by using to_tf_dataset with shuffle true, the remainder is not converted to one batch
## Steps to reproduce the bug
this is the sample code below
https://colab.research.g... | [
-0.2101608664,
-0.1565533429,
0.1008584052,
0.1095013022,
0.3623255193,
0.1720545441,
0.0999238864,
0.3753473163,
-0.4610953629,
0.4258148968,
-0.1732449383,
0.3522307575,
0.0507792793,
-0.0305818375,
0.087232247,
0.1303320527,
0.4016360641,
-0.04188063,
-0.3483789861,
-0.25077... |
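The effect of `drop_remainder` on batch counts, as explained in the answer above, can be sketched with simple arithmetic (an illustration only; `to_tf_dataset` handles this internally):

```python
import math

def num_batches(n_examples, batch_size, drop_remainder):
    # With drop_remainder=True the partial final batch is discarded;
    # otherwise it is kept as a smaller batch.
    if drop_remainder:
        return n_examples // batch_size
    return math.ceil(n_examples / batch_size)
```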
https://github.com/huggingface/datasets/issues/3830 | Got error when load cnn_dailymail dataset | Was able to reproduce the issue on Colab; full logs below.
```
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
[<ipython-input-2-39967739ba7f>](https://localhost:8080/#) in <module>()
1 import datasets
... | When using datasets.load_dataset method to load cnn_dailymail dataset, got error as below:
- windows os: FileNotFoundError: [WinError 3] 系统找不到指定的路径。: 'D:\\SourceCode\\DataScience\\HuggingFace\\Data\\downloads\\1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b\\cnn\\stories'
- google colab: NotADirec... | 201 | Got error when load cnn_dailymail dataset
When using datasets.load_dataset method to load cnn_dailymail dataset, got error as below:
- windows os: FileNotFoundError: [WinError 3] 系统找不到指定的路径。: 'D:\\SourceCode\\DataScience\\HuggingFace\\Data\\downloads\\1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da58... | [
-0.3095068634,
0.2098640352,
0.0282028429,
0.4526914656,
0.2786128521,
0.0300756563,
0.6285235286,
-0.0142820952,
0.0125053413,
0.2989321351,
-0.300666064,
0.2344312817,
-0.4058233202,
0.1356498748,
-0.0893944651,
0.0908746794,
0.0534629188,
0.1627378315,
-0.0562224016,
0.03207... |
https://github.com/huggingface/datasets/issues/3830 | Got error when load cnn_dailymail dataset | Hi @jon-tow, thanks for reporting. And hi @dynamicwebpaige, thanks for your investigation.
This issue was already reported
- #3784
and its root cause is a change in the Google Drive service. See:
- #3786
We have already fixed it. See:
- #3787
We are planning to make a patch release today (indeed, we ... | When using datasets.load_dataset method to load cnn_dailymail dataset, got error as below:
- windows os: FileNotFoundError: [WinError 3] 系统找不到指定的路径。: 'D:\\SourceCode\\DataScience\\HuggingFace\\Data\\downloads\\1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b\\cnn\\stories'
- google colab: NotADirec... | 129 | Got error when load cnn_dailymail dataset
When using datasets.load_dataset method to load cnn_dailymail dataset, got error as below:
- windows os: FileNotFoundError: [WinError 3] 系统找不到指定的路径。: 'D:\\SourceCode\\DataScience\\HuggingFace\\Data\\downloads\\1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da58... | [
-0.1300634295,
0.2179487199,
0.022833854,
0.3490008712,
0.2328821421,
0.1309629232,
0.5199148655,
-0.0454560816,
0.0150798168,
0.2289736271,
-0.1221061125,
0.1092229187,
-0.3064778149,
0.3377999365,
-0.1343074143,
0.0932021588,
0.1022375524,
0.0350106098,
0.0430555679,
0.018973... |
https://github.com/huggingface/datasets/issues/3829 | [📄 Docs] Create a `datasets` performance guide. | Hi ! Yes this is definitely something we'll explore, since optimizing processing pipelines can be challenging and because performance is key here: we want anyone to be able to play with large-scale datasets more easily.
I think we'll start by documenting the performance of the dataset transforms we provide, and then... | ## Brief Overview
Downloading, saving, and preprocessing large datasets from the `datasets` library can often result in [performance bottlenecks](https://github.com/huggingface/datasets/issues/3735). These performance snags can be challenging to identify and to debug, especially for users who are less experienced with... | 60 | [📄 Docs] Create a `datasets` performance guide.
## Brief Overview
Downloading, saving, and preprocessing large datasets from the `datasets` library can often result in [performance bottlenecks](https://github.com/huggingface/datasets/issues/3735). These performance snags can be challenging to identify and to debug,... | [
-0.3263997734,
-0.067276217,
-0.1491247565,
0.0796751156,
0.1978186816,
0.1243428662,
-0.0817878321,
0.4515270591,
-0.2909837365,
0.0455370173,
-0.0428125598,
0.2855461538,
-0.3281092048,
0.4099058807,
0.2774599791,
-0.3230414093,
-0.1052235588,
0.0541575961,
-0.0467099287,
0.2... |
https://github.com/huggingface/datasets/issues/3828 | The Pile's _FEATURE spec seems to be incorrect | Hi @dlwh, thanks for reporting.
Please note, that the source data files for "all" config are different from the other configurations.
The "all" config contains the official Pile data files, from https://mystic.the-eye.eu/public/AI/pile/
All data examples contain a "meta" dict with a single "pile_set_name" key:
... | ## Describe the bug
If you look at https://huggingface.co/datasets/the_pile/blob/main/the_pile.py:
For "all"
* the pile_set_name is never set for data
* there's actually an id field inside of "meta"
For subcorpora pubmed_central and hacker_news:
* the meta is specified to be a string, but it's actually a di... | 219 | The Pile's _FEATURE spec seems to be incorrect
## Describe the bug
If you look at https://huggingface.co/datasets/the_pile/blob/main/the_pile.py:
For "all"
* the pile_set_name is never set for data
* there's actually an id field inside of "meta"
For subcorpora pubmed_central and hacker_news:
* the meta is... | [
-0.0291287582,
-0.0090103708,
0.0997361913,
0.1450235695,
0.3251174688,
-0.0145726111,
0.5119669437,
0.2684321404,
-0.2205349058,
0.1227269843,
-0.1169431806,
0.3082396984,
0.2474182248,
0.3708834946,
-0.0749322399,
0.1385888159,
0.2165843546,
-0.1402181834,
0.1202760488,
-0.16... |
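The meta layout the maintainer describes above for the "all" config can be sketched as plain Python (a hedged illustration; the actual feature spec lives in `the_pile.py`, and the subset name shown is an example):

```python
# In the "all" config, each example's "meta" is a dict with a single
# "pile_set_name" key (not a string, as some subcorpora specs claim).
example = {
    "text": "Some document text ...",
    "meta": {"pile_set_name": "Pile-CC"},
}

def pile_subset(ex):
    # Returns the source subset name recorded in the meta dict.
    return ex["meta"]["pile_set_name"]
```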
https://github.com/huggingface/datasets/issues/3823 | 500 internal server error when trying to open a dataset composed of Zarr stores | Hi @jacobbieker, thanks for reporting!
I have transferred this issue to our Hub team and they are investigating it. I keep you informed. | ## Describe the bug
The dataset [openclimatefix/mrms](https://huggingface.co/datasets/openclimatefix/mrms) gives a 500 server error when trying to open it on the website, or through code.
The dataset doesn't have a loading script yet, and I did push two [xarray](https://docs.xarray.dev/en/stable/) Zarr stores of da... | 23 | 500 internal server error when trying to open a dataset composed of Zarr stores
## Describe the bug
The dataset [openclimatefix/mrms](https://huggingface.co/datasets/openclimatefix/mrms) gives a 500 server error when trying to open it on the website, or through code.
The dataset doesn't have a loading script yet,... | [
-0.3984364271,
0.1099737883,
0.0756590962,
0.2678704262,
0.3084725738,
-0.0378655791,
0.3065187633,
0.1853011996,
0.1879682988,
0.1837238818,
-0.3976977468,
0.252807498,
0.0605284162,
0.3931142986,
-0.1333766282,
0.1691716313,
-0.0300634038,
0.0935496241,
-0.1518300325,
0.07442... |
https://github.com/huggingface/datasets/issues/3823 | 500 internal server error when trying to open a dataset composed of Zarr stores | Hi @jacobbieker, we are investigating this issue on our side and we'll see if we can fix it, but please note that your repo is considered problematic for git. Here are the results of running https://github.com/github/git-sizer on it:
```
Processing blobs: 147448
Processing trees: 27 ... | ## Describe the bug
The dataset [openclimatefix/mrms](https://huggingface.co/datasets/openclimatefix/mrms) gives a 500 server error when trying to open it on the website, or through code.
The dataset doesn't have a loading script yet, and I did push two [xarray](https://docs.xarray.dev/en/stable/) Zarr stores of da... | 142 | 500 internal server error when trying to open a dataset composed of Zarr stores
## Describe the bug
The dataset [openclimatefix/mrms](https://huggingface.co/datasets/openclimatefix/mrms) gives a 500 server error when trying to open it on the website, or through code.
The dataset doesn't have a loading script yet,... | [
-0.3984364271, 0.1099737883, 0.0756590962, …
https://github.com/huggingface/datasets/issues/3823 | 500 internal server error when trying to open a dataset composed of Zarr stores | Hi, thanks for getting back to me so quick! And yeah, I figured that was probably the problem. I was going to try to delete the repo, but couldn't through the website, so if that's the easiest way to solve it, I can regenerate the dataset in a different format with less tiny files, and you guys can delete the repo as i... | ## Describe the bug
The dataset [openclimatefix/mrms](https://huggingface.co/datasets/openclimatefix/mrms) gives a 500 server error when trying to open it on the website, or through code.
The dataset doesn't have a loading script yet, and I did push two [xarray](https://docs.xarray.dev/en/stable/) Zarr stores of da... | 97 | 500 internal server error when trying to open a dataset composed of Zarr stores
## Describe the bug
The dataset [openclimatefix/mrms](https://huggingface.co/datasets/openclimatefix/mrms) gives a 500 server error when trying to open it on the website, or through code.
The dataset doesn't have a loading script yet,... | [
-0.3984364271, 0.1099737883, 0.0756590962, …
https://github.com/huggingface/datasets/issues/3823 | 500 internal server error when trying to open a dataset composed of Zarr stores | Hi @jacobbieker,
For future use cases, our Hub team is still pondering whether to limit the maximum number of files per repo to avoid technical issues...
On the meantime, they have made a fix and your dataset is working: https://huggingface.co/datasets/openclimatefix/mrms | ## Describe the bug
The dataset [openclimatefix/mrms](https://huggingface.co/datasets/openclimatefix/mrms) gives a 500 server error when trying to open it on the website, or through code.
The dataset doesn't have a loading script yet, and I did push two [xarray](https://docs.xarray.dev/en/stable/) Zarr stores of da... | 40 | 500 internal server error when trying to open a dataset composed of Zarr stores
## Describe the bug
The dataset [openclimatefix/mrms](https://huggingface.co/datasets/openclimatefix/mrms) gives a 500 server error when trying to open it on the website, or through code.
The dataset doesn't have a loading script yet,... | [
-0.3984364271, 0.1099737883, 0.0756590962, …
https://github.com/huggingface/datasets/issues/3822 | Add Biwi Kinect Head Pose Database | Official dataset location : https://icu.ee.ethz.ch/research/datsets.html
In the "Biwi Kinect Head Pose Database" section, I do not find any information regarding "Downloading the dataset." . Do we mail the authors regarding this ?
I found the dataset on Kaggle : [Link](https://www.kaggle.com/kmader/biwi-kinect-head... | ## Adding a Dataset
- **Name:** Biwi Kinect Head Pose Database
- **Description:** Over 15K images of 20 people recorded with a Kinect while turning their heads around freely. For each frame, depth and rgb images are provided, together with the ground truth in the form of the 3D location of the head and its rotation angles.
- ... | 85 | Add Biwi Kinect Head Pose Database
## Adding a Dataset
- **Name:** Biwi Kinect Head Pose Database
- **Description:** Over 15K images of 20 people recorded with a Kinect while turning their heads around freely. For each frame, depth and rgb images are provided, together with the ground truth in the form of the 3D location of ... | [
0.0941344947, 0.0514211543, -0.0333518162, …
https://github.com/huggingface/datasets/issues/3820 | `pubmed_qa` checksum mismatch | Hi @jon-tow, thanks for reporting.
This issue was already reported and its root cause is a change in the Google Drive service. See:
- #3786
We have already fixed it. See:
- #3787
We are planning to make a patch release today.
In the meantime, you can get this fix by installing our library from the GitHu... | ## Describe the bug
Loading [`pubmed_qa`](https://huggingface.co/datasets/pubmed_qa) results in a mismatched checksum error.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import datasets
try:
    datasets.load_dataset("pubmed_qa", "pqa_labeled")
except Exception as e:
    print(e...
## Describe the bug
Loading [`pubmed_qa`](https://huggingface.co/datasets/pubmed_qa) results in a mismatched checksum error.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import datasets
try:
datasets.load_dataset("pubmed_qa", "pqa_labeled")
exc... | [
-0.0026249781, 0.1502344012, -0.0901137516, …
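The `NonMatchingChecksumError` in the threads above boils down to comparing a recorded (size, sha256) fingerprint against an expected one; when Google Drive started returning a different response body, the recorded checksum changed. A minimal stdlib sketch of that idea (the function and exception names mirror the traceback, but this is not the library's actual implementation, and the URL and file contents are invented):

```python
import hashlib

def file_fingerprint(data: bytes) -> dict:
    # The library records a sha256 checksum and the byte size per download.
    return {"num_bytes": len(data), "checksum": hashlib.sha256(data).hexdigest()}

class NonMatchingChecksumError(Exception):
    pass

def verify_checksums(expected: dict, recorded: dict, ignore_verifications: bool = False):
    # Compare recorded fingerprints against the expected ones; any mismatch
    # (e.g. an HTML quota page instead of the data file) raises the error.
    if ignore_verifications:
        return
    bad = [url for url, exp in expected.items() if recorded.get(url) != exp]
    if bad:
        raise NonMatchingChecksumError(
            f"Checksums didn't match for dataset source files: {bad}"
        )

expected = {"https://example.org/data.zip": file_fingerprint(b"old file contents")}
recorded = {"https://example.org/data.zip": file_fingerprint(b"<html>quota page</html>")}
```

Calling `verify_checksums(expected, recorded)` raises, while passing `ignore_verifications=True` skips the check, which is why that flag is suggested as a workaround above.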
https://github.com/huggingface/datasets/issues/3818 | Support for "sources" parameter in the add() and add_batch() methods in datasets.metric - SARI | Hi, thanks for reporting! We can add a `sources: datasets.Value("string")` feature to the `Features` dict in the `SARI` script to fix this. Would you be interested in submitting a PR? | **Is your feature request related to a problem? Please describe.**
The methods `add_batch` and `add` from the `Metric` [class](https://github.com/huggingface/datasets/blob/1675ad6a958435b675a849eafa8a7f10fe0f43bc/src/datasets/metric.py) does not work with [SARI](https://github.com/huggingface/datasets/blob/master/metr... | 30 | Support for "sources" parameter in the add() and add_batch() methods in datasets.metric - SARI
**Is your feature request related to a problem? Please describe.**
The methods `add_batch` and `add` from the `Metric` [class](https://github.com/huggingface/datasets/blob/1675ad6a958435b675a849eafa8a7f10fe0f43bc/src/datas... | [
-0.4188570678, 0.2724249363, -0.0092955474, …
https://github.com/huggingface/datasets/issues/3818 | Support for "sources" parameter in the add() and add_batch() methods in datasets.metric - SARI | Hi Mario,
Thanks for your message. I did try to add `sources` into the `Features` dict using a script for the metric:
```
features=datasets.Features(
{
"sources": datasets.Value("string", id="sequence"),
"predictions": datasets.Value("string", ... | **Is your feature request related to a problem? Please describe.**
The methods `add_batch` and `add` from the `Metric` [class](https://github.com/huggingface/datasets/blob/1675ad6a958435b675a849eafa8a7f10fe0f43bc/src/datasets/metric.py) does not work with [SARI](https://github.com/huggingface/datasets/blob/master/metr... | 133 | Support for "sources" parameter in the add() and add_batch() methods in datasets.metric - SARI
**Is your feature request related to a problem? Please describe.**
The methods `add_batch` and `add` from the `Metric` [class](https://github.com/huggingface/datasets/blob/1675ad6a958435b675a849eafa8a7f10fe0f43bc/src/datas... | [
-0.4188570678, 0.2724249363, -0.0092955474, …
https://github.com/huggingface/datasets/issues/3818 | Support for "sources" parameter in the add() and add_batch() methods in datasets.metric - SARI | The `Metric` class has been modified recently to support this use-case, but the `add_batch` + `compute` pattern still doesn't work correctly. I'll open a PR. | **Is your feature request related to a problem? Please describe.**
The methods `add_batch` and `add` from the `Metric` [class](https://github.com/huggingface/datasets/blob/1675ad6a958435b675a849eafa8a7f10fe0f43bc/src/datasets/metric.py) does not work with [SARI](https://github.com/huggingface/datasets/blob/master/metr... | 25 | Support for "sources" parameter in the add() and add_batch() methods in datasets.metric - SARI
**Is your feature request related to a problem? Please describe.**
The methods `add_batch` and `add` from the `Metric` [class](https://github.com/huggingface/datasets/blob/1675ad6a958435b675a849eafa8a7f10fe0f43bc/src/datas... | [
-0.4188570678, 0.2724249363, -0.0092955474, …
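The requested pattern, `add_batch` accumulating a `sources` column alongside `predictions` and `references` before `compute`, can be sketched without the real `datasets.Metric` machinery (the class below is illustrative only, and the "score" is a placeholder standing in for the actual SARI computation):

```python
class SimpleMetric:
    """Minimal add_batch/compute accumulator that, unlike most metrics,
    also tracks a `sources` column, as SARI requires."""

    def __init__(self):
        self.sources, self.predictions, self.references = [], [], []

    def add_batch(self, *, sources, predictions, references):
        # All three columns must stay aligned row by row.
        if not (len(sources) == len(predictions) == len(references)):
            raise ValueError("sources, predictions and references must have the same length")
        self.sources.extend(sources)
        self.predictions.extend(predictions)
        self.references.extend(references)

    def compute(self):
        # Placeholder score: fraction of predictions that differ from the
        # source sentence (the real metric would compute SARI here).
        changed = sum(p != s for p, s in zip(self.predictions, self.sources))
        return {"score": changed / len(self.predictions)}

metric = SimpleMetric()
metric.add_batch(
    sources=["About 95 species are currently accepted."],
    predictions=["About 95 species are accepted."],
    references=[["About 95 you now get in."]],
)
```

The key point is that `sources` travels through `add_batch` with the other columns instead of being passed only at `compute` time.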
https://github.com/huggingface/datasets/issues/3813 | Add MetaShift dataset | I would like to take this up and give it a shot. Any image specific - dataset guidelines to keep in mind ? Thank you. | ## Adding a Dataset
- **Name:** MetaShift
- **Description:** collection of 12,868 sets of natural images across 410 classes-
- **Paper:** https://arxiv.org/abs/2202.06523v1
- **Data:** https://github.com/weixin-liang/metashift
Instructions to add a new dataset can be found [here](https://github.com/huggingface/... | 25 | Add MetaShift dataset
## Adding a Dataset
- **Name:** MetaShift
- **Description:** collection of 12,868 sets of natural images across 410 classes-
- **Paper:** https://arxiv.org/abs/2202.06523v1
- **Data:** https://github.com/weixin-liang/metashift
Instructions to add a new dataset can be found [here](https:/... | [
-0.0806370303, -0.0981557444, -0.1193474159, …
https://github.com/huggingface/datasets/issues/3813 | Add MetaShift dataset | I've started working on adding this dataset. I require some inputs on the following :
Ref for the initial draft [here](https://github.com/dnaveenr/datasets/blob/add_metashift_dataset/datasets/metashift/metashift.py)
1. The dataset does not have a typical - train/test/val split. What do we do for the _split_generat... | ## Adding a Dataset
- **Name:** MetaShift
- **Description:** collection of 12,868 sets of natural images across 410 classes-
- **Paper:** https://arxiv.org/abs/2202.06523v1
- **Data:** https://github.com/weixin-liang/metashift
Instructions to add a new dataset can be found [here](https://github.com/huggingface/... | 156 | Add MetaShift dataset
## Adding a Dataset
- **Name:** MetaShift
- **Description:** collection of 12,868 sets of natural images across 410 classes-
- **Paper:** https://arxiv.org/abs/2202.06523v1
- **Data:** https://github.com/weixin-liang/metashift
Instructions to add a new dataset can be found [here](https:/... | [
-0.0206870008, -0.1677132696, -0.0047672577, …
https://github.com/huggingface/datasets/issues/3813 | Add MetaShift dataset | Hi ! Thanks for adding this dataset :) Let me answer your questions:
1. in this case you can put everything in the "train" split
2. Yes you can copy the script (provided you also include the MIT license of the code in the file header for example). Though we ideally try to not create new directories nor files when g... | ## Adding a Dataset
- **Name:** MetaShift
- **Description:** collection of 12,868 sets of natural images across 410 classes-
- **Paper:** https://arxiv.org/abs/2202.06523v1
- **Data:** https://github.com/weixin-liang/metashift
Instructions to add a new dataset can be found [here](https://github.com/huggingface/... | 152 | Add MetaShift dataset
## Adding a Dataset
- **Name:** MetaShift
- **Description:** collection of 12,868 sets of natural images across 410 classes-
- **Paper:** https://arxiv.org/abs/2202.06523v1
- **Data:** https://github.com/weixin-liang/metashift
Instructions to add a new dataset can be found [here](https:/... | [
-0.1713750213, -0.085492596, -0.1429995298, …
https://github.com/huggingface/datasets/issues/3813 | Add MetaShift dataset | Great. This is helpful. Thanks @lhoestq .
Regarding Point 2, I'll try using yield instead of creating the directories and see if it's feasible. selected_classes config sounds good. | ## Adding a Dataset
- **Name:** MetaShift
- **Description:** collection of 12,868 sets of natural images across 410 classes-
- **Paper:** https://arxiv.org/abs/2202.06523v1
- **Data:** https://github.com/weixin-liang/metashift
Instructions to add a new dataset can be found [here](https://github.com/huggingface/... | 28 | Add MetaShift dataset
## Adding a Dataset
- **Name:** MetaShift
- **Description:** collection of 12,868 sets of natural images across 410 classes-
- **Paper:** https://arxiv.org/abs/2202.06523v1
- **Data:** https://github.com/weixin-liang/metashift
Instructions to add a new dataset can be found [here](https:/... | [
-0.3895172775, -0.0509430468, -0.1791785359, …
https://github.com/huggingface/datasets/issues/3809 | Checksums didn't match for datasets on Google Drive | Hi @muelletm, thanks for reporting.
This issue was already reported and its root cause is a change in the Google Drive service. See:
- #3786
We have already fixed it. See:
- #3787
Until our next `datasets` library release, you can get this fix by installing our library from the GitHub master branch:
```sh... | ## Describe the bug
Datasets hosted on Google Drive do not seem to work right now.
Loading them fails with a checksum error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
for dataset in ["head_qa", "yelp_review_full"]:
try:
load_dataset(dataset)
except Exception as excep... | 103 | Checksums didn't match for datasets on Google Drive
## Describe the bug
Datasets hosted on Google Drive do not seem to work right now.
Loading them fails with a checksum error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
for dataset in ["head_qa", "yelp_review_full"]:
try:
... | [
-0.2004948854, 0.1379584223, -0.0935079902, …
https://github.com/huggingface/datasets/issues/3808 | Pre-Processing Cache Fails when using a Factory pattern | Ok - this is still an issue but I believe the root cause is different than I originally thought. I'm now able to get caching to work consistently with the above example as long as I fix the python hash seed `export PYTHONHASHSEED=1234` | ## Describe the bug
If you utilize a pre-processing function which is created using a factory pattern, the function hash changes on each run (even if the function is identical) and therefore the data will be reproduced each time.
## Steps to reproduce the bug
```python
def preprocess_function_factory(augmenta... | 43 | Pre-Processing Cache Fails when using a Factory pattern
## Describe the bug
If you utilize a pre-processing function which is created using a factory pattern, the function hash changes on each run (even if the function is identical) and therefore the data will be reproduced each time.
## Steps to reproduce the bu... | [
0.0456963852, 0.1248688474, -0.0719592646, …
https://github.com/huggingface/datasets/issues/3808 | Pre-Processing Cache Fails when using a Factory pattern | Hi!
Yes, our hasher should work with decorators. For instance, this dummy example:
```python
def f(arg):
    def f1(ex):
        return {"a": ex["col1"] + arg}
    return f1
```
gives the same hash across different Python sessions (`datasets.fingerprint.Hasher.hash(f("string1")` returns `"408c9059f89dbd6c"` ... | ## Describe the bug
If you utilize a pre-processing function which is created using a factory pattern, the function hash changes on each run (even if the function is identical) and therefore the data will be reproduced each time.
## Steps to reproduce the bug
```python
def preprocess_function_factory(augmenta... | 76 | Pre-Processing Cache Fails when using a Factory pattern
## Describe the bug
If you utilize a pre-processing function which is created using a factory pattern, the function hash changes on each run (even if the function is identical) and therefore the data will be reproduced each time.
## Steps to reproduce the bu... | [
-0.0620613508, 0.1236684546, -0.0902339444, …
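The caching failure above comes down to function fingerprints that vary across runs: builtin `hash()` is salted per process unless `PYTHONHASHSEED` is fixed, and that salt can leak into the serialized form of a function. A stdlib sketch of a process-stable fingerprint for factory-produced functions (the real `datasets.fingerprint.Hasher` pickles with dill; this is only the idea, not its implementation):

```python
import hashlib
import marshal

def fingerprint(func) -> str:
    # hashlib (not builtin hash()) so the digest is stable across processes.
    h = hashlib.sha256()
    h.update(marshal.dumps(func.__code__))           # bytecode and constants
    for cell in func.__closure__ or ():
        h.update(repr(cell.cell_contents).encode())  # captured factory arguments
    return h.hexdigest()[:16]

def preprocess_factory(augmentation):
    # Factory pattern from the issue: each call returns a new closure.
    def preprocess(example):
        return {"text": example["text"] + augmentation}
    return preprocess

f1 = fingerprint(preprocess_factory("!"))
f2 = fingerprint(preprocess_factory("!"))  # same argument, same fingerprint
f3 = fingerprint(preprocess_factory("?"))  # different argument, different fingerprint
```

Two closures built with the same factory argument fingerprint identically, which is the property the cache needs in order to hit.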
https://github.com/huggingface/datasets/issues/3807 | NonMatchingChecksumError in xcopa dataset | Hi @afcruzs-ms, thanks for opening this separate issue for your problem.
The root problem in the other issue (#3792) was a change in the service of Google Drive.
But in your case, the `xcopa` dataset is not hosted on Google Drive. Therefore, the root cause should be a different one.
Let me look at it... | ## Describe the bug
Loading the xcopa dataset doesn't work, it fails due to a mismatch in the checksum.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("xcopa", "it")
```
## Expected results
The dataset should be loaded correctly.
## Actual results
Fails ... | 55 | NonMatchingChecksumError in xcopa dataset
## Describe the bug
Loading the xcopa dataset doesn't work, it fails due to a mismatch in the checksum.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("xcopa", "it")
```
## Expected results
The dataset should be lo... | [
-0.1377996951, 0.1573126018, -0.0541850813, …
https://github.com/huggingface/datasets/issues/3807 | NonMatchingChecksumError in xcopa dataset | @afcruzs-ms, I'm not able to reproduce the issue you reported:
```python
In [1]: from datasets import load_dataset
...: dataset = load_dataset("xcopa", "it")
Downloading builder script: 5.21kB [00:00, 2.75MB/s] ... | ## Describe the bug
Loading the xcopa dataset doesn't work, it fails due to a mismatch in the checksum.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("xcopa", "it")
```
## Expected results
The dataset should be loaded correctly.
## Actual results
Fails ... | 134 | NonMatchingChecksumError in xcopa dataset
## Describe the bug
Loading the xcopa dataset doesn't work, it fails due to a mismatch in the checksum.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("xcopa", "it")
```
## Expected results
The dataset should be lo... | [
-0.1630280316, 0.0196220763, -0.0325452425, …
https://github.com/huggingface/datasets/issues/3804 | Text builder with custom separator line boundaries | Hi! Interesting :)
Could you give more details on what kind of separators you would like to use instead ? | **Is your feature request related to a problem? Please describe.**
The current [Text](https://github.com/huggingface/datasets/blob/207be676bffe9d164740a41a883af6125edef135/src/datasets/packaged_modules/text/text.py#L23) builder implementation splits texts with `splitlines()` which splits the text on several line bound... | 21 | Text builder with custom separator line boundaries
**Is your feature request related to a problem? Please describe.**
The current [Text](https://github.com/huggingface/datasets/blob/207be676bffe9d164740a41a883af6125edef135/src/datasets/packaged_modules/text/text.py#L23) builder implementation splits texts with `spli... | [
-0.5105410218, 0.0709034652, -0.1626145095, …
https://github.com/huggingface/datasets/issues/3804 | Text builder with custom separator line boundaries | Ok I see, maybe there can be a `sep` parameter to allow users to specify what line/paragraph separator they'd like to use | **Is your feature request related to a problem? Please describe.**
The current [Text](https://github.com/huggingface/datasets/blob/207be676bffe9d164740a41a883af6125edef135/src/datasets/packaged_modules/text/text.py#L23) builder implementation splits texts with `splitlines()` which splits the text on several line bound... | 22 | Text builder with custom separator line boundaries
**Is your feature request related to a problem? Please describe.**
The current [Text](https://github.com/huggingface/datasets/blob/207be676bffe9d164740a41a883af6125edef135/src/datasets/packaged_modules/text/text.py#L23) builder implementation splits texts with `spli... | [
-0.4740251005, 0.032496769, -0.1551950425, …
https://github.com/huggingface/datasets/issues/3804 | Text builder with custom separator line boundaries | Thanks for requesting this enhancement. We have recently found a somehow related issue with another dataset:
- #3704
Let me make a PR proposal. | **Is your feature request related to a problem? Please describe.**
The current [Text](https://github.com/huggingface/datasets/blob/207be676bffe9d164740a41a883af6125edef135/src/datasets/packaged_modules/text/text.py#L23) builder implementation splits texts with `splitlines()` which splits the text on several line bound... | 24 | Text builder with custom separator line boundaries
**Is your feature request related to a problem? Please describe.**
The current [Text](https://github.com/huggingface/datasets/blob/207be676bffe9d164740a41a883af6125edef135/src/datasets/packaged_modules/text/text.py#L23) builder implementation splits texts with `spli... | [
-0.4503132403, 0.0747961923, -0.1288341433, …
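The proposed `sep` parameter can be sketched as a chunked reader that splits only on the caller's separator, unlike `splitlines()`, which also breaks on characters such as `\u2028`. Everything below is an illustration of the idea, not the Text builder's actual code:

```python
from io import StringIO

def iter_records(fileobj, sep="\n", chunk_size=16):
    # Read in fixed-size chunks so huge files never sit fully in memory;
    # keep the tail after the last separator in the buffer, since a
    # separator may straddle a chunk boundary.
    buffer = ""
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            break
        buffer += chunk
        *records, buffer = buffer.split(sep)
        yield from records
    if buffer:
        yield buffer  # trailing record without a final separator

# \u2028 is a Unicode line separator: splitlines() would cut the first
# record in two, but splitting on a paragraph separator keeps it whole.
text = "doc one\u2028still doc one\n\ndoc two\n\ndoc three"
docs = list(iter_records(StringIO(text), sep="\n\n"))
```

With `sep="\n\n"` each paragraph becomes one example, and the `\u2028` inside the first document is preserved instead of becoming a record boundary.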
https://github.com/huggingface/datasets/issues/3792 | Checksums didn't match for dataset source | Same issue with `dataset = load_dataset("dbpedia_14")`
```
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbQ2Vic1kxMmZZQ1k'] | ## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.co... | 16 | Checksums didn't match for dataset source
## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for data... | [
-0.3649029136, 0.2177056968, -0.1216040552, …
https://github.com/huggingface/datasets/issues/3792 | Checksums didn't match for dataset source | I think this is a side-effect of #3787. The checksums won't match because the URLs have changed. @rafikg @Y0mingZhang, while this is being fixed, maybe you can load the datasets as such:
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]", ignore_verifications=True)`
`dataset = load_dataset("... | ## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.co... | 53 | Checksums didn't match for dataset source
## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for data... | [
-0.2045929879, 0.0499821641, -0.0409405828, …
https://github.com/huggingface/datasets/issues/3792 | Checksums didn't match for dataset source | Hi! Installing the `datasets` package from master (`pip install git+https://github.com/huggingface/datasets.git`) and then redownloading the datasets with `download_mode` set to `force_redownload` (e.g. `dataset = load_dataset("dbpedia_14", download_mode="force_redownload")`) should fix the issue. | ## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.co... | 29 | Checksums didn't match for dataset source
## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for data... | [
-0.3843276799, 0.1996199936, -0.1146312505, …
https://github.com/huggingface/datasets/issues/3792 | Checksums didn't match for dataset source | Hi @rafikg and @Y0mingZhang, thanks for reporting.
Indeed it seems that Google Drive changed their way to access their data files. We have recently handled that change:
- #3787
but it will be accessible to users only in our next release of the `datasets` version.
- Note that our latest release (version 1.18.3) ... | ## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.co... | 117 | Checksums didn't match for dataset source
## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for data... | [
-0.2806977928, 0.1842425019, -0.0431707464, …
https://github.com/huggingface/datasets/issues/3792 | Checksums didn't match for dataset source | @albertvillanova by running:
```
pip install git+https://github.com/huggingface/datasets#egg=datasets
data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]", download_mode="force_redownload", ignore_verifications=True)
```
I had a pickle error **UnpicklingError: invalid load key, '<'** ... | ## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.co... | 59 | Checksums didn't match for dataset source
## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for data... | [
-0.2528680563, 0.1900031567, -0.0636426732, …
https://github.com/huggingface/datasets/issues/3792 | Checksums didn't match for dataset source | This issue impacts many more datasets than the ones mentioned in this thread. Can we post # of downloads for each dataset by day (by successes and failures)? If so, it should be obvious which ones are failing. | ## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.co... | 38 | Checksums didn't match for dataset source
## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for data... | [
-0.3122647703, 0.2210709304, -0.0906830654, 0.2768563628, 0.2292595357, -0.0886908248, 0.2925544381, 0.3545769453, 0.1828720719, 0.2068864405, 0.1387791336, -0.0111588202, 0.0016289384, 0.1781060994, -0.0697038993, 0.1052992493, 0.3057503998, -0.0951659307, -0.1911695302, -0.06...
https://github.com/huggingface/datasets/issues/3792 | Checksums didn't match for dataset source | I can see this problem too in xcopa, unfortunately installing the latest master (1.18.4.dev0) doesn't work, @albertvillanova .
```
from datasets import load_dataset
dataset = load_dataset("xcopa", "it")
```
Throws
```
in verify_checksums(expected_checksums, recorded_checksums, verification_name)
38 ... | ## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.co... | 74 | Checksums didn't match for dataset source
## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for data... | [
-0.3634573221, 0.2217582166, -0.0390356928, 0.1520665884, 0.1960245818, -0.0046920897, -0.016568698, 0.4806540906, 0.0124927619, 0.1067954972, 0.0860225409, 0.1600408703, 0.2092339993, 0.1283581704, -0.1657698601, 0.3392213881, 0.2206757218, 0.0590420403, -0.2113899887, -0.0897...
https://github.com/huggingface/datasets/issues/3792 | Checksums didn't match for dataset source | Hi @rafikg, I think that is another different issue. Let me check it...
I guess maybe you are using a different Python version than the one the dataset owner used to create the pickle file...
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.co... | 35 | Checksums didn't match for dataset source
## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for data... | [
-0.1837106794, 0.1279495656, -0.0920477808, 0.2596702278, 0.250797838, -0.0687125847, 0.1381366551, 0.4400169253, 0.4555806816, 0.1520557553, 0.0166322291, 0.4371010959, -0.0223840903, 0.2981742322, -0.1252115518, 0.0508493707, 0.2061954588, 0.0645043701, -0.3080223501, -0.1772...
https://github.com/huggingface/datasets/issues/3792 | Checksums didn't match for dataset source | @kwchurch the datasets impacted for this specific issue are the ones which are hosted at Google Drive. | ## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.co... | 17 | Checksums didn't match for dataset source
## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for data... | [
-0.27233392, 0.2513685524, -0.0825369731, 0.3092967868, 0.1599651575, 0.0824245438, 0.3484656811, 0.2746692896, 0.1714856476, 0.1092719883, 0.0492886454, -0.0215489492, 0.0095687564, 0.1690100133, 0.0618317537, 0.1273817122, 0.3405562937, -0.0278939791, -0.0939082131, -0.107855...
https://github.com/huggingface/datasets/issues/3792 | Checksums didn't match for dataset source | @afcruzs-ms I think your issue is a different one, because that dataset is not hosted at Google Drive. Would you mind open another issue for that other problem, please? Thanks! :) | ## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.co... | 31 | Checksums didn't match for dataset source
## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for data... | [
-0.338044852, 0.2343494892, -0.1028938442, 0.2488979846, 0.2559024394, 0.0522294119, 0.2988871634, 0.3855467439, 0.1882091761, 0.1851580292, 0.0704605952, -0.0634310022, 0.0281986184, 0.2961125076, -0.1326325685, 0.0332363434, 0.247208029, -0.0536762178, -0.0801993161, -0.09378...
https://github.com/huggingface/datasets/issues/3792 | Checksums didn't match for dataset source | @albertvillanova just to let you know that I tried it locally and on colab and it is the same error | ## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.co... | 20 | Checksums didn't match for dataset source
## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for data... | [
-0.3873740733, 0.2373123765, -0.0824100897, 0.3719082177, 0.2275023013, -0.0049790237, 0.1810005456, 0.3520689011, 0.1753032058, 0.1124963388, -0.0698555857, 0.0891029462, 0.0238753799, 0.2454140037, -0.0977155194, 0.0805002674, 0.2709171176, 0.0180112012, -0.2669120729, -0.038...
https://github.com/huggingface/datasets/issues/3792 | Checksums didn't match for dataset source | There are many many datasets on HugggingFace that are receiving this checksum error. Some of these datasets are very popular. There must be a way to track these errors, or to do regression testing. We don't want to catch each of these errors on each dataset, one at a time. | ## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.co... | 50 | Checksums didn't match for dataset source
## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for data... | [
-0.0298189111, -0.2039603293, 0.021170957, 0.4256593585, 0.290446192, 0.0431604907, 0.1427193582, 0.1986541897, 0.2408197373, 0.2792515159, 0.1181162894, -0.1039062366, -0.1298159063, 0.2413554192, 0.2599909902, 0.1816885918, 0.3475727141, -0.0357042402, -0.1994270831, -0.01760...
https://github.com/huggingface/datasets/issues/3792 | Checksums didn't match for dataset source | @rafikg I am sorry, but I can't reproduce your issue. For me it works OK for all languages. See: https://colab.research.google.com/drive/1yIcLw1it118-TYE3ZlFmV7gJcsF6UCsH?usp=sharing | ## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.co... | 20 | Checksums didn't match for dataset source
## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for data... | [
-0.3512475193, 0.1659745574, -0.0933461264, 0.2689795792, 0.3992979228, -0.1021842584, 0.2309536934, 0.5082909465, 0.2222323269, 0.0973307639, -0.0228513796, 0.1041472405, 0.1113994345, 0.2219659686, -0.1851460338, 0.0578541346, 0.2173553109, -0.0570551865, -0.253385514, -0.153...
https://github.com/huggingface/datasets/issues/3792 | Checksums didn't match for dataset source | @kwchurch the PR #3787 fixes this issue (generated by a change in Google Drive service) for ALL datasets with this issue. Once we make our next library release (in a couple of days), the fix will be accessible to all users that update our library from PyPI. | ## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.co... | 47 | Checksums didn't match for dataset source
## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for data... | [
-0.2429649383, 0.3284002244, -0.0522092618, 0.2091575712, 0.2420453131, 0.0454644859, 0.4633969069, 0.3967258036, 0.1045799553, 0.163781777, 0.0637601614, 0.0762733221, 0.1462747306, 0.2466353625, 0.0161854606, 0.037288975, 0.3186243176, -0.013912512, 0.0529289022, -0.048424992...
https://github.com/huggingface/datasets/issues/3792 | Checksums didn't match for dataset source | By the way, @rafikg, I discovered the URL for Spanish was wrong. I've created a PR to fix it:
- #3806 | ## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.co... | 21 | Checksums didn't match for dataset source
## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for data... | [
-0.2747673392, 0.301027149, -0.1136037931, 0.2822934687, 0.2694525123, 0.0328381695, 0.1257394701, 0.2994842231, 0.188693285, 0.1289109886, -0.0369257182, 0.1175513491, 0.0572667196, -0.0736763328, -0.0110750813, 0.1204035059, 0.293395102, -0.1240722165, -0.1735296547, -0.18825...
https://github.com/huggingface/datasets/issues/3792 | Checksums didn't match for dataset source | I have the same problem with "wider_face" dataset. It seems that "load_dataset" function can not download the dataset from google drive.
| ## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.co... | 21 | Checksums didn't match for dataset source
## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for data... | [
-0.387026459, 0.140017435, -0.0685568601, 0.4098217487, 0.1368342638, 0.1154296622, 0.3463055491, 0.2142237276, 0.3116495013, 0.1409804523, -0.0570875891, -0.0907565504, 0.1048549861, 0.3408809006, 0.1011659354, 0.0062464438, 0.3096239567, -0.0447073132, -0.0985486656, -0.13698...
https://github.com/huggingface/datasets/issues/3788 | Only-data dataset loaded unexpectedly as validation split | I see two options:
1. drop the "dev" keyword since it can be considered too generic
2. improve the pattern to something more reasonable, e.g. asking for a separator before and after "dev"
```python
["*[ ._-]dev[ ._-]*", "dev[ ._-]*"]
```
I think 2. is nice. If we agree on this one we can even decide to require ... | ## Describe the bug
As reported by @thomasw21 and @lhoestq, a dataset containing only a data file whose name matches the pattern `*dev*` will be returned as VALIDATION split, even if this is not the desired behavior, e.g. a file named `datosdevision.jsonl.gz`. | 67 | Only-data dataset loaded unexpectedly as validation split
## Describe the bug
As reported by @thomasw21 and @lhoestq, a dataset containing only a data file whose name matches the pattern `*dev*` will be returned as VALIDATION split, even if this is not the desired behavior, e.g. a file named `datosdevision.jsonl.gz`... | [
0.00019797, -0.0223320331, -0.0004890826, 0.0312808827, 0.2191147953, -0.200498417, 0.2411285788, 0.5713531971, -0.2572956681, -0.1379632652, 0.1421869248, 0.1751895398, -0.1989684403, 0.2185006291, 0.0153300706, 0.1457435042, 0.0395135172, 0.3197079599, 0.1515252441, -0.116634...
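The separator-aware patterns proposed above (option 2) can be tried out directly with the stdlib `fnmatch` module, which supports the same `[seq]` character classes as glob. A small sketch, with the helper name invented for illustration:

```python
import fnmatch

# The two patterns from option 2: "dev" must be delimited by a separator.
patterns = ["*[ ._-]dev[ ._-]*", "dev[ ._-]*"]

def looks_like_dev_split(filename: str) -> bool:
    # fnmatchcase avoids platform-dependent case folding
    return any(fnmatch.fnmatchcase(filename, p) for p in patterns)

print(looks_like_dev_split("datosdevision.jsonl.gz"))  # False: "dev" is inside a word
print(looks_like_dev_split("my_dataset_dev.foo"))      # True: "_dev." is delimited
print(looks_like_dev_split("dev.jsonl.gz"))            # True: second pattern
```

With these patterns the problematic `datosdevision.jsonl.gz` file from the bug report would no longer be classified as a validation split.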
https://github.com/huggingface/datasets/issues/3788 | Only-data dataset loaded unexpectedly as validation split | Yes, I had something like that on mind: "dev" not being part of a word.
```
"[^a-zA-Z]dev[^a-zA-Z]" | ## Describe the bug
As reported by @thomasw21 and @lhoestq, a dataset containing only a data file whose name matches the pattern `*dev*` will be returned as VALIDATION split, even if this is not the desired behavior, e.g. a file named `datosdevision.jsonl.gz`. | 17 | Only-data dataset loaded unexpectedly as validation split
## Describe the bug
As reported by @thomasw21 and @lhoestq, a dataset containing only a data file whose name matches the pattern `*dev*` will be returned as VALIDATION split, even if this is not the desired behavior, e.g. a file named `datosdevision.jsonl.gz`... | [
-0.0880025104, 0.0279189255, -0.027738167, 0.1770329475, 0.1313008666, -0.2075342238, 0.1929709613, 0.5046431422, -0.2156709433, -0.0228311941, 0.059249986, 0.23835361, -0.0268458817, 0.1029326171, 0.052912578, 0.3532885015, 0.1087847278, 0.3332918882, 0.05071152, -0.1112701148...
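The regex sketched in this comment can be checked against the filenames from the discussion. Note one caveat, which matters for the exchange that follows: `[^a-zA-Z]` consumes a character, so "dev" at the very start or end of a name is missed; lookarounds would avoid that.

```python
import re

# "dev" only counts when it is not part of a word (regex from the comment above).
DEV_RE = re.compile(r"[^a-zA-Z]dev[^a-zA-Z]")

print(bool(DEV_RE.search("my_dataset_dev.foo")))      # True: matches "_dev."
print(bool(DEV_RE.search("datosdevision.jsonl.gz")))  # False: "dev" inside a word
# Caveat: [^a-zA-Z] must consume a character, so a leading "dev" is missed.
# (?<![a-zA-Z])dev(?![a-zA-Z]) would handle string boundaries too.
print(bool(DEV_RE.search("dev.jsonl")))               # False, despite being a dev file
```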
https://github.com/huggingface/datasets/issues/3788 | Only-data dataset loaded unexpectedly as validation split | Is there a reason why we want that regex? It feels like something that'll still be an issue for some weird case. "my_dataset_dev" doesn't match your regex, "my_dataset_validation" doesn't either ... Why not always "train" unless specified? | ## Describe the bug
As reported by @thomasw21 and @lhoestq, a dataset containing only a data file whose name matches the pattern `*dev*` will be returned as VALIDATION split, even if this is not the desired behavior, e.g. a file named `datosdevision.jsonl.gz`. | 37 | Only-data dataset loaded unexpectedly as validation split
## Describe the bug
As reported by @thomasw21 and @lhoestq, a dataset containing only a data file whose name matches the pattern `*dev*` will be returned as VALIDATION split, even if this is not the desired behavior, e.g. a file named `datosdevision.jsonl.gz`... | [
-0.127433449, 0.0463366397, 0.0139259519, -0.067193687, 0.1907559037, -0.0963911787, 0.5693849325, 0.5711494088, -0.2740080357, -0.0007301643, 0.0903312713, 0.0587165616, -0.1791050434, 0.1223726124, -0.0053799199, 0.2239863425, 0.0502594262, 0.451158762, 0.2187022418, -0.06833...
https://github.com/huggingface/datasets/issues/3788 | Only-data dataset loaded unexpectedly as validation split | The regex is needed as part of our effort to make datasets configurable without code. In particular we define some generic dataset repository structures that users can follow
> ```
> "[^a-zA-Z]*dev[^a-zA-Z]*"
> ```
unfortunately our glob doesn't support "^":
https://github.com/fsspec/filesystem_spec/blob/3e... | ## Describe the bug
As reported by @thomasw21 and @lhoestq, a dataset containing only a data file whose name matches the pattern `*dev*` will be returned as VALIDATION split, even if this is not the desired behavior, e.g. a file named `datosdevision.jsonl.gz`. | 41 | Only-data dataset loaded unexpectedly as validation split
## Describe the bug
As reported by @thomasw21 and @lhoestq, a dataset containing only a data file whose name matches the pattern `*dev*` will be returned as VALIDATION split, even if this is not the desired behavior, e.g. a file named `datosdevision.jsonl.gz`... | [
-0.2896316051, 0.1827499419, 0.0006295583, 0.1775887907, 0.165361762, -0.2184229046, 0.2060532868, 0.5261226892, -0.1824196428, -0.0074534472, -0.0446323864, 0.0348761715, -0.055594068, 0.2794866562, -0.05539551, 0.3991408348, -0.0021834683, 0.3991588354, -0.0350153521, -0.0411...
https://github.com/huggingface/datasets/issues/3788 | Only-data dataset loaded unexpectedly as validation split | > "my_dataset_dev" doesn't match your regex, "my_dataset_validation" doesn't either ... Why not always "train" unless specified?
And `my_dataset_dev.foo` would match the pattern, and we also have the same pattern but for the "validation" keyword so `my_dataset_validation.foo` would work too | ## Describe the bug
As reported by @thomasw21 and @lhoestq, a dataset containing only a data file whose name matches the pattern `*dev*` will be returned as VALIDATION split, even if this is not the desired behavior, e.g. a file named `datosdevision.jsonl.gz`. | 39 | Only-data dataset loaded unexpectedly as validation split
## Describe the bug
As reported by @thomasw21 and @lhoestq, a dataset containing only a data file whose name matches the pattern `*dev*` will be returned as VALIDATION split, even if this is not the desired behavior, e.g. a file named `datosdevision.jsonl.gz`... | [
-0.2208982408, 0.085726805, -0.034769129, 0.1135791466, 0.1809140295, -0.0996529832, 0.3154057562, 0.5672200918, -0.1893925816, -0.0838951394, 0.1645898968, 0.208800897, -0.1298322976, 0.1652339101, -0.0917871296, 0.2014841288, -0.0027632217, 0.3495737314, 0.0560584106, -0.0532...
https://github.com/huggingface/datasets/issues/3788 | Only-data dataset loaded unexpectedly as validation split | > The regex is needed as part of our effort to make datasets configurable without code
This feels like coding with the filename ^^' | ## Describe the bug
As reported by @thomasw21 and @lhoestq, a dataset containing only a data file whose name matches the pattern `*dev*` will be returned as VALIDATION split, even if this is not the desired behavior, e.g. a file named `datosdevision.jsonl.gz`. | 24 | Only-data dataset loaded unexpectedly as validation split
## Describe the bug
As reported by @thomasw21 and @lhoestq, a dataset containing only a data file whose name matches the pattern `*dev*` will be returned as VALIDATION split, even if this is not the desired behavior, e.g. a file named `datosdevision.jsonl.gz`... | [
-0.2534263432, 0.137466535, -0.063353397, 0.1325034201, 0.1538755149, -0.1122745723, 0.2060326934, 0.5831243396, -0.1359725744, -0.0397470072, 0.017430231, 0.1966339797, -0.0144108729, 0.1911669821, -0.0082555879, 0.3433130682, -0.0202845931, 0.3917874992, 0.1030329913, -0.0527...
https://github.com/huggingface/datasets/issues/3788 | Only-data dataset loaded unexpectedly as validation split | This is still much easier than having to write a full dataset script right ? :p | ## Describe the bug
As reported by @thomasw21 and @lhoestq, a dataset containing only a data file whose name matches the pattern `*dev*` will be returned as VALIDATION split, even if this is not the desired behavior, e.g. a file named `datosdevision.jsonl.gz`. | 16 | Only-data dataset loaded unexpectedly as validation split
## Describe the bug
As reported by @thomasw21 and @lhoestq, a dataset containing only a data file whose name matches the pattern `*dev*` will be returned as VALIDATION split, even if this is not the desired behavior, e.g. a file named `datosdevision.jsonl.gz`... | [
-0.3309621811, 0.0703856722, -0.1060947105, 0.1277537197, 0.192970559, -0.1053088158, 0.280169934, 0.5553884506, -0.0073975134, 0.1058118641, 0.1400856078, 0.269392252, 0.0448251739, 0.2297161669, 0.014962987, 0.283696115, -0.036891643, 0.3354948163, -0.103077665, -0.0959970728...
https://github.com/huggingface/datasets/issues/3786 | Bug downloading Virus scan warning page from Google Drive URLs | Once the PR merged into master and until our next `datasets` library release, you can get this fix by installing our library from the GitHub master branch:
```shell
pip install git+https://github.com/huggingface/datasets#egg=datasets
```
Then, if you had previously tried to load the data and got the checksum error,... | ## Describe the bug
Recently, some issues were reported with URLs from Google Drive, where we were downloading the Virus scan warning page instead of the data file itself.
See:
- #3758
- #3773
- #3784
| 77 | Bug downloading Virus scan warning page from Google Drive URLs
## Describe the bug
Recently, some issues were reported with URLs from Google Drive, where we were downloading the Virus scan warning page instead of the data file itself.
See:
- #3758
- #3773
- #3784
Once the PR merged into master and until o... | [
-0.1411157697, 0.0148484437, -0.0489370674, 0.2452072054, 0.2049025148, 0.0718998313, 0.1173322275, 0.2077255547, 0.3714020252, 0.1962259561, -0.0660761669, -0.1588381827, 0.1040249243, 0.3147511184, 0.0350951627, 0.0102435993, -0.0000052156, -0.0964626968, 0.1057200879, 0.1173...
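The Google Drive "virus scan warning" fix referenced in this issue amounts to confirming the download up front instead of receiving the interstitial page. A hedged sketch of the URL rewrite involved, using only the stdlib; the exact parameter Google expects can change over time, so `confirm=t` here is an assumption, and `FILE_ID` is a placeholder:

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def add_drive_confirm(url: str) -> str:
    """Append a confirm token so Google Drive skips the
    'can't scan this file for viruses' warning page.
    ('confirm=t' is an assumption about Drive's current behavior.)"""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query["confirm"] = "t"
    return urlunparse(parts._replace(query=urlencode(query)))

url = "https://drive.google.com/uc?export=download&id=FILE_ID"
print(add_drive_confirm(url))
```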
https://github.com/huggingface/datasets/issues/3784 | Unable to Download CNN-Dailymail Dataset | Glad to help @albertvillanova! Just fine-tuning the PR, will comment once I am able to get it up and running 😀 | ## Describe the bug
I am unable to download the CNN-Dailymail dataset. Upon closer investigation, I realised why this was happening:
- The dataset sits in Google Drive, and both the CNN and DM datasets are large.
- Google is unable to scan the folder for viruses, **so the link which would originally download the dat... | 21 | Unable to Download CNN-Dailymail Dataset
## Describe the bug
I am unable to download the CNN-Dailymail dataset. Upon closer investigation, I realised why this was happening:
- The dataset sits in Google Drive, and both the CNN and DM datasets are large.
- Google is unable to scan the folder for viruses, **so the ... | [
-0.090477027, 0.0638364255, -0.0650130361, 0.3773009479, 0.2484724224, 0.2562351823, 0.3084614575, 0.1352739632, -0.1773421466, 0.1261962652, -0.0741629973, -0.0172218177, -0.2994994819, 0.0711786002, 0.08124163, -0.0136416834, -0.0586402081, -0.2016190141, -0.0625697374, -0.07...
https://github.com/huggingface/datasets/issues/3778 | Not be able to download dataset - "Newsroom" | Hi @Darshan2104, thanks for reporting.
Please note that at Hugging Face we do not host the data of this dataset, but just a loading script pointing to the host of the data owners.
Apparently the data owners changed their data host server. After googling it, I found their new website at: https://lil.nlp.cornell.ed... | Hello,
I tried to download the **newsroom** dataset but it didn't work out for me. It told me to **download it manually**!
The manual download link didn't work either! It just shows some ad or something!
If anybody has solved this issue please help me out, or if somebody has this dataset please share your google driv...
Hello,
I tried to download the **newsroom** dataset but it didn't work out for me. it said me to **download it manually**!
For manually, Link is also didn't work! It is sawing some ad or something!
If anybody has solved this issue please help me out or if somebody... | [
-0.2675008774, 0.3221225739, -0.0054875379, 0.2053161711, 0.1594457477, 0.2996753454, 0.0258155148, 0.3109139204, 0.1214273348, -0.0735157728, -0.0882345513, -0.1992523074, -0.1055620387, 0.2716182768, 0.2644298971, -0.192416802, -0.0738764182, -0.0039597843, 0.3711563349, 0.03...
https://github.com/huggingface/datasets/issues/3776 | Allow download only some files from the Wikipedia dataset | Hi @jvanz, thank you for your proposal.
In fact, we are aware that the problem you mention is very common. Because of that, we are currently working on implementing a new version of wikipedia on the Hub, with all data preprocessed (no need to use Apache Beam), from where you will be able to use `data_files` to lo...
The Wikipedia dataset can be really big. This is a problem if you want to use it locally in a laptop with the Apache Beam `DirectRunner`. Even if your laptop have a considerable amount of memory (e.g. 32gb).
**Describe the solution you'd like**
I... | 70 | Allow download only some files from the Wikipedia dataset
**Is your feature request related to a problem? Please describe.**
The Wikipedia dataset can be really big. This is a problem if you want to use it locally in a laptop with the Apache Beam `DirectRunner`. Even if your laptop have a considerable amount of memo... | [
-0.2116208971, 0.0504853055, -0.1068822518, 0.3503215313, -0.0453148112, 0.1091505811, 0.023389224, 0.6189420223, 0.4418042302, 0.1188722625, -0.095998548, 0.1892415732, 0.0542925335, -0.0541350581, 0.0770271868, -0.1883919984, -0.0352482721, 0.0193045847, -0.0228975657, -0.185...
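The `data_files` idea mentioned in the comment above could look something like the following once a preprocessed Wikipedia lives on the Hub as sharded files. The shard naming scheme below is invented purely for illustration; only the selection mechanism is the point:

```python
# Hypothetical sketch: select only a few shards of a sharded dataset instead
# of downloading the whole language dump. File names are illustrative.
language, date, num_shards = "pt", "20220301", 10

data_files = {
    "train": [
        f"data/{date}.{language}/train-{i:05d}-of-{num_shards:05d}.parquet"
        for i in range(2)  # only the first 2 of 10 shards
    ]
}
print(data_files["train"][0])
# load_dataset("parquet", data_files=data_files) would then fetch only these
# files, keeping memory and disk usage laptop-friendly.
```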
https://github.com/huggingface/datasets/issues/3773 | Checksum mismatch for the reddit_tifu dataset | @albertvillanova Thank you for the fast response! However I am still getting the same error:
Downloading: 2.23kB [00:00, ?B/s]
Traceback (most recent call last):
File "C:\Users\Anna\PycharmProjects\summarization\main.py", line 17, in <module>
dataset = load_dataset('reddit_tifu', 'long')
File "C:\Users\A... | ## Describe the bug
A checksum occurs when downloading the reddit_tifu data (both long & short).
## Steps to reproduce the bug
reddit_tifu_dataset = load_dataset('reddit_tifu', 'long')
## Expected results
The expected result is for the dataset to be downloaded and cached locally.
## Actual results
File "... | 112 | Checksum mismatch for the reddit_tifu dataset
## Describe the bug
A checksum occurs when downloading the reddit_tifu data (both long & short).
## Steps to reproduce the bug
reddit_tifu_dataset = load_dataset('reddit_tifu', 'long')
## Expected results
The expected result is for the dataset to be downloaded an... | [
-0.2612463832, 0.3078198731, -0.0347205102, 0.2932790816, 0.2877587378, -0.0153631, 0.0771327019, 0.4611118734, 0.0030726143, 0.0499788523, -0.0786116123, 0.1378066242, 0.423630923, 0.1407800168, 0.0997835696, 0.0043242374, 0.1717595756, -0.050648015, -0.0111731989, 0.023379590...
https://github.com/huggingface/datasets/issues/3773 | Checksum mismatch for the reddit_tifu dataset | Hi @anna-kay, I'm sorry I didn't clearly explain the details to you:
- the error has been fixed in our `master` branch on GitHub: https://github.com/huggingface/datasets/commit/8ae21bf6a77175dc803ce2f1b93d18b8fbf45586
- the fix will not be accessible to users in PyPI until our next release of the `datasets` library
... | ## Describe the bug
A checksum occurs when downloading the reddit_tifu data (both long & short).
## Steps to reproduce the bug
reddit_tifu_dataset = load_dataset('reddit_tifu', 'long')
## Expected results
The expected result is for the dataset to be downloaded and cached locally.
## Actual results
File "... | 79 | Checksum mismatch for the reddit_tifu dataset
## Describe the bug
A checksum occurs when downloading the reddit_tifu data (both long & short).
## Steps to reproduce the bug
reddit_tifu_dataset = load_dataset('reddit_tifu', 'long')
## Expected results
The expected result is for the dataset to be downloaded an... | [
-0.1900519282, 0.1128553227, -0.0797176585, 0.1964417845, 0.3247572184, -0.0811685622, 0.1428150088, 0.4333872795, -0.0206153635, 0.0688304678, -0.0649134368, 0.1989684403, 0.4393796325, 0.1943296492, 0.0490709879, 0.0611957014, 0.1334781796, -0.0215585493, -0.0648425519, 0.032...
https://github.com/huggingface/datasets/issues/3769 | `dataset = dataset.map()` causes faiss index lost | Hi ! Indeed `map` is dropping the index right now, because one can create a dataset with more or fewer rows using `map` (and therefore the index might not be relevant anymore)
I guess we could check the resulting dataset length, and if the user hasn't changed the dataset size we could keep the index, what do you thi... | ## Describe the bug
assigning the resulting dataset back to the original dataset causes loss of the faiss index
## Steps to reproduce the bug
`my_dataset` is a regular loaded dataset. It's part of a custom dataset structure
```python
self.dataset.add_faiss_index('embeddings')
self.dataset.list_indexes()
# ['embeddin... | 60 | `dataset = dataset.map()` causes faiss index lost
## Describe the bug
assigning the resulting dataset back to the original dataset causes loss of the faiss index
## Steps to reproduce the bug
`my_dataset` is a regular loaded dataset. It's part of a custom dataset structure
```python
self.dataset.add_faiss_index('emb... | [
-0.1911484003, -0.0692250505, -0.0801808313, 0.2704692185, 0.0758192837, 0.3400020301, 0.219092533, 0.2129372805, 0.7335777879, 0.2396205366, -0.0468455814, 0.4691384137, 0.0690716878, -0.3312780559, -0.0963518769, 0.1843682826, 0.2518341243, 0.1134471446, 0.0620871782, -0.1465...
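The maintainer's point that `map` can change the number of rows, invalidating a row-aligned faiss index, can be illustrated with plain Python (this is not the datasets/faiss API, just the underlying alignment problem):

```python
# An index is aligned to row positions; a map() that adds or removes rows
# silently invalidates it, which is why dropping the index is the safe default.
rows = ["a", "b", "c"]
index = {i: row for i, row in enumerate(rows)}  # row-position -> payload "index"

mapped = [row for row in rows if row != "b"]    # a map step that drops a row

print(index[1], mapped[1])   # index still points at "b", but row 1 is now "c"
stale = index[1] != mapped[1]
print(stale)                 # the old index no longer lines up
```

In practice this means re-adding the index after `map` (e.g. calling `add_faiss_index` again on the mapped dataset) is the workaround until the library can detect that the row count was unchanged.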
https://github.com/huggingface/datasets/issues/3763 | It's not possible download `20200501.pt` dataset | Hi @jvanz, thanks for reporting.
Please note that Wikimedia website does not longer host Wikipedia dumps for so old dates.
For a list of accessible dump dates of `pt` Wikipedia, please see: https://dumps.wikimedia.org/ptwiki/
You can load for example `20220220` `pt` Wikipedia:
```python
dataset = load_datase... | ## Describe the bug
The dataset `20200501.pt` is broken.
The available datasets: https://dumps.wikimedia.org/ptwiki/
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("wikipedia", "20200501.pt", beam_runner='DirectRunner')
```
## Expected results
I expect t... | 49 | It's not possible download `20200501.pt` dataset
## Describe the bug
The dataset `20200501.pt` is broken.
The available datasets: https://dumps.wikimedia.org/ptwiki/
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("wikipedia", "20200501.pt", beam_runner='Dir... | [
-0.3223347962, -0.1315499097, -0.0983523205, 0.3211584985, 0.2504530549, 0.1500659734, 0.049723804, 0.4589627385, 0.092519775, 0.1612986773, 0.1003177986, 0.3547094464, 0.017580485, 0.1351880282, -0.0279240459, -0.4124293029, 0.0741506591, 0.0216097198, -0.1954785287, 0.0282271...
https://github.com/huggingface/datasets/issues/3763 | It's not possible download `20200501.pt` dataset | > ```python
> dataset = load_dataset("wikipedia", language="pt", date="20220220", beam_runner="DirectRunner")
> ```
Thank you! I did not know that I could do this. I was following the example in the error message shown when I do not define which language dataset I'm trying to download.
I've tried something similar chan... | ## Describe the bug
The dataset `20200501.pt` is broken.
The available datasets: https://dumps.wikimedia.org/ptwiki/
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("wikipedia", "20200501.pt", beam_runner='DirectRunner')
```
## Expected results
I expect t... | 85 | It's not possible download `20200501.pt` dataset
## Describe the bug
The dataset `20200501.pt` is broken.
The available datasets: https://dumps.wikimedia.org/ptwiki/
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("wikipedia", "20200501.pt", beam_runner='Dir... | [
-0.3223347962,
-0.1315499097,
-0.0983523205,
0.3211584985,
0.2504530549,
0.1500659734,
0.049723804,
0.4589627385,
0.092519775,
0.1612986773,
0.1003177986,
0.3547094464,
0.017580485,
0.1351880282,
-0.0279240459,
-0.4124293029,
0.0741506591,
0.0216097198,
-0.1954785287,
0.0282271... |
https://github.com/huggingface/datasets/issues/3762 | `Dataset.class_encode` should support custom class names | Hi @Dref360, thanks a lot for your proposal.
It totally makes sense to have more flexibility when class encoding, I agree.
You could even further customize the class encoding by passing an instance of `ClassLabel` itself (instead of replicating `ClassLabel` instantiation arguments as `Dataset.class_encode_column`... | I can make a PR, just wanted approval before starting.
**Is your feature request related to a problem? Please describe.**
It is often the case that classes are not in alphabetical order. The current `class_encode_column` sorts the classes before indexing.
https://github.com/huggingface/datasets/blob/master/sr... | 62 | `Dataset.class_encode` should support custom class names
I can make a PR, just wanted approval before starting.
**Is your feature request related to a problem? Please describe.**
It is often the case that classes are not in alphabetical order. The current `class_encode_column` sorts the classes before indexi... | [
0.2142816782,
0.1198458225,
0.037366908,
0.1522609591,
0.4958594441,
0.2522532046,
0.2703294754,
0.1193562597,
-0.0080505349,
0.1555159837,
0.0997143164,
0.6819637418,
-0.2078863084,
0.1502426863,
-0.0019546931,
-0.381578207,
-0.0769403875,
0.1150641069,
0.0382727794,
-0.126637... |
https://github.com/huggingface/datasets/issues/3762 | `Dataset.class_encode` should support custom class names | Hi @Dref360! You can use [`Dataset.align_labels_with_mapping`](https://huggingface.co/docs/datasets/master/package_reference/main_classes.html#datasets.Dataset.align_labels_with_mapping) after `Dataset.class_encode_column` to assign a different mapping of labels to ids.
@albertvillanova I'd like to avoid adding more... | I can make a PR, just wanted approval before starting.
**Is your feature request related to a problem? Please describe.**
It is often the case that classes are not in alphabetical order. The current `class_encode_column` sorts the classes before indexing.
https://github.com/huggingface/datasets/blob/master/sr... | 47 | `Dataset.class_encode` should support custom class names
I can make a PR, just wanted approval before starting.
**Is your feature request related to a problem? Please describe.**
It is often the case that classes are not in alphabetical order. The current `class_encode_column` sorts the classes before indexi... | [
0.1994744539,
-0.011508828,
0.0501547083,
0.1014922857,
0.3611449003,
0.2609219551,
0.216787979,
0.0255505983,
0.0818392411,
0.1173066869,
0.0086970218,
0.6595108509,
-0.1692370623,
0.1270328313,
0.0471642241,
-0.3817214668,
-0.0801749378,
0.0698388517,
0.1967081875,
-0.0349071... |
https://github.com/huggingface/datasets/issues/3761 | Know your data for HF hub | Hi @Muhtasham you should take a look at https://huggingface.co/blog/data-measurements-tool and accompanying demo app at https://huggingface.co/spaces/huggingface/data-measurements-tool
We would be interested in your feedback. cc @meg-huggingface @sashavor @yjernite | **Is your feature request related to a problem? Please describe.**
Would be great to be able to understand datasets, with the goal of improving data quality and helping mitigate fairness and bias issues.
**Describe the solution you'd like**
Something like https://knowyourdata.withgoogle.com/ for HF hub | 26 | Know your data for HF hub
**Is your feature request related to a problem? Please describe.**
Would be great to be able to understand datasets, with the goal of improving data quality and helping mitigate fairness and bias issues.
**Describe the solution you'd like**
Something like https://knowyourdata.withg... | [
-0.3963146806,
-0.0736383349,
-0.1237246543,
0.1050194204,
0.0630106181,
-0.0145143811,
0.2699152827,
0.1131124198,
0.1065848023,
0.2636403143,
-0.3267324865,
-0.1532196701,
0.0692762211,
0.4747779667,
-0.0134505834,
-0.1142124981,
-0.2324718088,
0.1944876462,
0.0803603828,
-0.... |
https://github.com/huggingface/datasets/issues/3760 | Unable to view the Gradio flagged call back dataset | Hi @kingabzpro.
I think you need to create a loading script that creates the dataset from the CSV file and the image paths.
As an example, you could have a look at the Food-101 dataset: https://huggingface.co/datasets/food101
- Loading script: https://huggingface.co/datasets/food101/blob/main/food101.py
Once the... | ## Dataset viewer issue for '*savtadepth-flags*'
**Link:** *[savtadepth-flags](https://huggingface.co/datasets/kingabzpro/savtadepth-flags)*
*with Gradio 2.8.1 the dataset viewer stopped working. I tried to add values manually but it's not working. The dataset is also not showing the link with the app https://h... | 54 | Unable to view the Gradio flagged call back dataset
## Dataset viewer issue for '*savtadepth-flags*'
**Link:** *[savtadepth-flags](https://huggingface.co/datasets/kingabzpro/savtadepth-flags)*
*with Gradio 2.8.1 the dataset viewer stopped working. I tried to add values manually but it's not working. The datas... | [
-0.223268941,
0.2325305343,
-0.0110236229,
0.2548342645,
0.3060579896,
0.0665645227,
0.2348560393,
0.1697472632,
0.2746147215,
-0.0533599406,
-0.1771443486,
-0.0381931439,
0.0027865963,
0.4267014563,
0.2530887127,
0.0522453599,
-0.153499186,
0.1182006449,
0.1701424569,
0.012586... |
https://github.com/huggingface/datasets/issues/3760 | Unable to view the Gradio flagged call back dataset | @albertvillanova I don't think this is the issue. I have created another dataset with similar files and format and it works. https://huggingface.co/datasets/kingabzpro/savtadepth-flags-V2 | ## Dataset viewer issue for '*savtadepth-flags*'
**Link:** *[savtadepth-flags](https://huggingface.co/datasets/kingabzpro/savtadepth-flags)*
*with Gradio 2.8.1 the dataset viewer stopped working. I tried to add values manually but it's not working. The dataset is also not showing the link with the app https://h... | 22 | Unable to view the Gradio flagged call back dataset
## Dataset viewer issue for '*savtadepth-flags*'
**Link:** *[savtadepth-flags](https://huggingface.co/datasets/kingabzpro/savtadepth-flags)*
*with Gradio 2.8.1 the dataset viewer stopped working. I tried to add values manually but it's not working. The datas... | [
-0.2301910073,
0.2327462584,
0.0179117899,
0.2585228384,
0.3205563724,
0.0420707278,
0.2842933834,
0.1822576374,
0.1492127776,
-0.0256717186,
-0.2615859509,
-0.042633757,
-0.0275215041,
0.2540270984,
0.2023277283,
0.1306377351,
-0.1105965972,
0.0396924876,
0.4807675779,
-0.0408... |
https://github.com/huggingface/datasets/issues/3760 | Unable to view the Gradio flagged call back dataset | Yes, you are right, that was not the issue.
Just take into account that sometimes the viewer can take some time until it shows the preview of the dataset.
After some time, yours is finally properly shown: https://huggingface.co/datasets/kingabzpro/savtadepth-flags | ## Dataset viewer issue for '*savtadepth-flags*'
**Link:** *[savtadepth-flags](https://huggingface.co/datasets/kingabzpro/savtadepth-flags)*
*with Gradio 2.8.1 the dataset viewer stopped working. I tried to add values manually but it's not working. The dataset is also not showing the link with the app https://h... | 38 | Unable to view the Gradio flagged call back dataset
## Dataset viewer issue for '*savtadepth-flags*'
**Link:** *[savtadepth-flags](https://huggingface.co/datasets/kingabzpro/savtadepth-flags)*
*with Gradio 2.8.1 the dataset viewer stopped working. I tried to add values manually but it's not working. The datas... | [
-0.2451807112,
0.2172700167,
-0.0100676967,
0.1485990882,
0.3047355413,
-0.0232698452,
0.3388379514,
0.1904138774,
0.1421711594,
-0.0403717235,
-0.2485891879,
-0.0416520536,
0.0424754992,
0.3279090822,
0.1481870711,
0.0801423565,
-0.1649891138,
-0.0025607969,
0.3318659961,
-0.0... |
https://github.com/huggingface/datasets/issues/3760 | Unable to view the Gradio flagged call back dataset | The problem was resolved by deleting the dataset, creating a new one with a similar name, and then clicking on the flag button. | ## Dataset viewer issue for '*savtadepth-flags*'
**Link:** *[savtadepth-flags](https://huggingface.co/datasets/kingabzpro/savtadepth-flags)*
*with Gradio 2.8.1 the dataset viewer stopped working. I tried to add values manually but it's not working. The dataset is also not showing the link with the app https://h... | 21 | Unable to view the Gradio flagged call back dataset
## Dataset viewer issue for '*savtadepth-flags*'
**Link:** *[savtadepth-flags](https://huggingface.co/datasets/kingabzpro/savtadepth-flags)*
*with Gradio 2.8.1 the dataset viewer stopped working. I tried to add values manually but it's not working. The datas... | [
-0.2355168015,
0.2943939269,
0.0060478165,
0.2653230429,
0.2650963366,
0.063776195,
0.3267031908,
0.2095132619,
0.1700176895,
-0.0364791006,
-0.2569232583,
0.0024184452,
-0.0134502603,
0.2589089572,
0.2008358538,
0.1542172432,
-0.1398296505,
0.0278553609,
0.4430781901,
-0.03080... |
https://github.com/huggingface/datasets/issues/3758 | head_qa file missing | We usually find issues with files hosted on Google Drive...
In this case, we download the Google Drive virus scan warning instead of the data file. | ## Describe the bug
A file for the `head_qa` dataset is missing (https://drive.google.com/u/0/uc?export=download&id=1a_95N5zQQoUCq8IBNVZgziHbeM-QxG2t/HEAD_EN/train_HEAD_EN.json)
## Steps to reproduce the bug
```python
>>> from datasets import load_dataset
>>> load_dataset("head_qa", name="en")
```
## Expec... | 26 | head_qa file missing
## Describe the bug
A file for the `head_qa` dataset is missing (https://drive.google.com/u/0/uc?export=download&id=1a_95N5zQQoUCq8IBNVZgziHbeM-QxG2t/HEAD_EN/train_HEAD_EN.json)
## Steps to reproduce the bug
```python
>>> from datasets import load_dataset
>>> load_dataset("head_qa", name... | [
-0.1781004518,
-0.0522131622,
-0.0929788575,
0.2617266476,
0.3136710227,
0.3797074258,
0.194157809,
0.4621250331,
0.1194370538,
0.2630708516,
0.1950955391,
0.0892443806,
0.1379038543,
-0.0750948042,
0.3748985529,
-0.2883261144,
0.0637705922,
0.2063560784,
-0.2365850359,
-0.0722... |
https://github.com/huggingface/datasets/issues/3756 | Images get decoded when using `map()` with `input_columns` argument on a dataset | Hi! If I'm not mistaken, this behavior is intentional, but I agree it could be more intuitive.
@albertvillanova Do you remember why you decided not to decode columns in the `Audio` feature PR when `input_columns` is not `None`? IMO we should decode those columns, and we don't even have to use lazy structures here be... | ## Describe the bug
The `datasets.features.Image` feature class decodes image data by default. Expectedly, when indexing a dataset or using the `map()` method, images are returned as PIL Image instances.
However, when calling `map()` and setting a specific data column with the `input_columns` argument, the image ... | 68 | Images get decoded when using `map()` with `input_columns` argument on a dataset
## Describe the bug
The `datasets.features.Image` feature class decodes image data by default. Expectedly, when indexing a dataset or using the `map()` method, images are returned as PIL Image instances.
However, when calling `map(... | [
-0.0711214542,
-0.1942403615,
-0.0474800169,
0.4888620079,
0.6329941154,
0.1564103067,
0.3611623347,
0.2503560781,
0.1487884969,
0.057597544,
-0.1527496129,
0.6728749871,
0.0777886584,
-0.4829203784,
-0.2325557768,
-0.2273544073,
0.1242544949,
0.2687866688,
-0.2417322844,
-0.15... |
https://github.com/huggingface/datasets/issues/3756 | Images get decoded when using `map()` with `input_columns` argument on a dataset | I think I chose not to decorate the function when `input_columns` were passed, as a quick fix for some non-passing tests:
- https://github.com/huggingface/datasets/pull/2324/commits/9d7c3e8fa53e23ec636859b4407eeec904b1b3f9
That PR was quite complex and I decided to focus on the main feature requests, leaving refinem... | ## Describe the bug
The `datasets.features.Image` feature class decodes image data by default. Expectedly, when indexing a dataset or using the `map()` method, images are returned as PIL Image instances.
However, when calling `map()` and setting a specific data column with the `input_columns` argument, the image ... | 92 | Images get decoded when using `map()` with `input_columns` argument on a dataset
## Describe the bug
The `datasets.features.Image` feature class decodes image data by default. Expectedly, when indexing a dataset or using the `map()` method, images are returned as PIL Image instances.
However, when calling `map(... | [
-0.0805242062,
-0.2047774345,
-0.0520251952,
0.4860448241,
0.5501019955,
0.1830330342,
0.336760968,
0.3066670299,
0.1687326133,
-0.024067495,
-0.0182181764,
0.6525537968,
0.0815848932,
-0.3871563375,
-0.2242294103,
-0.2102147043,
0.1314524561,
0.278316319,
-0.2537817657,
-0.146... |
https://github.com/huggingface/datasets/issues/3755 | Cannot preview dataset | Thanks for reporting. The dataset viewer depends on some backend processing, which for now might take some hours to complete. We're working on improving it. | ## Dataset viewer issue for '*rubrix/news*'
**Link:https://huggingface.co/datasets/rubrix/news** *link to the dataset viewer page*
Cannot see the dataset preview:
```
Status code: 400
Exception: Status400Error
Message: Not found. Cache is waiting to be refreshed.
```
Am I the one who added thi... | 27 | Cannot preview dataset
## Dataset viewer issue for '*rubrix/news*'
**Link:https://huggingface.co/datasets/rubrix/news** *link to the dataset viewer page*
Cannot see the dataset preview:
```
Status code: 400
Exception: Status400Error
Message: Not found. Cache is waiting to be refreshed.
```
A... | [
-0.303106606,
-0.2469096929,
-0.0138340583,
0.3150568604,
0.1146640852,
0.3597428501,
0.1062545702,
0.4012054801,
0.0818885267,
0.0675458312,
-0.1898425519,
0.1830478162,
0.0273651537,
-0.1792662144,
0.1113655269,
-0.0800133497,
0.0722588599,
0.1015293971,
-0.2903139293,
0.1101... |
https://github.com/huggingface/datasets/issues/3753 | Expanding streaming capabilities | Cool! `filter` will be very useful. There can be a filter that you can apply on a streaming dataset:
```python
load_dataset(..., streaming=True).filter(lambda x: x["lang"] == "sw")
```
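The lazy behavior such a `filter` needs can be sketched in plain Python, independent of the `datasets` library (the generator and the sample rows below are illustrative assumptions, not the actual implementation):

```python
def lazy_filter(rows, predicate):
    # Yield only the rows matching the predicate, one at a time,
    # without materializing the whole stream in memory.
    for row in rows:
        if predicate(row):
            yield row

# Illustrative stand-in for a streamed multilingual dataset.
stream = iter([
    {"lang": "sw", "text": "habari"},
    {"lang": "en", "text": "hello"},
    {"lang": "sw", "text": "asante"},
])

swahili = list(lazy_filter(stream, lambda x: x["lang"] == "sw"))
print(swahili)  # only the "sw" rows survive
```

The same predicate shape would apply to a streaming-dataset `filter`, while filtering the source files themselves (the second case discussed here) has to happen before the stream is built.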
Otherwise, if you want to apply a filter on the source files that are going to be used for streaming, the logic has to be impleme... | Some ideas for a few features that could be useful when working with large datasets in streaming mode.
## `filter` for `IterableDataset`
Adding filtering to streaming datasets would be useful in several scenarios:
- filter a dataset with many languages for a subset of languages
- filter a dataset for specific li... | 160 | Expanding streaming capabilities
Some ideas for a few features that could be useful when working with large datasets in streaming mode.
## `filter` for `IterableDataset`
Adding filtering to streaming datasets would be useful in several scenarios:
- filter a dataset with many languages for a subset of languages
... | [
-0.6017636657,
-0.3428730369,
-0.1772567183,
-0.1217319965,
0.095033817,
-0.029509645,
0.0078959418,
0.4271878004,
0.2996822,
0.1316366345,
-0.246367231,
0.1956981421,
-0.3171160221,
0.5236800313,
0.1842929125,
-0.2316042334,
0.0046889852,
-0.063887991,
0.0321256034,
-0.0199252... |
https://github.com/huggingface/datasets/issues/3753 | Expanding streaming capabilities | Regarding conversion, I'd also ask for some kind of equivalent to `save_to_disk` for an `IterableDataset`.
Similarly to the streaming to hub idea, my use case would be to define a sequence of dataset transforms via `.map()`, using an `IterableDataset` as the input (so processing could start without doing whole downl... | Some ideas for a few features that could be useful when working with large datasets in streaming mode.
## `filter` for `IterableDataset`
Adding filtering to streaming datasets would be useful in several scenarios:
- filter a dataset with many languages for a subset of languages
- filter a dataset for specific li... | 60 | Expanding streaming capabilities
Some ideas for a few features that could be useful when working with large datasets in streaming mode.
## `filter` for `IterableDataset`
Adding filtering to streaming datasets would be useful in several scenarios:
- filter a dataset with many languages for a subset of languages
... | [
-0.6017636657,
-0.3428730369,
-0.1772567183,
-0.1217319965,
0.095033817,
-0.029509645,
0.0078959418,
0.4271878004,
0.2996822,
0.1316366345,
-0.246367231,
0.1956981421,
-0.3171160221,
0.5236800313,
0.1842929125,
-0.2316042334,
0.0046889852,
-0.063887991,
0.0321256034,
-0.0199252... |
https://github.com/huggingface/datasets/issues/3753 | Expanding streaming capabilities | That makes sense @athewsey, thanks for the suggestion :)
Maybe instead of `to_disk` we could simply have `save_to_disk`:
```python
streaming_dataset.save_to_disk("path/to/my/dataset/dir")
on_disk_dataset = load_from_disk("path/to/my/dataset/dir")
in_memory_dataset = Dataset.from_list(list(streamin... | Some ideas for a few features that could be useful when working with large datasets in streaming mode.
## `filter` for `IterableDataset`
Adding filtering to streaming datasets would be useful in several scenarios:
- filter a dataset with many languages for a subset of languages
- filter a dataset for specific li... | 38 | Expanding streaming capabilities
Some ideas for a few features that could be useful when working with large datasets in streaming mode.
## `filter` for `IterableDataset`
Adding filtering to streaming datasets would be useful in several scenarios:
- filter a dataset with many languages for a subset of languages
... | [
-0.6017636657,
-0.3428730369,
-0.1772567183,
-0.1217319965,
0.095033817,
-0.029509645,
0.0078959418,
0.4271878004,
0.2996822,
0.1316366345,
-0.246367231,
0.1956981421,
-0.3171160221,
0.5236800313,
0.1842929125,
-0.2316042334,
0.0046889852,
-0.063887991,
0.0321256034,
-0.0199252... |
https://github.com/huggingface/datasets/issues/3739 | Pubmed dataset does not work in streaming mode | Thanks for reporting, @abhi-mosaic (related to #3655).
Please note that `xml.etree.ElementTree.parse` already supports streaming:
- #3476
No need to refactor to use `open`/`xopen`. It is enough to import the package `as ET` (instead of `as etree`). | ## Describe the bug
Trying to use the `pubmed` dataset with `streaming=True` fails.
## Steps to reproduce the bug
```python
import datasets
pubmed_train = datasets.load_dataset('pubmed', split='train', streaming=True)
print(next(iter(pubmed_train)))
```
## Expected results
I would expect to see the first ... | 36 | Pubmed dataset does not work in streaming mode
## Describe the bug
Trying to use the `pubmed` dataset with `streaming=True` fails.
## Steps to reproduce the bug
```python
import datasets
pubmed_train = datasets.load_dataset('pubmed', split='train', streaming=True)
print(next(iter(pubmed_train)))
```
## E... | [
-0.2567091584,
-0.107677184,
0.0392324403,
0.0182673112,
0.3081787229,
-0.029495867,
0.3082026243,
0.311396122,
-0.0702125132,
0.0274794362,
0.1031503826,
0.2790320516,
-0.1545047164,
0.1329765916,
0.0741169825,
-0.1817835569,
0.1080832854,
0.1717883945,
0.0063677346,
-0.006676... |
https://github.com/huggingface/datasets/issues/3738 | For data-only datasets, streaming and non-streaming don't behave the same | Note that we might change the heuristic and create a different config per file, at least in that case. | See https://huggingface.co/datasets/huggingface/transformers-metadata: it only contains two JSON files.
In streaming mode, the files are concatenated, and thus the rows might be dictionaries with different keys:
```python
import datasets as ds
iterable_dataset = ds.load_dataset("huggingface/transformers-metadat... | 19 | For data-only datasets, streaming and non-streaming don't behave the same
See https://huggingface.co/datasets/huggingface/transformers-metadata: it only contains two JSON files.
In streaming mode, the files are concatenated, and thus the rows might be dictionaries with different keys:
```python
import datasets... | [
-0.1471759826,
-0.6080194712,
0.0020138533,
0.2507560253,
0.3594651222,
0.0139179258,
0.1687220484,
0.2110687047,
0.0835059062,
0.081744872,
-0.3420369327,
0.1702126265,
-0.1189065054,
0.3036901653,
0.0937540382,
-0.3056590855,
0.1595321149,
0.1109992564,
-0.2961662412,
-0.0981... |
https://github.com/huggingface/datasets/issues/3738 | For data-only datasets, streaming and non-streaming don't behave the same | Hi @severo, thanks for reporting.
Yes, this happens because in non-streaming mode a cast of all the data is done in order to "concatenate" it into a single dataset (thus the error), while this casting is not done when yielding item by item in streaming mode.
Maybe in streaming mode we should keep the schema (infer... | See https://huggingface.co/datasets/huggingface/transformers-metadata: it only contains two JSON files.
In streaming mode, the files are concatenated, and thus the rows might be dictionaries with different keys:
```python
import datasets as ds
iterable_dataset = ds.load_dataset("huggingface/transformers-metadat... | 74 | For data-only datasets, streaming and non-streaming don't behave the same
See https://huggingface.co/datasets/huggingface/transformers-metadata: it only contains two JSON files.
In streaming mode, the files are concatenated, and thus the rows might be dictionaries with different keys:
```python
import datasets... | [
-0.1483692676,
-0.5971010923,
0.0425095372,
0.3026606143,
0.4449424148,
0.0343069099,
0.0938618034,
0.2274336815,
-0.0198628455,
0.1207607985,
-0.2812955081,
0.163243264,
-0.1369572133,
0.265286684,
0.0114956172,
-0.3093685806,
0.1662296057,
0.1124369428,
-0.3150340915,
-0.0867... |
https://github.com/huggingface/datasets/issues/3738 | For data-only datasets, streaming and non-streaming don't behave the same | Why do we want to concatenate the files? Is it the expected behavior for most datasets that lack a script and dataset info? | See https://huggingface.co/datasets/huggingface/transformers-metadata: it only contains two JSON files.
In streaming mode, the files are concatenated, and thus the rows might be dictionaries with different keys:
```python
import datasets as ds
iterable_dataset = ds.load_dataset("huggingface/transformers-metadat... | 23 | For data-only datasets, streaming and non-streaming don't behave the same
See https://huggingface.co/datasets/huggingface/transformers-metadata: it only contains two JSON files.
In streaming mode, the files are concatenated, and thus the rows might be dictionaries with different keys:
```python
import datasets... | [
-0.1206140816,
-0.5383890271,
0.0515677445,
0.1705874205,
0.3425413668,
0.117400825,
0.2200133502,
0.198875457,
-0.087235041,
0.1894494742,
-0.3170535564,
0.1043425351,
-0.0130866589,
0.287632972,
0.053394869,
-0.3094178438,
0.1563249081,
0.2425494343,
-0.1608396471,
-0.1402349... |
https://github.com/huggingface/datasets/issues/3738 | For data-only datasets, streaming and non-streaming don't behave the same | These files are two different dataset configurations since they don't share the same schema.
IMO the streaming mode should fail in this case, as @albertvillanova said.
There is one challenge though: inferring the schema from the first example is not robust enough in the general case - especially if some fields ar... | See https://huggingface.co/datasets/huggingface/transformers-metadata: it only contains two JSON files.
In streaming mode, the files are concatenated, and thus the rows might be dictionaries with different keys:
```python
import datasets as ds
iterable_dataset = ds.load_dataset("huggingface/transformers-metadat... | 67 | For data-only datasets, streaming and non-streaming don't behave the same
See https://huggingface.co/datasets/huggingface/transformers-metadata: it only contains two JSON files.
In streaming mode, the files are concatenated, and thus the rows might be dictionaries with different keys:
```python
import datasets... | [
-0.241260007,
-0.538382411,
0.0270497445,
0.2054672241,
0.2709464729,
0.0090806121,
0.192628637,
0.0634075105,
0.0702458024,
-0.0795701519,
-0.301170975,
0.068355456,
-0.2525920868,
0.367969811,
0.1206432506,
-0.3124425709,
0.160000369,
-0.0171230529,
-0.3173450232,
-0.06001068... |
https://github.com/huggingface/datasets/issues/3738 | For data-only datasets, streaming and non-streaming don't behave the same | OK. So, if we make the streaming also fail, the dataset https://huggingface.co/datasets/huggingface/transformers-metadata will never be [viewable](https://github.com/huggingface/datasets-preview-backend/issues/144) (be it using streaming or fallback to downloading the files), right?
| See https://huggingface.co/datasets/huggingface/transformers-metadata: it only contains two JSON files.
In streaming mode, the files are concatenated, and thus the rows might be dictionaries with different keys:
```python
import datasets as ds
iterable_dataset = ds.load_dataset("huggingface/transformers-metadat... | 27 | For data-only datasets, streaming and non-streaming don't behave the same
See https://huggingface.co/datasets/huggingface/transformers-metadata: it only contains two JSON files.
In streaming mode, the files are concatenated, and thus the rows might be dictionaries with different keys:
```python
import datasets... | [
-0.1478598416,
-0.4494974315,
-0.0741416663,
0.2103790045,
0.2105887681,
0.0522013754,
0.0881873369,
0.0913033187,
-0.0407919623,
0.0463269465,
-0.3037816584,
0.0575060919,
-0.2236426175,
0.2117999494,
0.1254727542,
-0.2901279628,
0.1195499301,
-0.0062803929,
-0.2550476789,
-0.... |
https://github.com/huggingface/datasets/issues/3738 | For data-only datasets, streaming and non-streaming don't behave the same | Yes, until we have a way for the user to specify explicitly that those two files are different configurations.
We can maybe have some rule to detect this automatically, e.g. checking the first line of each file? That would mean that for a dataset of 10,000+ files we would have to verify every single one of them just... | See https://huggingface.co/datasets/huggingface/transformers-metadata: it only contains two JSON files.
In streaming mode, the files are concatenated, and thus the rows might be dictionaries with different keys:
```python
import datasets as ds
iterable_dataset = ds.load_dataset("huggingface/transformers-metadat... | 77 | For data-only datasets, streaming and non-streaming don't behave the same
See https://huggingface.co/datasets/huggingface/transformers-metadata: it only contains two JSON files.
In streaming mode, the files are concatenated, and thus the rows might be dictionaries with different keys:
```python
import datasets... | [
-0.3082469404,
-0.5184177756,
-0.0382429473,
0.3086874187,
0.2399062365,
-0.177745387,
0.1599005163,
0.228596434,
0.0310773533,
0.1484985352,
-0.3134310544,
0.0760293528,
-0.2314170152,
0.3043789566,
0.0689230561,
-0.1778725237,
0.100117445,
0.0215647575,
-0.2695710659,
0.03813... |
https://github.com/huggingface/datasets/issues/3738 | For data-only datasets, streaming and non-streaming don't behave the same | I think requiring the user to specify that those two files are different configurations is in that case perfectly reasonable.
(Maybe at some point we could however detect this type of case and prompt them to define a config mapping etc) | See https://huggingface.co/datasets/huggingface/transformers-metadata: it only contains two JSON files.
In streaming mode, the files are concatenated, and thus the rows might be dictionaries with different keys:
```python
import datasets as ds
iterable_dataset = ds.load_dataset("huggingface/transformers-metadat... | 41 | For data-only datasets, streaming and non-streaming don't behave the same
See https://huggingface.co/datasets/huggingface/transformers-metadata: it only contains two JSON files.
In streaming mode, the files are concatenated, and thus the rows might be dictionaries with different keys:
```python
import datasets... | [
-0.2233445644,
-0.4214346409,
-0.0021988221,
0.1339175403,
0.2301209122,
-0.003551353,
0.2808351815,
0.1670939922,
0.0998133272,
0.2093735933,
-0.2885890603,
0.1102920473,
-0.1343374401,
0.2366632819,
-0.053317789,
-0.2150019407,
-0.0486764759,
0.2093239278,
-0.2974544764,
0.09... |
https://github.com/huggingface/datasets/issues/3738 | For data-only datasets, streaming and non-streaming don't behave the same | OK, so, before closing the issue, what do you think should be done?
> Maybe in streaming mode we should keep the schema (inferred from the first item) and throw an exception if a subsequent item does not conform to the inferred schema?
or nothing? | See https://huggingface.co/datasets/huggingface/transformers-metadata: it only contains two JSON files.
In streaming mode, the files are concatenated, and thus the rows might be dictionaries with different keys:
```python
import datasets as ds
iterable_dataset = ds.load_dataset("huggingface/transformers-metadat... | 45 | For data-only datasets, streaming and non-streaming don't behave the same
See https://huggingface.co/datasets/huggingface/transformers-metadata: it only contains two JSON files.
In streaming mode, the files are concatenated, and thus the rows might be dictionaries with different keys:
```python
import datasets... | [
-0.1384042948,
-0.4757509232,
0.0538570359,
0.2831663489,
0.3996576965,
0.0113185151,
0.0607534386,
0.2039569914,
0.0111171249,
0.0105741965,
-0.1794577241,
0.1295818537,
-0.2606135011,
0.3045917153,
-0.026507305,
-0.3158572614,
0.148051098,
0.0406328477,
-0.2192370594,
-0.0878... |
https://github.com/huggingface/datasets/issues/3738 | For data-only datasets, streaming and non-streaming don't behave the same | We should at least raise an error if a new sample has column names that are missing, or if it has extra columns. No need to check for the type for now.
I'm in favor of having an error especially because we want to avoid silent issues as much as possible - i.e. when something goes wrong (when schemas don't match or s... | See https://huggingface.co/datasets/huggingface/transformers-metadata: it only contains two JSON files.
In streaming mode, the files are concatenated, and thus the rows might be dictionaries with different keys:
```python
import datasets as ds
iterable_dataset = ds.load_dataset("huggingface/transformers-metadat... | 79 | For data-only datasets, streaming and non-streaming don't behave the same
See https://huggingface.co/datasets/huggingface/transformers-metadata: it only contains two JSON files.
In streaming mode, the files are concatenated, and thus the rows might be dictionaries with different keys:
```python
import datasets... | [
-0.1809254587,
-0.5316242576,
0.031650912,
0.0601784512,
0.326351881,
0.0084529314,
0.1868888438,
0.2202477306,
0.0926468,
0.2087871581,
-0.1634202749,
0.0506505333,
-0.2015401125,
0.24568443,
0.0024848415,
-0.3584954441,
0.1400906146,
0.1418411583,
-0.2897823453,
-0.1043147594... |