| html_url (string, 48-51 chars) | title (string, 5-268 chars) | comments (string, 63-51.8k chars) | body (string, 0-36.2k chars) | comment_length (int64, 16-1.52k) | text (string, 164-54.1k chars) | embeddings (list of float) |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/4000 | load_dataset error: sndfile library not found | Hi @i-am-neo,
The audio support is an extra feature of `datasets` and therefore it must be installed as an additional optional dependency:
```shell
pip install datasets[audio]
```
Additionally, for specific MP3 support (which is not the case for AMI dataset, that contains WAV audio files), there is another third... | ## Describe the bug
Can't load ami dataset
## Steps to reproduce the bug
```
python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])"
```
## Expected results
## Actual results
Downloading and preparing dataset ami/headset-single (download: 10.71... | 61 | load_dataset error: sndfile library not found
## Describe the bug
Can't load ami dataset
## Steps to reproduce the bug
```
python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])"
```
## Expected results
## Actual results
Downloading and prepa... | [
-0.1790904552, -0.2704021335, -0.0430675745, 0.2871549726, 0.1397019923, -0.1179440096, 0.3523590863, 0.0170880817, 0.0107310023, 0.38430655, -0.1333328933, 0.2134562433, 0.047165595, 0.4075194895, 0.0083271693, 0.1030785218, 0.0407495089, 0.2446232438, 0.3289997578, 0.00496981... |
https://github.com/huggingface/datasets/issues/4000 | load_dataset error: sndfile library not found | Thanks @albertvillanova. Unfortunately the error persists after installing ```datasets[audio]```. Can you direct me towards a solution?
```
pip3 install datasets[audio]
```
### log
Requirement already satisfied: datasets[audio] in ./.virtualenvs/hubert/lib/python3.7/site-packages (1.18.3)
Requirement already sati... | ## Describe the bug
Can't load ami dataset
## Steps to reproduce the bug
```
python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])"
```
## Expected results
## Actual results
Downloading and preparing dataset ami/headset-single (download: 10.71... | 969 | load_dataset error: sndfile library not found
## Describe the bug
Can't load ami dataset
## Steps to reproduce the bug
```
python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])"
```
## Expected results
## Actual results
Downloading and prepa... | [
-0.1790904552, -0.2704021335, -0.0430675745, 0.2871549726, 0.1397019923, -0.1179440096, 0.3523590863, 0.0170880817, 0.0107310023, 0.38430655, -0.1333328933, 0.2134562433, 0.047165595, 0.4075194895, 0.0083271693, 0.1030785218, 0.0407495089, 0.2446232438, 0.3289997578, 0.00496981... |
https://github.com/huggingface/datasets/issues/4000 | load_dataset error: sndfile library not found | Hi @i-am-neo, thanks again for your detailed report.
Our `datasets` library support for audio relies on a third-party Python library called `librosa`, which is installed when you do:
```shell
pip install datasets[audio]
```
However, the `librosa` library has a dependency on `soundfile`; and `soundfile` depends... | ## Describe the bug
Can't load ami dataset
## Steps to reproduce the bug
```
python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])"
```
## Expected results
## Actual results
Downloading and preparing dataset ami/headset-single (download: 10.71... | 95 | load_dataset error: sndfile library not found
## Describe the bug
Can't load ami dataset
## Steps to reproduce the bug
```
python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])"
```
## Expected results
## Actual results
Downloading and prepa... | [
-0.1790904552, -0.2704021335, -0.0430675745, 0.2871549726, 0.1397019923, -0.1179440096, 0.3523590863, 0.0170880817, 0.0107310023, 0.38430655, -0.1333328933, 0.2134562433, 0.047165595, 0.4075194895, 0.0083271693, 0.1030785218, 0.0407495089, 0.2446232438, 0.3289997578, 0.00496981... |
https://github.com/huggingface/datasets/issues/4000 | load_dataset error: sndfile library not found | @albertvillanova thanks, all good. The key is ```libsndfile1``` - it may help others to note that in your docs. I had installed libsndfile previously. | ## Describe the bug
Can't load ami dataset
## Steps to reproduce the bug
```
python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])"
```
## Expected results
## Actual results
Downloading and preparing dataset ami/headset-single (download: 10.71... | 24 | load_dataset error: sndfile library not found
## Describe the bug
Can't load ami dataset
## Steps to reproduce the bug
```
python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])"
```
## Expected results
## Actual results
Downloading and prepa... | [
-0.1790904552, -0.2704021335, -0.0430675745, 0.2871549726, 0.1397019923, -0.1179440096, 0.3523590863, 0.0170880817, 0.0107310023, 0.38430655, -0.1333328933, 0.2134562433, 0.047165595, 0.4075194895, 0.0083271693, 0.1030785218, 0.0407495089, 0.2446232438, 0.3289997578, 0.00496981... |
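The thread above reduces to a two-layer dependency: `pip install datasets[audio]` brings in the Python `soundfile` wrapper, but `soundfile` still needs the system `libsndfile` library (the `libsndfile1` package on Debian/Ubuntu). A minimal preflight sketch, assuming only the standard library plus an optional `soundfile` install:

```python
# Preflight check for the audio stack discussed above.
# soundfile raises OSError at import time when the system libsndfile
# library is missing (the "sndfile library not found" error in the title),
# and ImportError when the Python wrapper itself is absent.
try:
    import soundfile  # noqa: F401
    status = "audio backend ready (libsndfile found)"
except OSError:
    status = "libsndfile missing: install the system package (e.g. libsndfile1)"
except ImportError:
    status = "soundfile missing: pip install 'datasets[audio]'"
```

Each branch of `status` maps to one of the two fixes discussed in the thread.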
https://github.com/huggingface/datasets/issues/3996 | Audio.encode_example() throws an error when writing example from array | Thanks @polinaeterna for reporting this issue.
In relation to the decoding of MP3 audio files without torchaudio, I remember Patrick made some tests and these had quite bad performance. That is why he proposed to support MP3 files only with torchaudio. But yes, nice to give an alternative to non-torchaudio users (wi... | ## Describe the bug
When trying to do `Audio().encode_example()` with preexisting array (see [this line](https://github.com/huggingface/datasets/blob/master/src/datasets/features/audio.py#L73)), `sf.write()` throws you an error:
`TypeError: No format specified and unable to get format from file extension: <_io.BytesI... | 57 | Audio.encode_example() throws an error when writing example from array
## Describe the bug
When trying to do `Audio().encode_example()` with preexisting array (see [this line](https://github.com/huggingface/datasets/blob/master/src/datasets/features/audio.py#L73)), `sf.write()` throws you an error:
`TypeError: No f... | [
0.1218088493, -0.3153703213, -0.0251268037, 0.2208195925, 0.4910096526, -0.1269059926, 0.362246424, 0.0690375343, -0.0697961226, 0.4098410308, -0.1219432727, 0.2596229613, -0.1840140373, 0.1837579608, 0.0004699291, -0.1854689717, 0.1399264932, 0.3047871888, -0.0332123414, 0.012... |
https://github.com/huggingface/datasets/issues/3996 | Audio.encode_example() throws an error when writing example from array | > I remember Patrick made some tests and these had quite bad performance. That is why he proposed to support MP3 files only with torchaudio.
Yeah, I know, but as far as I understand, some users just categorically don't want to have torchaudio in their environment. Anyway, it's just a more or less random example, th... | ## Describe the bug
When trying to do `Audio().encode_example()` with preexisting array (see [this line](https://github.com/huggingface/datasets/blob/master/src/datasets/features/audio.py#L73)), `sf.write()` throws you an error:
`TypeError: No format specified and unable to get format from file extension: <_io.BytesI... | 93 | Audio.encode_example() throws an error when writing example from array
## Describe the bug
When trying to do `Audio().encode_example()` with preexisting array (see [this line](https://github.com/huggingface/datasets/blob/master/src/datasets/features/audio.py#L73)), `sf.write()` throws you an error:
`TypeError: No f... | [
0.1218088493, -0.3153703213, -0.0251268037, 0.2208195925, 0.4910096526, -0.1269059926, 0.362246424, 0.0690375343, -0.0697961226, 0.4098410308, -0.1219432727, 0.2596229613, -0.1840140373, 0.1837579608, 0.0004699291, -0.1854689717, 0.1399264932, 0.3047871888, -0.0332123414, 0.012... |
https://github.com/huggingface/datasets/issues/3993 | Streaming dataset + interleave + DataLoader hangs with multiple workers | Hi ! Thanks for reporting, could this be related to https://github.com/huggingface/datasets/issues/3950 ?
Currently streaming datasets only work in a single process, but we're working on having them work in distributed setups as well :) | ## Describe the bug
Interleaving multiple iterable datasets that use `load_dataset` on streaming mode hangs when passed to `torch.utils.data.DataLoader` with multiple workers.
## Steps to reproduce the bug
```python
from datasets import interleave_datasets, load_dataset
from torch.utils.data import DataLoader
... | 33 | Streaming dataset + interleave + DataLoader hangs with multiple workers
## Describe the bug
Interleaving multiple iterable datasets that use `load_dataset` on streaming mode hangs when passed to `torch.utils.data.DataLoader` with multiple workers.
## Steps to reproduce the bug
```python
from datasets import int... | [
-0.3555943668, -0.2197351754, -0.1446901411, 0.2996629775, 0.1088644564, -0.0536144264, 0.6818822026, 0.1681309193, 0.1518095136, 0.288113445, -0.0748729855, 0.2441164255, -0.035196729, -0.214463532, -0.2273011804, -0.1399882138, 0.0366156325, -0.0317587443, -0.0748977289, 0.06... |
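For context, the default behaviour of `interleave_datasets` can be sketched without `torch` or `datasets` at all: alternate across the streams and stop as soon as one is exhausted (the "first_exhausted" strategy). The helper below is a hypothetical stand-in, not the library implementation:

```python
from itertools import cycle

def interleave(*streams):
    # Alternate one example at a time across the input streams and stop
    # as soon as any stream runs out, mirroring the default
    # "first_exhausted" stopping strategy.
    iterators = [iter(s) for s in streams]
    for it in cycle(iterators):
        try:
            yield next(it)
        except StopIteration:
            return

mixed = list(interleave([1, 3, 5], ["a", "b"]))  # [1, 'a', 3, 'b', 5]
```

The hang itself comes from combining such a stream with a multi-worker `DataLoader`; until multi-process streaming is supported, `num_workers=0` is the usual interim workaround.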
https://github.com/huggingface/datasets/issues/3992 | Image column is not decoded in map when using with with_transform | Hi! This behavior stems from this line: https://github.com/huggingface/datasets/blob/799b817d97590ddc97cbd38d07469403e030de8c/src/datasets/arrow_dataset.py#L1919
Basically, the `Image`/`Audio` columns are decoded only if the `format_type` attribute is `None` (`set_format`/`with_format` and `set_transform`/`with_transf... | ## Describe the bug
Image column is not _decoded_ in **map** when using with `with_transform`
## Steps to reproduce the bug
```python
from datasets import Image, Dataset
def add_C(batch):
batch["C"] = batch["A"]
return batch
ds = Dataset.from_dict({"A": ["image.png"]}).cast_column("A", Image())
... | 45 | Image column is not decoded in map when using with with_transform
## Describe the bug
Image column is not _decoded_ in **map** when using with `with_transform`
## Steps to reproduce the bug
```python
from datasets import Image, Dataset
def add_C(batch):
batch["C"] = batch["A"]
return batch
ds = ... | [
-0.1467385888, -0.1622456312, 0.0413527489, 0.3056761324, 0.6214095354, 0.1209302396, 0.1893106997, 0.25581792, 0.3303367198, 0.0117178382, -0.2401601374, 0.765168786, -0.0390638933, -0.1923353076, -0.1648159325, -0.2066480368, 0.2259844393, 0.1344407201, -0.3847177625, 0.06404... |
https://github.com/huggingface/datasets/issues/3990 | Improve AutomaticSpeechRecognition task template | There is an open PR to do that: #3364. I just haven't had time to finish it... | **Is your feature request related to a problem? Please describe.**
[AutomaticSpeechRecognition task template](https://github.com/huggingface/datasets/blob/master/src/datasets/tasks/automatic_speech_recognition.py) is outdated as it uses a path to the audio file as an audio column instead of an Audio feature itself (I guess it... | 17 | Improve AutomaticSpeechRecognition task template
**Is your feature request related to a problem? Please describe.**
[AutomaticSpeechRecognition task template](https://github.com/huggingface/datasets/blob/master/src/datasets/tasks/automatic_speech_recognition.py) is outdated as it uses path to audiofile as an audio c... | [
-0.2149880379, -0.346783191, -0.0174444132, -0.1044970453, 0.2359564155, -0.2255588621, 0.3123865426, 0.1822747886, 0.0016585005, 0.285658747, -0.0709596574, 0.3695853651, -0.198413536, 0.1915427595, -0.0616692901, -0.329256624, -0.0029951178, 0.0826634914, 0.1883462071, -0.065... |
https://github.com/huggingface/datasets/issues/3990 | Improve AutomaticSpeechRecognition task template | > There is an open PR to do that: #3364. I just haven't had time to finish it...
π¬ thanks... | **Is your feature request related to a problem? Please describe.**
[AutomaticSpeechRecognition task template](https://github.com/huggingface/datasets/blob/master/src/datasets/tasks/automatic_speech_recognition.py) is outdated as it uses a path to the audio file as an audio column instead of an Audio feature itself (I guess it... | 20 | Improve AutomaticSpeechRecognition task template
**Is your feature request related to a problem? Please describe.**
[AutomaticSpeechRecognition task template](https://github.com/huggingface/datasets/blob/master/src/datasets/tasks/automatic_speech_recognition.py) is outdated as it uses path to audiofile as an audio c... | [
-0.1985286325, -0.3698556721, -0.0073386733, -0.0866735801, 0.2447382063, -0.2159384638, 0.2979326546, 0.1808808893, 0.0078332815, 0.2391605675, -0.063534148, 0.3760056198, -0.1943418533, 0.1996388584, -0.0680930093, -0.3678844273, 0.0123951416, 0.0835121423, 0.1972334236, -0.0... |
https://github.com/huggingface/datasets/issues/3986 | Dataset loads indefinitely after modifying default cache path (~/.cache/huggingface) | Hi ! I didn't manage to reproduce the issue. When you kill the process, is there any stacktrace that shows at what point in the code python is hanging ? | ## Describe the bug
Dataset loads indefinitely after modifying cache path (~/.cache/huggingface)
If none of the environment variables are set, this custom dataset loads fine ( json-based dataset with custom dataset load script)
** Update: Transformer modules face the same issue as well during loading
## A clear ... | 30 | Dataset loads indefinitely after modifying default cache path (~/.cache/huggingface)
## Describe the bug
Dataset loads indefinitely after modifying cache path (~/.cache/huggingface)
If none of the environment variables are set, this custom dataset loads fine ( json-based dataset with custom dataset load script)
**... | [
-0.130522728, -0.0170790944, 0.0759793818, 0.1971009821, 0.2393505275, 0.1249808967, 0.3519514203, 0.0293050352, 0.3356306851, -0.1589412391, -0.2429889143, 0.2090013325, -0.1376602203, -0.185026139, 0.0645398125, 0.1283339411, -0.0886049792, 0.0668050274, -0.5255346298, 0.1019... |
https://github.com/huggingface/datasets/issues/3986 | Dataset loads indefinitely after modifying default cache path (~/.cache/huggingface) | Hi @lhoestq , I've traced the issue back to file locking. It's similar to this thread, using Lustre filesystem as well. https://github.com/huggingface/datasets/issues/329 . In this case the user was able to modify and add -o flock option while mounting and it solved the problem.
However in other cases such as mine, ... | ## Describe the bug
Dataset loads indefinitely after modifying cache path (~/.cache/huggingface)
If none of the environment variables are set, this custom dataset loads fine ( json-based dataset with custom dataset load script)
** Update: Transformer modules face the same issue as well during loading
## A clear ... | 85 | Dataset loads indefinitely after modifying default cache path (~/.cache/huggingface)
## Describe the bug
Dataset loads indefinitely after modifying cache path (~/.cache/huggingface)
If none of the environment variables are set, this custom dataset loads fine ( json-based dataset with custom dataset load script)
**... | [
-0.130522728, -0.0170790944, 0.0759793818, 0.1971009821, 0.2393505275, 0.1249808967, 0.3519514203, 0.0293050352, 0.3356306851, -0.1589412391, -0.2429889143, 0.2090013325, -0.1376602203, -0.185026139, 0.0645398125, 0.1283339411, -0.0886049792, 0.0668050274, -0.5255346298, 0.1019... |
https://github.com/huggingface/datasets/issues/3984 | Local and automatic tests fail | Hi ! To be able to run the tests, you need to install all the test dependencies and additional ones with
```
pip install -e .[tests]
pip install -r additional-tests-requirements.txt --no-deps
```
In particular, you probably need to update `sacrebleu`. It looks like it wasn't able to instantiate `sacrebleu.TER` properly... | ## Describe the bug
Running the tests from CircleCI on a PR or locally fails, even with no changes. Tests seem to fail on `test_metric_common.py`
## Steps to reproduce the bug
```shell
git clone https://github.com/huggingface/datasets.git
cd datasets
```
```python
python -m pip install -e .
pytest
```
## Expected... | 49 | Local and automatic tests fail
## Describe the bug
Running the tests from CircleCI on a PR or locally fails, even with no changes. Tests seem to fail on `test_metric_common.py`
## Steps to reproduce the bug
```shell
git clone https://github.com/huggingface/datasets.git
cd datasets
```
```python
python -m pip install ... | [
-0.1117824614, -0.2659797966, -0.0161101297, 0.1842330545, 0.1371133029, -0.056852825, 0.1238805279, 0.3405486345, 0.1495894492, 0.3421726823, 0.0462886617, 0.307631582, -0.166297853, 0.2298704684, -0.0389916226, -0.2192201465, -0.1309422553, 0.0968284383, 0.1460393518, 0.02780... |
https://github.com/huggingface/datasets/issues/3983 | Infinitely attempting lock | Hi ! Thanks for reporting. We're using `py-filelock` as our locking mechanism.
Can you try deleting the .lock file mentioned in the logs and try again ? Make sure that no other process is generating the `cnn_dailymail` dataset.
If it doesn't work, could you try to set up a lock using the latest version of `py-fi... | I am trying to run one of the examples of the `transformers` repo, which makes use of `datasets`.
Important to note is that I am trying to run this via a Databricks notebook, and all the files reside in the Databricks Filesystem (DBFS).
```
%sh
python /dbfs/transformers/examples/pytorch/summarization/run_summariz... | 102 | Infinitely attempting lock
I am trying to run one of the examples of the `transformers` repo, which makes use of `datasets`.
Important to note is that I am trying to run this via a Databricks notebook, and all the files reside in the Databricks Filesystem (DBFS).
```
%sh
python /dbfs/transformers/examples/pytor... | [
-0.3298414648, 0.2049618959, -0.0227508154, 0.1526561677, 0.3330144286, 0.0533309989, 0.4307444394, 0.2333425879, 0.1402675211, 0.0099878293, -0.037053667, 0.0426321141, -0.1039026752, -0.3476848602, -0.1842166781, 0.1177707762, -0.0877683386, -0.0198222101, -0.2799401283, 0.06... |
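Both this thread and the Lustre report above come down to advisory file locking. A hedged, Unix-only sketch of the `flock`-style lock that `py-filelock` acquires under the hood on Unix; on a filesystem mounted without flock support (e.g. Lustre without `-o flock`), this is the call that can block indefinitely:

```python
import fcntl
import os
import tempfile

# A throwaway lock file in a fresh temporary directory.
lock_path = os.path.join(tempfile.mkdtemp(), "demo.lock")

with open(lock_path, "w") as f:
    # Non-blocking exclusive lock: raises BlockingIOError if another
    # process already holds it, instead of waiting forever.
    fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
    acquired = True
    fcntl.flock(f, fcntl.LOCK_UN)
```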
https://github.com/huggingface/datasets/issues/3978 | I can't view HFcallback dataset for ASR Space | The dataset viewer is working on this dataset. I imagine the issue is that we would expect to be able to listen to the audio files in the `Please Record Your Voice file` column, right?
maybe @lhoestq or @albertvillanova could help
<img width="1019" alt="Capture d'écran 2022-03-24 à 17 36 20" src="https://user-i... | ## Dataset viewer issue for '*Urdu-ASR-flags*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/kingabzpro/Urdu-ASR-flags)*
*I think the dataset should show something, and if you want me to add a script, please show me the documentation. I thought this was supposed to be an automatic task.*
A... | 51 | I can't view HFcallback dataset for ASR Space
## Dataset viewer issue for '*Urdu-ASR-flags*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/kingabzpro/Urdu-ASR-flags)*
*I think dataset should show some thing and if you want me to add script, please show me the documentation. I thou... | [
-0.2916873693, -0.142144084, -0.0383737311, 0.3071325421, 0.0016790606, 0.3776570559, -0.1179486141, 0.2751256526, -0.383146286, 0.3440394998, -0.1850825995, -0.1555973291, -0.2009932548, -0.0877864435, 0.1465865225, 0.3381248713, -0.0852607638, 0.1948905736, 0.1229014471, -0.1... |
https://github.com/huggingface/datasets/issues/3978 | I can't view HFcallback dataset for ASR Space | The structure of the dataset is not supported. Only the CSV file is parsed and the audio files are ignored.
We're working on supporting audio datasets with a specific structure in #3963 | ## Dataset viewer issue for '*Urdu-ASR-flags*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/kingabzpro/Urdu-ASR-flags)*
*I think the dataset should show something, and if you want me to add a script, please show me the documentation. I thought this was supposed to be an automatic task.*
A... | 32 | I can't view HFcallback dataset for ASR Space
## Dataset viewer issue for '*Urdu-ASR-flags*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/kingabzpro/Urdu-ASR-flags)*
*I think dataset should show some thing and if you want me to add script, please show me the documentation. I thou... | [
-0.3399884105, -0.1958577931, -0.0508293658, 0.281201601, -0.0315378197, 0.3359980583, -0.133547619, 0.2814232111, -0.3438002467, 0.3145847619, -0.1204231083, -0.1139252111, -0.2503626049, 0.0155256502, 0.176128149, 0.3054134846, -0.107241042, 0.2273607552, 0.1579889953, -0.095... |
https://github.com/huggingface/datasets/issues/3973 | ConnectionError and SSLError | Hi ! You can download the `oscar.py` file from this repository at `/datasets/oscar/oscar.py`.
Then you can load the dataset by passing the local path to `oscar.py` to `load_dataset`:
```python
load_dataset("path/to/oscar.py", "unshuffled_deduplicated_it")
``` | code
```
from datasets import load_dataset
dataset = load_dataset('oscar', 'unshuffled_deduplicated_it')
```
bug report
```
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_2978... | 32 | ConnectionError and SSLError
code
```
from datasets import load_dataset
dataset = load_dataset('oscar', 'unshuffled_deduplicated_it')
```
bug report
```
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
~\A... | [
-0.5248149037, 0.040523652, -0.1230329275, 0.0725033507, 0.3041601479, -0.0861737803, 0.379427731, 0.3197905421, 0.165754199, 0.1261253059, -0.0368026756, 0.1544637829, 0.029569976, 0.2415486127, -0.0804178566, 0.0118006617, 0.0215492398, 0.2121124864, -0.2936368585, 0.14344470... |
https://github.com/huggingface/datasets/issues/3973 | ConnectionError and SSLError | It works, but another error occurs.
```
ConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/it/it_sha256.txt (SSLError(MaxRetryError("HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/oscar/1.0/... | code
```
from datasets import load_dataset
dataset = load_dataset('oscar', 'unshuffled_deduplicated_it')
```
bug report
```
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_2978... | 49 | ConnectionError and SSLError
code
```
from datasets import load_dataset
dataset = load_dataset('oscar', 'unshuffled_deduplicated_it')
```
bug report
```
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
~\A... | [
-0.5248149037, 0.040523652, -0.1230329275, 0.0725033507, 0.3041601479, -0.0861737803, 0.379427731, 0.3197905421, 0.165754199, 0.1261253059, -0.0368026756, 0.1544637829, 0.029569976, 0.2415486127, -0.0804178566, 0.0118006617, 0.0215492398, 0.2121124864, -0.2936368585, 0.14344470... |
https://github.com/huggingface/datasets/issues/3973 | ConnectionError and SSLError | You are so wise!
It reports [ConnectionError] in Python 3.9.7
and works well in Python 3.8.12.
I need your help again: how can I specify the path for downloaded files?
The data is too large and my C: drive does not have enough space | code
```
from datasets import load_dataset
dataset = load_dataset('oscar', 'unshuffled_deduplicated_it')
```
bug report
```
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_2978... | 42 | ConnectionError and SSLError
code
```
from datasets import load_dataset
dataset = load_dataset('oscar', 'unshuffled_deduplicated_it')
```
bug report
```
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
~\A... | [
-0.5248149037, 0.040523652, -0.1230329275, 0.0725033507, 0.3041601479, -0.0861737803, 0.379427731, 0.3197905421, 0.165754199, 0.1261253059, -0.0368026756, 0.1544637829, 0.029569976, 0.2415486127, -0.0804178566, 0.0118006617, 0.0215492398, 0.2121124864, -0.2936368585, 0.14344470... |
https://github.com/huggingface/datasets/issues/3973 | ConnectionError and SSLError | Cool ! And you can specify the download path with the `cache_dir` parameter:
```python
from datasets import load_dataset
dataset = load_dataset('oscar', 'unshuffled_deduplicated_it', cache_dir='path/to/directory') | code
```
from datasets import load_dataset
dataset = load_dataset('oscar', 'unshuffled_deduplicated_it')
```
bug report
```
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_2978... | 26 | ConnectionError and SSLError
code
```
from datasets import load_dataset
dataset = load_dataset('oscar', 'unshuffled_deduplicated_it')
```
bug report
```
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
~\A... | [
-0.5248149037, 0.040523652, -0.1230329275, 0.0725033507, 0.3041601479, -0.0861737803, 0.379427731, 0.3197905421, 0.165754199, 0.1261253059, -0.0368026756, 0.1544637829, 0.029569976, 0.2415486127, -0.0804178566, 0.0118006617, 0.0215492398, 0.2121124864, -0.2936368585, 0.14344470... |
https://github.com/huggingface/datasets/issues/3973 | ConnectionError and SSLError | It took me some days to download the data completely. Although the error sometimes occurs again, changing the Python version is a feasible way to avoid this ConnectionError.
The `cache_dir` parameter works well, thanks for your kindness again! | code
```
from datasets import load_dataset
dataset = load_dataset('oscar', 'unshuffled_deduplicated_it')
```
bug report
```
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_2978... | 33 | ConnectionError and SSLError
code
```
from datasets import load_dataset
dataset = load_dataset('oscar', 'unshuffled_deduplicated_it')
```
bug report
```
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
~\A... | [
-0.5248149037, 0.040523652, -0.1230329275, 0.0725033507, 0.3041601479, -0.0861737803, 0.379427731, 0.3197905421, 0.165754199, 0.1261253059, -0.0368026756, 0.1544637829, 0.029569976, 0.2415486127, -0.0804178566, 0.0118006617, 0.0215492398, 0.2121124864, -0.2936368585, 0.14344470... |
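Besides the per-call `cache_dir` argument shown above, the cache location can also be redirected globally with the `HF_DATASETS_CACHE` environment variable, which covers scripts that forget to pass the parameter. A small sketch (the target directory is hypothetical):

```python
import os
import tempfile

# Hypothetical large-disk location standing in for e.g. /mnt/data/hf_cache.
cache_dir = os.path.join(tempfile.gettempdir(), "hf_cache_demo")

# Must be set before `datasets` is imported for the default to change;
# per-call load_dataset(..., cache_dir=...) still overrides it.
os.environ["HF_DATASETS_CACHE"] = cache_dir
```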
https://github.com/huggingface/datasets/issues/3969 | Cannot preview cnn_dailymail dataset | I guess the cache got corrupted due to a previous issue with Google Drive service.
The cache should be regenerated, e.g. by passing `download_mode="force_redownload"`.
CC: @severo | ## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| 26 | Cannot preview cnn_dailymail dataset
## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
I guess the cache got corrupted due to a previous issue with Google Drive service.
Th... | [
-0.3001440465, 0.1003714427, -0.0295106284, 0.2527274191, -0.0365718156, 0.4788866937, 0.3518635631, 0.1342158616, -0.1076104343, 0.0816057771, 0.0185797792, -0.0681073889, -0.1297191381, 0.1330211014, -0.0545894839, -0.0531768836, 0.0362982228, 0.0126729505, 0.0645381659, 0.15... |
https://github.com/huggingface/datasets/issues/3969 | Cannot preview cnn_dailymail dataset | Note that the dataset preview uses its own cache, not `datasets`' cache. So `download_mode="force_redownload"` doesn't help. But yes indeed the cache must be refreshed.
The CNN Dailymail dataset is currently hosted on Google Drive, which is an unreliable host and we've had many issues with it. Unless we found anothe... | ## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| 81 | Cannot preview cnn_dailymail dataset
## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
Note that the dataset preview uses its own cache, not `datasets`' cache. So `download_mo... | [
-0.2877512872, -0.0782131031, 0.0257026702, 0.1993884295, -0.0514075831, 0.4068708718, 0.2187194377, 0.0937290639, -0.0140247401, -0.0956277996, -0.0778447166, 0.0096288519, -0.1578942686, 0.100140661, -0.0530549847, 0.1121212021, 0.0501070432, -0.0000497789, -0.0694659948, 0.0... |
https://github.com/huggingface/datasets/issues/3969 | Cannot preview cnn_dailymail dataset | I remove the tag dataset-viewer, since it's more an issue with the hosting on Google Drive | ## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| 16 | Cannot preview cnn_dailymail dataset
## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
I remove the tag dataset-viewer, since it's more an issue with the hosting on Google Dri... | [
-0.249042511, -0.0306454003, -0.0246337336, 0.2104780227, -0.0485685319, 0.3847599626, 0.3927805424, 0.0898776203, -0.0480347797, 0.1583120227, 0.0144589236, -0.0029105369, -0.2134843469, 0.2687585354, 0.0569703877, 0.0509313904, 0.1048442572, -0.0149675589, 0.008058981, 0.0969... |
https://github.com/huggingface/datasets/issues/3969 | Cannot preview cnn_dailymail dataset | Sounds good. I was looking for another host of this dataset but couldn't find any (yet) | ## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| 16 | Cannot preview cnn_dailymail dataset
## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
Sounds good. I was looking for another host of this dataset but couldn't find any (yet) | [
-0.2171033621, -0.217253387, 0.0266657099, 0.2140307724, -0.1601771414, 0.3883537948, 0.3702820241, -0.0194980074, -0.0784727484, 0.0787744746, -0.0825128406, 0.0275870785, -0.2946317196, 0.0952442884, 0.1294853985, 0.0816146955, 0.1012256593, -0.03399368, 0.0840781406, -0.0007... |
https://github.com/huggingface/datasets/issues/3969 | Cannot preview cnn_dailymail dataset | It seems like the issue is with the streaming mode, not with the hosting:
```python
>>> import datasets
>>> dataset = datasets.load_dataset('cnn_dailymail', name="3.0.0", split="train", streaming=True, download_mode="force_redownload")
Downloading builder script: 9.35kB [00:00, 10.2MB/s]
Downloading metadata: 9.... | ## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| 101 | Cannot preview cnn_dailymail dataset
## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
It seems like the issue is with the streaming mode, not with the hosting:
```python
... | [
-0.2431964278, -0.1357468218, 0.0265148021, 0.1462695599, 0.1896486133, 0.2344716638, 0.2769337893, 0.1717694551, -0.1825178415, 0.0452863052, -0.1151526719, -0.0208409596, -0.2906262577, 0.1923888624, 0.017642051, 0.0711764842, 0.0342033543, 0.0241453182, -0.0968555585, 0.0815...
https://github.com/huggingface/datasets/issues/3969 | Cannot preview cnn_dailymail dataset | Well this is because the host (Google Drive) returns a document that is not the actual data, but an error page | ## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| 21 | Cannot preview cnn_dailymail dataset
## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
Well this is because the host (Google Drive) returns a document that is not the actual d... | [
-0.2201514542, -0.1176844314, 0.008099528, 0.2750419974, -0.1578515172, 0.4535071552, 0.2774329185, 0.102217637, -0.0483689159, 0.1661637127, -0.0356001481, -0.052585002, -0.1807608753, 0.1826712191, 0.1238147318, -0.0059419046, 0.1332160234, 0.0092483321, 0.052741725, -0.00808...
https://github.com/huggingface/datasets/issues/3969 | Cannot preview cnn_dailymail dataset | Yes it definitely should ! I don't have the bandwidth to work on this right now though | ## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| 17 | Cannot preview cnn_dailymail dataset
## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
Yes it definitely should ! I don't have the bandwidth to work on this right now though | [
-0.2430776656, -0.0320056789, -0.0319474787, 0.2054729015, -0.0779783353, 0.3501709104, 0.3647361994, 0.0361054949, -0.1428976059, 0.0710142553, 0.0454365984, 0.1317964494, -0.2172238827, 0.3043098748, 0.112982817, -0.0036861496, 0.0916846693, -0.0354320481, -0.0544379242, 0.05...
https://github.com/huggingface/datasets/issues/3969 | Cannot preview cnn_dailymail dataset | Indeed, streaming was not supported: tgz archives were not properly iterated.
I've opened a PR to support streaming.
However, keep in mind that Google Drive will keep generating issues from time to time, like 403,... | ## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| 35 | Cannot preview cnn_dailymail dataset
## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
Indeed, streaming was not supported: tgz archives were not properly iterated.
I've op... | [
-0.3138817549, -0.0006062295, 0.0798434764, 0.1281982213, -0.0945748463, 0.4155945778, 0.3022679687, 0.0399726294, -0.2884829044, 0.1047268808, 0.0059947176, -0.130375728, -0.2775120139, 0.3053117692, -0.0797778219, -0.0123270582, 0.0110680507, 0.0108387722, 0.1293762028, 0.119...
https://github.com/huggingface/datasets/issues/3968 | Cannot preview 'indonesian-nlp/eli5_id' dataset | Hi @cahya-wirawan, thanks for reporting.
Your dataset is working OK in streaming mode:
```python
In [1]: from datasets import load_dataset
...: ds = load_dataset("indonesian-nlp/eli5_id", split="train", streaming=True)
...: item = next(iter(ds))
...: item
Using custom data configuration indonesian-nlp... | ## Dataset viewer issue for '*indonesian-nlp/eli5_id*'
**Link:** https://huggingface.co/datasets/indonesian-nlp/eli5_id
I can not see the dataset preview.
```
Server Error
Status code: 400
Exception: Status400Error
Message: Not found. Maybe the cache is missing, or maybe the dataset does not exis... | 271 | Cannot preview 'indonesian-nlp/eli5_id' dataset
## Dataset viewer issue for '*indonesian-nlp/eli5_id*'
**Link:** https://huggingface.co/datasets/indonesian-nlp/eli5_id
I can not see the dataset preview.
```
Server Error
Status code: 400
Exception: Status400Error
Message: Not found. Maybe the ca... | [
-0.3884465694, -0.1418881714, -0.0735412166, 0.1538680792, -0.0247689895, 0.2311387956, 0.0929407179, 0.5587151647, 0.013209342, -0.0031359459, -0.2292772233, 0.1413211524, -0.0089979749, 0.0560638495, 0.2455597371, -0.3852819204, 0.1090712398, 0.2116558552, 0.066169925, 0.2062...
https://github.com/huggingface/datasets/issues/3968 | Cannot preview 'indonesian-nlp/eli5_id' dataset | Thanks @albertvillanova for checking it. Btw, I have another dataset indonesian-nlp/lfqa_id which has the same issue. However, this dataset is still private, is it the reason why the preview doesn't work? | ## Dataset viewer issue for '*indonesian-nlp/eli5_id*'
**Link:** https://huggingface.co/datasets/indonesian-nlp/eli5_id
I can not see the dataset preview.
```
Server Error
Status code: 400
Exception: Status400Error
Message: Not found. Maybe the cache is missing, or maybe the dataset does not exis... | 31 | Cannot preview 'indonesian-nlp/eli5_id' dataset
## Dataset viewer issue for '*indonesian-nlp/eli5_id*'
**Link:** https://huggingface.co/datasets/indonesian-nlp/eli5_id
I can not see the dataset preview.
```
Server Error
Status code: 400
Exception: Status400Error
Message: Not found. Maybe the ca... | [
-0.3352318704, -0.1239318326, -0.0271362271, 0.2779891193, -0.1499149054, 0.2525175512, 0.1689608395, 0.4171839654, 0.0262223538, 0.1278894246, -0.2093314379, 0.0273423586, 0.0728632361, -0.0633365959, 0.2337571234, -0.2524936795, 0.1370038688, 0.2076776922, 0.0418996774, 0.139...
https://github.com/huggingface/datasets/issues/3965 | TypeError: Couldn't cast array of type for JSONLines dataset | Hi @lewtun, thanks for reporting.
It seems that our library fails at inferring the dtype of the columns:
- `milestone`
- `performed_via_github_app`
(and assigns them `null` dtype). | ## Describe the bug
One of the [course participants](https://discuss.huggingface.co/t/chapter-5-questions/11744/20?u=lewtun) is having trouble loading a JSONLines dataset that's composed of the GitHub issues from `spacy` (see stack trace below).
This reminds me a bit of #2799 where one can load the dataset in `pan... | 27 | TypeError: Couldn't cast array of type for JSONLines dataset
## Describe the bug
One of the [course participants](https://discuss.huggingface.co/t/chapter-5-questions/11744/20?u=lewtun) is having trouble loading a JSONLines dataset that's composed of the GitHub issues from `spacy` (see stack trace below).
This r... | [
0.0036057804, -0.0479999408, -0.0535868518, 0.4342817962, 0.4592534006, 0.1503396481, 0.4873130918, 0.3241954744, 0.4770776033, -0.0575013347, -0.061183814, 0.1311788857, -0.1100014076, 0.2816880941, -0.0008709847, -0.3941473365, -0.0146889286, -0.0445323624, 0.1062436849, 0.17...
https://github.com/huggingface/datasets/issues/3960 | Load local dataset error | Hi! Instead of @nateraw's `image-folder`, I suggest using the newly released `imagefolder` dataset:
```python
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']}
>>> ds = load_dataset('imagefolder', data... | When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder... | 40 | Load local dataset error
When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_da... | [
-0.3380221426, -0.114140518, -0.0211975258, 0.2390869856, 0.4253338575, -0.0225406811, 0.1771001816, 0.3576569557, 0.0669663846, 0.1895558685, -0.1314771473, 0.1565935761, -0.0968024284, 0.1201882288, -0.0979677886, -0.2962186337, -0.0579133257, 0.1971372515, -0.1178744361, -0....
https://github.com/huggingface/datasets/issues/3960 | Load local dataset error | > Hi! Instead of @nateraw's `image-folder`, I suggest using the newly released `imagefolder` dataset:
>
> ```python
> >>> from datasets import load_dataset
> >>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']}
> >>> ds = load_dataset('imag... | When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder... | 378 | Load local dataset error
When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_da... | [
-0.3380221426, -0.114140518, -0.0211975258, 0.2390869856, 0.4253338575, -0.0225406811, 0.1771001816, 0.3576569557, 0.0669663846, 0.1895558685, -0.1314771473, 0.1565935761, -0.0968024284, 0.1201882288, -0.0979677886, -0.2962186337, -0.0579133257, 0.1971372515, -0.1178744361, -0....
https://github.com/huggingface/datasets/issues/3960 | Load local dataset error | You can pass `ignore_verifications=True` in `load_dataset` to make it fast (to skip checksum verification). I'll add this tip to the docs. | When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder... | 21 | Load local dataset error
When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_da... | [
-0.3380221426, -0.114140518, -0.0211975258, 0.2390869856, 0.4253338575, -0.0225406811, 0.1771001816, 0.3576569557, 0.0669663846, 0.1895558685, -0.1314771473, 0.1565935761, -0.0968024284, 0.1201882288, -0.0979677886, -0.2962186337, -0.0579133257, 0.1971372515, -0.1178744361, -0....
https://github.com/huggingface/datasets/issues/3960 | Load local dataset error | > You can pass `ignore_verifications=True` in `load_dataset` to make it fast (to skip checksum verification). I'll add this tip to the docs.
Thanks! It's worked well. | When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder... | 25 | Load local dataset error
When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_da... | [
-0.3380221426, -0.114140518, -0.0211975258, 0.2390869856, 0.4253338575, -0.0225406811, 0.1771001816, 0.3576569557, 0.0669663846, 0.1895558685, -0.1314771473, 0.1565935761, -0.0968024284, 0.1201882288, -0.0979677886, -0.2962186337, -0.0579133257, 0.1971372515, -0.1178744361, -0....
https://github.com/huggingface/datasets/issues/3960 | Load local dataset error | > You can pass `ignore_verifications=True` in `load_dataset` to make it fast (to skip checksum verification). I'll add this tip to the docs.
I find current `load_dataset` loads ImageNet still slowly, even add `ignore_verifications=True`.
First loading, it costs about 20 min in my servers.
```
real 19m23.023s
use... | When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder... | 83 | Load local dataset error
When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_da... | [
-0.3380221426, -0.114140518, -0.0211975258, 0.2390869856, 0.4253338575, -0.0225406811, 0.1771001816, 0.3576569557, 0.0669663846, 0.1895558685, -0.1314771473, 0.1565935761, -0.0968024284, 0.1201882288, -0.0979677886, -0.2962186337, -0.0579133257, 0.1971372515, -0.1178744361, -0....
https://github.com/huggingface/datasets/issues/3960 | Load local dataset error | And in transformers the [ViT example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py), could you make some changes ? Like the `collect_fn`
```python
def collate_fn(examples):
pixel_values = torch.stack([example["pixel_values"] for example i... | When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder... | 45 | Load local dataset error
When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_da... | [
-0.3380221426, -0.114140518, -0.0211975258, 0.2390869856, 0.4253338575, -0.0225406811, 0.1771001816, 0.3576569557, 0.0669663846, 0.1895558685, -0.1314771473, 0.1565935761, -0.0968024284, 0.1201882288, -0.0979677886, -0.2962186337, -0.0579133257, 0.1971372515, -0.1178744361, -0....
https://github.com/huggingface/datasets/issues/3960 | Load local dataset error | Loading the image files slowly, is it because the multiple processes load files at the same time? | When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder... | 17 | Load local dataset error
When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_da... | [
-0.3380221426, -0.114140518, -0.0211975258, 0.2390869856, 0.4253338575, -0.0225406811, 0.1771001816, 0.3576569557, 0.0669663846, 0.1895558685, -0.1314771473, 0.1565935761, -0.0968024284, 0.1201882288, -0.0979677886, -0.2962186337, -0.0579133257, 0.1971372515, -0.1178744361, -0....
https://github.com/huggingface/datasets/issues/3960 | Load local dataset error | Could you please share the output you get after the second loading? Also, feel free to interrupt (`KeyboardInterrupt`) the process while waiting for it to end and share a traceback to show us where the process hangs.
> And in transformers the [ViT example](https://github.com/huggingface/transformers/blob/main/exa... | When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder... | 125 | Load local dataset error
When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_da... | [
-0.3380221426, -0.114140518, -0.0211975258, 0.2390869856, 0.4253338575, -0.0225406811, 0.1771001816, 0.3576569557, 0.0669663846, 0.1895558685, -0.1314771473, 0.1565935761, -0.0968024284, 0.1201882288, -0.0979677886, -0.2962186337, -0.0579133257, 0.1971372515, -0.1178744361, -0....
https://github.com/huggingface/datasets/issues/3960 | Load local dataset error | > Could you please share the output you get after the second loading? Also, feel free to interrupt (`KeyboardInterrupt`) the process while waiting for it to end and share a traceback to show us where the process hangs.
>
> > And in transformers the [ViT example](https://github.com/huggingface/transformers/blob/main/... | When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder... | 269 | Load local dataset error
When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_da... | [
-0.3380221426, -0.114140518, -0.0211975258, 0.2390869856, 0.4253338575, -0.0225406811, 0.1771001816, 0.3576569557, 0.0669663846, 0.1895558685, -0.1314771473, 0.1565935761, -0.0968024284, 0.1201882288, -0.0979677886, -0.2962186337, -0.0579133257, 0.1971372515, -0.1178744361, -0....
https://github.com/huggingface/datasets/issues/3960 | Load local dataset error | Thanks for rerunning the code to record the output. Is it the `"Resolving data files"` part on your machine that takes a long time to complete, or is it `"Loading cached processed dataset at ..."`? We plan to speed up the latter by splitting bigger Arrow files into smaller ones, but your dataset doesn't seem that big,... | When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder... | 64 | Load local dataset error
When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_da... | [
-0.3380221426, -0.114140518, -0.0211975258, 0.2390869856, 0.4253338575, -0.0225406811, 0.1771001816, 0.3576569557, 0.0669663846, 0.1895558685, -0.1314771473, 0.1565935761, -0.0968024284, 0.1201882288, -0.0979677886, -0.2962186337, -0.0579133257, 0.1971372515, -0.1178744361, -0....
https://github.com/huggingface/datasets/issues/3960 | Load local dataset error | > Thanks for rerunning the code to record the output. Is it the `"Resolving data files"` part on your machine that takes a long time to complete, or is it `"Loading cached processed dataset at ..."`? We plan to speed up the latter by splitting bigger Arrow files into smaller ones, but your dataset doesn't seem that bi... | When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder... | 90 | Load local dataset error
When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_da... | [
-0.3380221426, -0.114140518, -0.0211975258, 0.2390869856, 0.4253338575, -0.0225406811, 0.1771001816, 0.3576569557, 0.0669663846, 0.1895558685, -0.1314771473, 0.1565935761, -0.0968024284, 0.1201882288, -0.0979677886, -0.2962186337, -0.0579133257, 0.1971372515, -0.1178744361, -0....
https://github.com/huggingface/datasets/issues/3959 | Medium-sized dataset conversion from pandas causes a crash | Hi ! It looks like an issue with pyarrow, could you try updating pyarrow and try again ? | Hi, I am suffering from the following issue:
## Describe the bug
Conversion to arrow dataset from pandas dataframe of a certain size deterministically causes the following crash:
```
File "/home/datasets_crash.py", line 7, in <module>
arrow=datasets.Dataset.from_pandas(d)
File "/home/.conda/envs/tools... | 18 | Medium-sized dataset conversion from pandas causes a crash
Hi, I am suffering from the following issue:
## Describe the bug
Conversion to arrow dataset from pandas dataframe of a certain size deterministically causes the following crash:
```
File "/home/datasets_crash.py", line 7, in <module>
arrow=da... | [
-0.3885252178, 0.055236578, 0.0980944559, 0.4302076697, 0.5165883303, 0.0892649516, 0.2436548024, 0.3483604193, -0.1818013042, 0.0568672642, 0.0694642514, 0.4911386073, -0.0399589539, -0.0660407469, -0.0872594267, -0.3226374686, 0.2678995132, 0.0140860081, -0.1913530678, 0.0735...
https://github.com/huggingface/datasets/issues/3956 | TypeError: __init__() missing 1 required positional argument: 'scheme' | Hi @amirj, thanks for reporting.
At first sight, your issue seems a version incompatibility between your Elasticsearch client and your Elasticsearch server.
Feel free to have a look at Elasticsearch client docs: https://www.elastic.co/guide/en/elasticsearch/client/python-api/current/overview.html#_compatibility
... | ## Describe the bug
Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch) the provided code should add Elasticsearch index but raised the following error, probably the new Elasticsearch version is not compatible though the tutorial doesn't provide any information about the supporting El... | 66 | TypeError: __init__() missing 1 required positional argument: 'scheme'
## Describe the bug
Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch) the provided code should add Elasticsearch index but raised the following error, probably the new Elasticsearch version is not compatible th... | [
-0.2016848028, -0.3632028997, -0.0670635998, -0.1154991388, -0.0156478528, 0.2479979247, 0.2341080308, 0.2877423763, 0.0638613626, 0.0936896205, -0.0358452164, 0.3917693198, -0.0215825103, -0.1523345411, 0.1281494349, -0.230913952, 0.1210855693, 0.0874325931, 0.1187864989, -0.0...
https://github.com/huggingface/datasets/issues/3956 | TypeError: __init__() missing 1 required positional argument: 'scheme' | @albertvillanova It doesn't seem a version incompatibility between the client and server, since the following code is working:
```
from elasticsearch import Elasticsearch
es_client = Elasticsearch("http://localhost:9200")
dataset.add_elasticsearch_index(column="e1", es_client=es_client, es_index_name="e1_index")
... | ## Describe the bug
Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch) the provided code should add Elasticsearch index but raised the following error, probably the new Elasticsearch version is not compatible though the tutorial doesn't provide any information about the supporting El... | 30 | TypeError: __init__() missing 1 required positional argument: 'scheme'
## Describe the bug
Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch) the provided code should add Elasticsearch index but raised the following error, probably the new Elasticsearch version is not compatible th... | [
-0.2016848028, -0.3632028997, -0.0670635998, -0.1154991388, -0.0156478528, 0.2479979247, 0.2341080308, 0.2877423763, 0.0638613626, 0.0936896205, -0.0358452164, 0.3917693198, -0.0215825103, -0.1523345411, 0.1281494349, -0.230913952, 0.1210855693, 0.0874325931, 0.1187864989, -0.0...
https://github.com/huggingface/datasets/issues/3956 | TypeError: __init__() missing 1 required positional argument: 'scheme' | Hi @amirj,
I really think it is a version incompatibility issue between your Elasticsearch client and server:
- Your Elasticsearch server NodeConfig expects a positional argument named 'scheme'
- Whereas your Elasticsearch client passes only keyword arguments: `NodeConfig(**options)`
Moreover:
- Looking at yo... | ## Describe the bug
Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch) the provided code should add Elasticsearch index but raised the following error, probably the new Elasticsearch version is not compatible though the tutorial doesn't provide any information about the supporting El... | 125 | TypeError: __init__() missing 1 required positional argument: 'scheme'
## Describe the bug
Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch) the provided code should add Elasticsearch index but raised the following error, probably the new Elasticsearch version is not compatible th... | [
-0.2016848028, -0.3632028997, -0.0670635998, -0.1154991388, -0.0156478528, 0.2479979247, 0.2341080308, 0.2877423763, 0.0638613626, 0.0936896205, -0.0358452164, 0.3917693198, -0.0215825103, -0.1523345411, 0.1281494349, -0.230913952, 0.1210855693, 0.0874325931, 0.1187864989, -0.0...
https://github.com/huggingface/datasets/issues/3956 | TypeError: __init__() missing 1 required positional argument: 'scheme' | ```
from elasticsearch import Elasticsearch
es = Elasticsearch([{'host': 'localhost', 'port': 9200}])
```
```
TypeError Traceback (most recent call last)
<ipython-input-8-675c6ffe5293> in <module>
1 #es = Elasticsearch([{'host':'localhost', 'port':9200}])
2 from elast... | ## Describe the bug
Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch) the provided code should add Elasticsearch index but raised the following error, probably the new Elasticsearch version is not compatible though the tutorial doesn't provide any information about the supporting El... | 179 | TypeError: __init__() missing 1 required positional argument: 'scheme'
## Describe the bug
Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch) the provided code should add Elasticsearch index but raised the following error, probably the new Elasticsearch version is not compatible th... | [
-0.2016848028, -0.3632028997, -0.0670635998, -0.1154991388, -0.0156478528, 0.2479979247, 0.2341080308, 0.2877423763, 0.0638613626, 0.0936896205, -0.0358452164, 0.3917693198, -0.0215825103, -0.1523345411, 0.1281494349, -0.230913952, 0.1210855693, 0.0874325931, 0.1187864989, -0.0...
https://github.com/huggingface/datasets/issues/3956 | TypeError: __init__() missing 1 required positional argument: 'scheme' | @raj713335, thanks for reporting.
Please note that in your code example, you are not using our `datasets` library.
Thus, I think you should report that issue to `elasticsearch` library: https://github.com/elastic/elasticsearch-py
| ## Describe the bug
Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch) the provided code should add Elasticsearch index but raised the following error, probably the new Elasticsearch version is not compatible though the tutorial doesn't provide any information about the supporting El... | 30 | TypeError: __init__() missing 1 required positional argument: 'scheme'
## Describe the bug
Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch) the provided code should add Elasticsearch index but raised the following error, probably the new Elasticsearch version is not compatible th... | [
-0.2016848028, -0.3632028997, -0.0670635998, -0.1154991388, -0.0156478528, 0.2479979247, 0.2341080308, 0.2877423763, 0.0638613626, 0.0936896205, -0.0358452164, 0.3917693198, -0.0215825103, -0.1523345411, 0.1281494349, -0.230913952, 0.1210855693, 0.0874325931, 0.1187864989, -0.0...
https://github.com/huggingface/datasets/issues/3954 | The dataset preview is not available for tdklab/Hebrew_Squad_v1.1 dataset | Hi @MatanBenChorin, thanks for reporting.
Please, take into account that the preview may take some time until it properly renders (we are working to reduce this time).
Maybe @severo can give more details on this. | ## Dataset viewer issue for 'tdklab/Hebrew_Squad_v1.1'
**Link:** https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1.1?full=true
The dataset preview is not available for this dataset.
Am I the one who added this dataset ? Yes | 35 | The dataset preview is not available for tdklab/Hebrew_Squad_v1.1 dataset
## Dataset viewer issue for 'tdklab/Hebrew_Squad_v1.1'
**Link:** https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1.1?full=true
The dataset preview is not available for this dataset.
Am I the one who added this dataset ? Yes
... | [
-0.311417073, -0.4014884233, -0.0658437833, 0.2669587731, 0.0838583782, 0.111631304, 0.2829137444, 0.2667754292, -0.086287193, 0.3120711148, -0.159604758, -0.0146394474, 0.0534691848, 0.1740650982, 0.1613072902, -0.171547398, 0.035972476, 0.1212246642, -0.0750992522, -0.0402280...
https://github.com/huggingface/datasets/issues/3954 | The dataset preview is not available for tdklab/Hebrew_Squad_v1.1 dataset | I imagine the dataset has been moved to https://huggingface.co/datasets/tdklab/Hebrew_Squad_v1, which still has an issue:
```
Server Error
Status code: 400
Exception: NameError
Message: name 'HebrewSquad' is not defined
``` | ## Dataset viewer issue for 'tdklab/Hebrew_Squad_v1.1'
**Link:** https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1.1?full=true
The dataset preview is not available for this dataset.
Am I the one who added this dataset ? Yes | 29 | The dataset preview is not available for tdklab/Hebrew_Squad_v1.1 dataset
## Dataset viewer issue for 'tdklab/Hebrew_Squad_v1.1'
**Link:** https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1.1?full=true
The dataset preview is not available for this dataset.
Am I the one who added this dataset ? Yes
... | [
-0.2194945514, -0.1446341723, -0.073166877, 0.2411449552, 0.1175788715, 0.0581992008, 0.2833602428, 0.245520696, -0.1701327413, 0.3784975111, -0.1162924618, -0.0477871522, -0.0005150448, -0.0234557446, 0.0869764686, -0.0409863181, 0.1532171965, 0.1132068858, -0.0374043807, 0.08...
https://github.com/huggingface/datasets/issues/3954 | The dataset preview is not available for tdklab/Hebrew_Squad_v1.1 dataset | The issue is not related to the dataset viewer but to the loading script (cc @albertvillanova @lhoestq @mariosasko)
```python
>>> import datasets as ds
>>> hf_token = "hf_..." # <- required because the dataset is gated
>>> d = ds.load_dataset('tdklab/Hebrew_Squad_v1', use_auth_token=hf_token)
...
NameError: nam... | ## Dataset viewer issue for 'tdklab/Hebrew_Squad_v1.1'
**Link:** https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1.1?full=true
The dataset preview is not available for this dataset.
Am I the one who added this dataset ? Yes | 49 | The dataset preview is not available for tdklab/Hebrew_Squad_v1.1 dataset
## Dataset viewer issue for 'tdklab/Hebrew_Squad_v1.1'
**Link:** https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1.1?full=true
The dataset preview is not available for this dataset.
Am I the one who added this dataset ? Yes
... | [
-0.4161965251,
-0.1798671931,
-0.0500068367,
0.2247761935,
0.1178972796,
0.1464484036,
0.5444728732,
0.274169147,
0.0705037415,
0.3551025093,
-0.1659320891,
-0.046967227,
-0.0536280423,
0.0518665612,
0.0828287229,
-0.0479781404,
0.1451770365,
0.1372269988,
-0.0814927071,
0.0857... |
https://github.com/huggingface/datasets/issues/3954 | The dataset preview is not available for tdklab/Hebrew_Squad_v1.1 dataset | Yes indeed there is an error in [Hebrew_Squad_v1.py:L40](https://huggingface.co/datasets/tdklab/Hebrew_Squad_v1/blob/main/Hebrew_Squad_v1.py#L40)
Here is the fix @MatanBenChorin :
```diff
- HebrewSquad(
+ HebrewSquadConfig(
``` | ## Dataset viewer issue for 'tdklab/Hebrew_Squad_v1.1'
**Link:** https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1.1?full=true
The dataset preview is not available for this dataset.
Am I the one who added this dataset ? Yes | 20 | The dataset preview is not available for tdklab/Hebrew_Squad_v1.1 dataset
## Dataset viewer issue for 'tdklab/Hebrew_Squad_v1.1'
**Link:** https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1.1?full=true
The dataset preview is not available for this dataset.
Am I the one who added this dataset ? Yes
... | [
-0.4104549289,
-0.3167884648,
-0.0494650342,
0.2509036064,
0.2194038033,
0.1013826877,
0.2151305676,
0.2923053205,
-0.1613927037,
0.3173691928,
-0.1991312355,
-0.0747280121,
0.0283092465,
0.0332928114,
-0.0106849736,
-0.0282152146,
0.0587945022,
0.1388985813,
-0.0157679282,
0.0... |
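The one-line fix quoted in the diff above can be illustrated with a minimal pure-Python sketch. This is a hypothetical reconstruction, not the real loading script: the point is that `BUILDER_CONFIGS` must instantiate the config class that was actually defined (`HebrewSquadConfig`), not the undefined name `HebrewSquad`.

```python
class HebrewSquadConfig:
    """Stand-in for the BuilderConfig subclass the script defines."""
    def __init__(self, name, version):
        self.name = name
        self.version = version

def buggy_builder_configs():
    # Reproduces the reported error: the name "HebrewSquad" was never defined
    return [HebrewSquad(name="Hebrew_Squad_v1", version="1.1.0")]

def fixed_builder_configs():
    # The rename applied in the diff above
    return [HebrewSquadConfig(name="Hebrew_Squad_v1", version="1.1.0")]

try:
    buggy_builder_configs()
    error_name = None
except NameError as err:
    error_name = type(err).__name__  # "NameError", as in the report
```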
https://github.com/huggingface/datasets/issues/3952 | Checksum error for glue sst2, stsb, rte etc datasets | Hi, @ravindra-ut.
I'm sorry but I can't reproduce your problem:
```python
In [1]: from datasets import load_dataset
In [2]: ds = load_dataset("glue", "sst2")
Downloading builder script: 28.8kB [00:00, 11.6MB/s] ... | ## Describe the bug
Checksum error for glue sst2, stsb, rte etc datasets
## Steps to reproduce the bug
```python
>>> nlp.load_dataset('glue', 'sst2')
Downloading and preparing dataset glue/sst2 (download: 7.09 MiB, generated: 4.81 MiB, post-processed: Unknown size, total: 11.90 MiB) to
Downloading: 100%|████████... | 179 | Checksum error for glue sst2, stsb, rte etc datasets
## Describe the bug
Checksum error for glue sst2, stsb, rte etc datasets
## Steps to reproduce the bug
```python
>>> nlp.load_dataset('glue', 'sst2')
Downloading and preparing dataset glue/sst2 (download: 7.09 MiB, generated: 4.81 MiB, post-processed: Unknow... | [
-0.0671884269,
0.1787162572,
-0.0147207007,
0.2622029185,
0.2177488953,
-0.1340348721,
0.1646665037,
0.3809954524,
0.2130874395,
-0.0059030103,
-0.0861569941,
0.0618883483,
0.0476297215,
0.2141877562,
0.0418718122,
-0.0405455232,
-0.0152418138,
-0.0524373092,
-0.2798446715,
0.1... |
https://github.com/huggingface/datasets/issues/3951 | Forked streaming datasets try to `open` data urls rather than use network | Thanks for reporting this second issue as well. We definitely want to make streaming datasets fully working in a distributed setup and with the best performance. Right now it only supports single process.
In this issue it seems that the streaming capabilities that we offer to dataset builders are not transferred to ... | ## Describe the bug
Building on #3950, if you bypass the pickling problem you still can't use the dataset. Somehow something gets confused and the forked processes try to `open` urls rather than anything else.
## Steps to reproduce the bug
```python
from multiprocessing import freeze_support
import transformer... | 95 | Forked streaming datasets try to `open` data urls rather than use network
## Describe the bug
Building on #3950, if you bypass the pickling problem you still can't use the dataset. Somehow something gets confused and the forked processes try to `open` urls rather than anything else.
## Steps to reproduce the bug
... | [
-0.2870965898,
-0.5631282926,
0.0720065162,
0.2570628524,
0.0303519052,
-0.1619096547,
0.2698484361,
0.1394547522,
-0.2965719104,
0.1078559384,
-0.1819486618,
0.3537836969,
-0.1617898494,
-0.164617002,
-0.0302211326,
-0.2005798072,
-0.1129440665,
-0.3893051744,
-0.0884459093,
0... |
https://github.com/huggingface/datasets/issues/3950 | Streaming Datasets don't work with Transformers Trainer when dataloader_num_workers>1 | Hi, thanks for reporting. This could be related to https://github.com/huggingface/datasets/issues/3148 too
We should definitely make `TorchIterableDataset` picklable by moving it into the main code instead of inside a function. If you'd like to contribute, feel free to open a Pull Request :)
I'm also taking a look ... | ## Describe the bug
Streaming Datasets can't be pickled, so any interaction between them and multiprocessing results in a crash.
## Steps to reproduce the bug
```python
import transformers
from transformers import Trainer, AutoModelForCausalLM, TrainingArguments
import datasets
ds = datasets.load_dataset('os... | 55 | Streaming Datasets don't work with Transformers Trainer when dataloader_num_workers>1
## Describe the bug
Streaming Datasets can't be pickled, so any interaction between them and multiprocessing results in a crash.
## Steps to reproduce the bug
```python
import transformers
from transformers import Trainer, Au... | [
-0.4724807739,
-0.3094196022,
0.0654424131,
0.358529985,
0.2515180707,
-0.0702164546,
0.3671468794,
0.1715020239,
0.1068321839,
0.2775303721,
0.0359959863,
0.1637874395,
-0.5519855618,
0.046610076,
-0.0808157176,
-0.3613413274,
0.0512008332,
-0.0016889275,
-0.1369641274,
0.1102... |
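The pickling constraint discussed in the comment above can be shown with a small plain-Python sketch (no `datasets` or `torch` involved): multiprocessing workers pickle the dataset object, and a class defined inside a function body cannot be pickled by reference, while a module-level class can.

```python
import pickle

class ModuleLevelDataset:
    """Picklable: importable under its qualified name."""
    def __iter__(self):
        return iter(range(3))

def make_local_dataset():
    class LocalDataset:
        """NOT picklable: exists only inside this function's scope."""
        def __iter__(self):
            return iter(range(3))
    return LocalDataset()

# Round-trips fine, as dataloader_num_workers > 1 requires:
restored = pickle.loads(pickle.dumps(ModuleLevelDataset()))

# Fails the same way a dataset class defined inside a function would:
try:
    pickle.dumps(make_local_dataset())
    pickling_failed = False
except (pickle.PicklingError, AttributeError):
    pickling_failed = True
```

This is why moving the class to module level makes it usable with multiple dataloader workers.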
https://github.com/huggingface/datasets/issues/3942 | reddit_tifu dataset: Checksums didn't match for dataset source files | Hi @XingxingZhang,
We have already fixed this. You should update `datasets` version to at least 1.18.4:
```shell
pip install -U datasets
```
And then force the redownload:
```python
load_dataset("...", download_mode="force_redownload")
```
Duplicate of:
- #3773 | ## Describe the bug
When loading the reddit_tifu dataset, it throws the exception "Checksums didn't match for dataset source files"
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
print(datasets.__version__)
# load_dataset('billsum')
load_dataset('reddit_tifu'... | 35 | reddit_tifu dataset: Checksums didn't match for dataset source files
## Describe the bug
When loading the reddit_tifu dataset, it throws the exception "Checksums didn't match for dataset source files"
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
print(datasets.... | [
-0.3985462189,
0.1268263757,
-0.0900790319,
0.1531643271,
0.4119483531,
-0.0722520053,
0.1358962655,
0.4609467089,
-0.0700394362,
-0.0869091526,
-0.1713992208,
0.4725554585,
0.2960699201,
0.2216589451,
0.0006657672,
0.1481342018,
0.1132510602,
0.123813048,
-0.2147900909,
-0.103... |
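A toy sketch (hypothetical helper functions, not the real `datasets` API) of why the checksum error in the record above appears and why `download_mode="force_redownload"` clears it: the loader keeps serving a cached copy whose checksum no longer matches the file now hosted remotely.

```python
_cache = {}

def load(url, remote_content, download_mode="reuse_cache"):
    """Toy caching downloader: only re-fetches when forced."""
    if download_mode == "force_redownload" or url not in _cache:
        _cache[url] = remote_content  # pretend this is a fresh download
    return _cache[url]

def verify_checksums(expected, data):
    if expected != data:
        raise ValueError("Checksums didn't match for dataset source files")

# First download caches the old copy; the host later replaces the file.
load("https://example.com/reddit_tifu.zip", "old bytes")
stale = load("https://example.com/reddit_tifu.zip", "new bytes")
fresh = load("https://example.com/reddit_tifu.zip", "new bytes",
             download_mode="force_redownload")

try:
    verify_checksums("new bytes", stale)  # the failure users hit
    checksum_ok = True
except ValueError:
    checksum_ok = False
```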
https://github.com/huggingface/datasets/issues/3942 | reddit_tifu dataset: Checksums didn't match for dataset source files | thanks @albertvillanova . by upgrading to 1.18.4 and using `load_dataset("...", download_mode="force_redownload")` fixed
the bug.
Using the following, as you suggested in another thread, can also fix the bug:
```
pip install git+https://github.com/huggingface/datasets#egg=datasets
```
| ## Describe the bug
When loading the reddit_tifu dataset, it throws the exception "Checksums didn't match for dataset source files"
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
print(datasets.__version__)
# load_dataset('billsum')
load_dataset('reddit_tifu'... | 33 | reddit_tifu dataset: Checksums didn't match for dataset source files
## Describe the bug
When loading the reddit_tifu dataset, it throws the exception "Checksums didn't match for dataset source files"
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
print(datasets.... | [
-0.42410779,
0.1216537654,
-0.0851321667,
0.1159741431,
0.4104367197,
-0.1015880704,
0.1616488397,
0.4821438193,
-0.0676112175,
-0.083158277,
-0.1715805084,
0.4847015142,
0.2961728573,
0.2609198093,
-0.0052189589,
0.1425726861,
0.0813889354,
0.0942178369,
-0.1835359782,
-0.1239... |
https://github.com/huggingface/datasets/issues/3942 | reddit_tifu dataset: Checksums didn't match for dataset source files | The latter solution (installing from GitHub) was proposed because the fix was not released yet. But last week we made the 1.18.4 patch release (with the fix), so it is no longer necessary to install from GitHub.
You can now install from PyPI, as usual:
```shell
pip install -U datasets
```
| ## Describe the bug
When loading the reddit_tifu dataset, it throws the exception "Checksums didn't match for dataset source files"
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
print(datasets.__version__)
# load_dataset('billsum')
load_dataset('reddit_tifu'... | 49 | reddit_tifu dataset: Checksums didn't match for dataset source files
## Describe the bug
When loading the reddit_tifu dataset, it throws the exception "Checksums didn't match for dataset source files"
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
print(datasets.... | [
-0.3972456455,
0.1158496067,
-0.0944008157,
0.0743378922,
0.374297291,
-0.1016562209,
0.1341255009,
0.50539428,
-0.1505983323,
-0.0394883007,
-0.1530265063,
0.4913251102,
0.306778878,
0.2183291018,
-0.0091936588,
0.1344510317,
0.1148079932,
0.1060604081,
-0.1334546655,
-0.05729... |
https://github.com/huggingface/datasets/issues/3941 | billsum dataset: Checksums didn't match for dataset source files: | Hi @XingxingZhang, thanks for reporting.
This was due to a change in Google Drive service:
- #3786
We have already fixed it:
- #3787
You should update `datasets` version to at least 1.18.4:
```shell
pip install -U datasets
```
And then force the redownload:
```python
load_dataset("...", download_mode=... | ## Describe the bug
When loading the `billsum` dataset, it throws the exception "Checksums didn't match for dataset source files"
```
File "virtualenv_projects/codex/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_u... | 48 | billsum dataset: Checksums didn't match for dataset source files:
## Describe the bug
When loading the `billsum` dataset, it throws the exception "Checksums didn't match for dataset source files"
```
File "virtualenv_projects/codex/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 40, in verify... | [
-0.4439739883,
0.3979526758,
-0.1034803316,
0.1886253357,
0.2277507037,
0.0679107383,
0.2159261554,
0.2985672951,
0.1642131805,
0.1222989783,
0.0958597139,
0.0360738449,
0.1474190056,
0.2202241421,
-0.0004790273,
0.177003637,
0.0542054698,
-0.0288488287,
-0.0223006923,
-0.03526... |
https://github.com/huggingface/datasets/issues/3939 | Source links broken | Thanks for reporting @qqaatw.
@mishig25 @sgugger do you think this can be tweaked in the new doc framework?
- From: https://github.com/huggingface/datasets/blob/v2.0.0/
- To: https://github.com/huggingface/datasets/blob/2.0.0/ | ## Describe the bug
The source links of v2.0.0 docs are broken:
For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0/src/datasets/features... | 24 | Source links broken
## Describe the bug
The source links of v2.0.0 docs are broken:
For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0... | [
-0.0187268835,
-0.0063733449,
0.0506374426,
-0.0155852269,
0.1034536809,
0.0018571449,
0.0943081975,
0.4163028896,
-0.3947233856,
-0.0836842582,
-0.0372577347,
0.3179531395,
-0.1664931178,
0.2242848426,
0.3080801368,
-0.0525296144,
0.0487335697,
0.2238049805,
-0.2547570765,
-0.... |
https://github.com/huggingface/datasets/issues/3939 | Source links broken | @qqaatw thanks a lot for notifying about this issue!
in comparison, transformers tags start with `v` like [this one](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/models/bert/configuration_bert.py#L54).
Therefore, we have to do one of 2 options below:
1. Make necessary changes on doc-... | ## Describe the bug
The source links of v2.0.0 docs are broken:
For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0/src/datasets/features... | 64 | Source links broken
## Describe the bug
The source links of v2.0.0 docs are broken:
For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0... | [
0.0065730982,
-0.0883272514,
0.0552236997,
-0.0077055711,
0.2573395669,
-0.0714105517,
0.1945634633,
0.3197168112,
-0.3424007595,
-0.1349500567,
-0.1559810191,
0.2802294493,
-0.0560551584,
0.1167269498,
0.2147251964,
-0.2155793309,
0.0649411231,
0.2920233309,
-0.252859056,
-0.1... |
https://github.com/huggingface/datasets/issues/3939 | Source links broken | I think option 2 is the easiest and would provide harmony in the HF ecosystem but we can also add a doc config parameter to decide whether the default version has a v or not if `datasets` folks prefer their tags without a v :-) | ## Describe the bug
The source links of v2.0.0 docs are broken:
For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0/src/datasets/features... | 45 | Source links broken
## Describe the bug
The source links of v2.0.0 docs are broken:
For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0... | [
-0.0938097313,
-0.0260814503,
0.0604206808,
-0.0434884988,
0.1273476481,
0.0210941676,
0.0481097512,
0.4152699113,
-0.2235424668,
-0.0596477427,
-0.1113163456,
0.2980242074,
-0.0862368792,
0.2317366898,
0.230434373,
-0.0245843213,
0.0244480316,
0.2727003694,
-0.2373473942,
-0.1... |
https://github.com/huggingface/datasets/issues/3939 | Source links broken | For me it is OK to conform to the rest of libraries and tag/release with a preceding "v", rather than adding an extra argument to the doc builder just for `datasets`.
Let me know if it is also OK for you @lhoestq. | ## Describe the bug
The source links of v2.0.0 docs are broken:
For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0/src/datasets/features... | 42 | Source links broken
## Describe the bug
The source links of v2.0.0 docs are broken:
For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0... | [
-0.0685728416,
-0.0409824587,
0.0370725356,
0.0277262218,
0.1149328724,
-0.0037942776,
0.0967352986,
0.3829962015,
-0.2840083241,
-0.089750126,
0.0138481259,
0.4365793169,
-0.169669956,
0.20860897,
0.2725096643,
-0.0154741993,
0.0271024127,
0.261585623,
-0.2352999151,
-0.055762... |
https://github.com/huggingface/datasets/issues/3939 | Source links broken | We could add a tag for each release without a 'v' but it could be confusing on github to see both tags `v2.0.0` and `2.0.0` IMO (not sure if many users check them though). Removing the tags without 'v' would break our versioning for github datasets: the library looks for dataset scripts at the URLs like `https://raw.gi... | ## Describe the bug
The source links of v2.0.0 docs are broken:
For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0/src/datasets/features... | 111 | Source links broken
## Describe the bug
The source links of v2.0.0 docs are broken:
For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0... | [
-0.0542220399,
0.0136880381,
0.0050492631,
-0.0114311473,
0.1155414954,
-0.0269933287,
0.0938591138,
0.4597062767,
-0.3011184633,
-0.0428775176,
0.0093450099,
0.3782211244,
-0.1852709353,
0.1949326545,
0.1850502491,
0.0363589078,
-0.0211607851,
0.2246724367,
-0.1446908563,
-0.0... |
https://github.com/huggingface/datasets/issues/3939 | Source links broken | This is done in #3943 to go along with [doc-builder#146](https://github.com/huggingface/doc-builder/pull/146).
Note that this will only work for future versions, so once those two are merged, the actual v2.0.0 doc should be fixed. The easiest is to cherry-pick this commit on the v2.0.0 release branch (or on a new br... | ## Describe the bug
The source links of v2.0.0 docs are broken:
For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0/src/datasets/features... | 66 | Source links broken
## Describe the bug
The source links of v2.0.0 docs are broken:
For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0... | [
-0.1071432009,
-0.0372906923,
0.0477760844,
0.0057619638,
0.11596407,
0.003080958,
0.1252291948,
0.4230196774,
-0.3148473799,
-0.0263503212,
0.0007161488,
0.3678087294,
-0.098670736,
0.256093055,
0.2104718983,
-0.0194894876,
-0.0013512634,
0.2469568551,
-0.2531590164,
-0.032865... |
https://github.com/huggingface/datasets/issues/3937 | Missing languages in lvwerra/github-code dataset | That seems to be an oversight - I originally planned to include them in the dataset and for some reason they were in the list of languages but not in the query. Since there is an issue with the deduplication step I'll rerun the pipeline anyway and will double check the query.
Thanks for reporting this @Eytan-S! | Hi,
I'm working with the github-code dataset. First of all, thank you for creating this amazing dataset!
I've noticed that two languages are missing from the dataset: TypeScript and Scala.
Looks like they're also omitted from the query you used to get the original code.
Are there any plans to add them in the fut... | 57 | Missing languages in lvwerra/github-code dataset
Hi,
I'm working with the github-code dataset. First of all, thank you for creating this amazing dataset!
I've noticed that two languages are missing from the dataset: TypeScript and Scala.
Looks like they're also omitted from the query you used to get the origina... | [
0.0285869204,
0.0836845189,
-0.2411945611,
0.1160972565,
0.1939319819,
0.0751111731,
0.1134600863,
0.3045022488,
0.2181033641,
-0.0445787832,
0.0106749646,
0.3104630113,
-0.0661668852,
0.1592538059,
0.3111087084,
-0.023108393,
0.1431613564,
-0.0828574225,
-0.0986675397,
-0.3723... |
https://github.com/huggingface/datasets/issues/3937 | Missing languages in lvwerra/github-code dataset | Can confirm that the two languages are indeed missing from the dataset. Here are the file counts per language:
```Python
{'Assembly': 82847,
'Batchfile': 236755,
'C': 14127969,
'C#': 6793439,
'C++': 7368473,
'CMake': 175076,
'CSS': 1733625,
'Dockerfile': 331966,
'FORTRAN': 141963,
'GO': 2259363,
'... | Hi,
I'm working with the github-code dataset. First of all, thank you for creating this amazing dataset!
I've noticed that two languages are missing from the dataset: TypeScript and Scala.
Looks like they're also omitted from the query you used to get the original code.
Are there any plans to add them in the fut... | 82 | Missing languages in lvwerra/github-code dataset
Hi,
I'm working with the github-code dataset. First of all, thank you for creating this amazing dataset!
I've noticed that two languages are missing from the dataset: TypeScript and Scala.
Looks like they're also omitted from the query you used to get the origina... | [
-0.0256797895,
0.1923888624,
-0.2729781866,
0.0706949532,
0.2020146698,
0.0939292908,
0.1608111113,
0.4261849523,
0.1612200588,
0.0164098348,
0.0299240779,
0.1567168534,
-0.1640651971,
0.1871638298,
0.3560539484,
0.0112820817,
0.1720342934,
-0.036380928,
-0.1442355067,
-0.36239... |
https://github.com/huggingface/datasets/issues/3937 | Missing languages in lvwerra/github-code dataset | @Eytan-S check out v1.1 of the `github-code` dataset where issue should be fixed:
| | Language |File Count| Size (GB)|
|---:|:-------------|---------:|-------:|
| 0 | Java | 19548190 | 107.7 |
| 1 | C | 14143113 | 183.83 |
| 2 | JavaScript | 11839883 | 87.82 |
| 3 | HTML ... | Hi,
I'm working with the github-code dataset. First of all, thank you for creating this amazing dataset!
I've noticed that two languages are missing from the dataset: TypeScript and Scala.
Looks like they're also omitted from the query you used to get the original code.
Are there any plans to add them in the fut... | 292 | Missing languages in lvwerra/github-code dataset
Hi,
I'm working with the github-code dataset. First of all, thank you for creating this amazing dataset!
I've noticed that two languages are missing from the dataset: TypeScript and Scala.
Looks like they're also omitted from the query you used to get the origina... | [
-0.1165022776,
0.1757313758,
-0.2867789268,
0.0446584895,
0.2342687994,
0.069733493,
0.2107684612,
0.4876596332,
0.0990776345,
-0.0302917473,
0.0104146255,
0.1350830644,
-0.1065374613,
0.22048904,
0.3461066186,
0.025111841,
0.1399248093,
-0.0877739266,
-0.1495599896,
-0.3193649... |
https://github.com/huggingface/datasets/issues/3929 | Load a local dataset twice | Hi @caush, thanks for reporting:
In order to load local CSV files, you can use our "csv" loading script: https://huggingface.co/docs/datasets/loading#csv
```python
dataset = load_dataset("csv", data_files=["data/file1.csv", "data/file2.csv"])
```
OR:
```python
dataset = load_dataset("csv", data_dir="data")
``` | ## Describe the bug
Load a local "dataset" composed of two csv files twice.
## Steps to reproduce the bug
Put the two joined files in a repository named "Data".
Then in python:
import datasets as ds
ds.load_dataset('Data', data_files = {'file1.csv', 'file2.csv'})
## Expected results
Should give something ... | 43 | Load a local dataset twice
## Describe the bug
Load a local "dataset" composed of two csv files twice.
## Steps to reproduce the bug
Put the two joined files in a repository named "Data".
Then in python:
import datasets as ds
ds.load_dataset('Data', data_files = {'file1.csv', 'file2.csv'})
## Expected re... | [
-0.0047100154,
-0.2396460921,
-0.0901845768,
0.2367399335,
0.1813364476,
-0.0256923791,
0.3377524018,
0.313740164,
0.3259289563,
0.4739158154,
-0.1353584379,
0.0657346621,
0.2889901996,
0.0849127471,
-0.1292005032,
-0.0246547479,
0.1108900756,
0.3750531077,
-0.4979865849,
-0.12... |
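The `data_files` behaviour described in the comment of the record above — all listed CSV files are read into one split rather than loaded twice — can be sketched with the standard library alone. The `load_csv_rows` helper below is hypothetical, not the `datasets` implementation:

```python
import csv
import os
import tempfile

def load_csv_rows(paths):
    """Concatenate the rows of every CSV file into a single list (one split)."""
    rows = []
    for path in paths:
        with open(path, newline="") as f:
            rows.extend(csv.DictReader(f))
    return rows

# Build two tiny CSV files in a throwaway directory.
tmp = tempfile.mkdtemp()
for name, body in [("file1.csv", "id,text\n1,a\n2,b\n"),
                   ("file2.csv", "id,text\n3,c\n")]:
    with open(os.path.join(tmp, name), "w") as f:
        f.write(body)

rows = load_csv_rows([os.path.join(tmp, "file1.csv"),
                      os.path.join(tmp, "file2.csv")])
```

The three rows appear once each, which mirrors how the two files form a single train split instead of being loaded twice.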
https://github.com/huggingface/datasets/issues/3928 | Frugal score deprecations | Hi @Ierezell, thanks for reporting.
I'm making a PR to suppress those logs from the terminal. | ## Describe the bug
The frugal score returns a really verbose output with warnings that can be easily changed.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets.load import load_metric
frugal = load_metric("frugalscore")
frugal.compute(predictions=["Do you like spinach... | 16 | Frugal score deprecations
## Describe the bug
The frugal score returns a really verbose output with warnings that can be easily changed.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets.load import load_metric
frugal = load_metric("frugalscore")
frugal.compute(predi... | [
-0.3378298283,
-0.1535772979,
-0.039447505,
0.1039689332,
0.4437429905,
-0.1888691187,
-0.1592685282,
0.1735536307,
-0.0201473292,
0.2025120109,
-0.0626748055,
0.4346991181,
-0.1748408526,
-0.034357097,
-0.2254187465,
-0.0228458047,
0.0702437311,
0.0544133298,
-0.2488450855,
-0... |
https://github.com/huggingface/datasets/issues/3920 | 'datasets.features' is not a package | Hi @Arij-Aladel,
You are using a very old version of our library `datasets`: 1.8.0
Current version is 2.0.0 (and the previous one was 1.18.4)
Please, try to update `datasets` library and check if the problem persists:
```shell
/env/bin/pip install -U datasets | @albertvillanova
python 3.9
os: ubuntu 20.04
In conda environment
torch installed by
```/env/bin/pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html```
datasets package is installed by
```
/env/bin/pip install datasets==1.8.... | 41 | 'datasets.features' is not a package
@albertvillanova
python 3.9
os: ubuntu 20.04
In conda environment
torch installed by
```/env/bin/pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html```
datasets package is installed by
``... | [
-0.3741754293,
-0.2259198725,
-0.0957689807,
0.3859054446,
0.3267723024,
0.1350313425,
0.2670208216,
0.2414320111,
-0.3361455202,
-0.1415681392,
0.2684734166,
0.5509799123,
-0.2597140968,
-0.1125493869,
-0.0667067915,
-0.2506271899,
0.191847384,
0.0886706337,
-0.175399527,
0.05... |
https://github.com/huggingface/datasets/issues/3920 | 'datasets.features' is not a package | The problem is I cannot: I have built my project on this version and an old version of transformers. I have preprocessed the data again to use it. Thanks for your reply | @albertvillanova
python 3.9
os: ubuntu 20.04
In conda environment
torch installed by
```/env/bin/pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html```
datasets package is installed by
```
/env/bin/pip install datasets==1.8.... | 31 | 'datasets.features' is not a package
@albertvillanova
python 3.9
os: ubuntu 20.04
In conda environment
torch installed by
```/env/bin/pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html```
datasets package is installed by
``... | [
-0.3741754293,
-0.2259198725,
-0.0957689807,
0.3859054446,
0.3267723024,
0.1350313425,
0.2670208216,
0.2414320111,
-0.3361455202,
-0.1415681392,
0.2684734166,
0.5509799123,
-0.2597140968,
-0.1125493869,
-0.0667067915,
-0.2506271899,
0.191847384,
0.0886706337,
-0.175399527,
0.05... |
https://github.com/huggingface/datasets/issues/3919 | AttributeError: 'DatasetDict' object has no attribute 'features' | You are likely trying to get the `features` from a `DatasetDict`, a dictionary containing `Datasets`. You probably first want to index into a particular split from your `DatasetDict` i.e. `dataset['train'].features`.
For example
```python
ds = load_dataset('mnist')
ds.features
```
Returns
```python
----... | ## Describe the bug
Receiving the error when trying to check for Dataset features
## Steps to reproduce the bug
from datasets import Dataset
dataset = Dataset.from_pandas(df[['id', 'words', 'bboxes', 'ner_tags', 'image_path']])
dataset.features
## Expected results
A clear and concise description of the exp... | 129 | AttributeError: 'DatasetDict' object has no attribute 'features'
## Describe the bug
Receiving the error when trying to check for Dataset features
## Steps to reproduce the bug
from datasets import Dataset
dataset = Dataset.from_pandas(df[['id', 'words', 'bboxes', 'ner_tags', 'image_path']])
dataset.features... | [
-0.0139543787,
-0.0352718271,
-0.1028816327,
0.3369745016,
0.327742517,
0.0226520803,
0.3033012152,
0.4544644952,
0.1717875451,
0.1745668203,
0.069192715,
0.4477998316,
-0.2436327636,
0.1366270185,
-0.1275878996,
-0.2425020486,
-0.0294217896,
0.1705446392,
0.0074410383,
-0.0669... |
https://github.com/huggingface/datasets/issues/3918 | datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files | Hi @willowdong! These issues were fixed on master. We will have a new release of `datasets` later today. In the meantime, you can avoid these issues by installing `datasets` from master as follows:
```bash
pip install git+https://github.com/huggingface/datasets.git
``` | ## Describe the bug
Can't load the dataset
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset('multi_news')
dataset_2=load_dataset("reddit_tifu", "long")
## Actual results
raise NonMatchingChecksumError(error_msg + s... | 38 | datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files
## Describe the bug
Can't load the dataset
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset('multi_news')
dataset_2=loa... | [
-0.3251983821,
0.1044453904,
-0.120897755,
0.1940419525,
0.2922667563,
0.0498715974,
0.121732384,
0.3344367743,
0.0943165869,
0.1404138952,
-0.1278532594,
0.2029476762,
0.0496361442,
0.0912993625,
-0.2518990636,
0.2652890086,
0.0160791352,
-0.0456767157,
-0.1795181334,
-0.02387... |
https://github.com/huggingface/datasets/issues/3909 | Error loading file audio when downloading the Common Voice dataset directly from the Hub | Hi ! It could be an issue with torchaudio, which version of torchaudio are you using ? Can you also try updating `datasets` to 2.0.0 and see if it works ?
When loading the Common_Voice dataset, by downloading it directly from the Hugging Face hub, some files can not be opened.
## Steps to reproduce the bug
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2... | 30 | Error loading file audio when downloading the Common Voice dataset directly from the Hub
## Describe the bug
When loading the Common_Voice dataset, by downloading it directly from the Hugging Face hub, some files can not be opened.
## Steps to reproduce the bug
```python
import torch
import torchaudio
from d... | [
-0.2770319879,
-0.5579100847,
0.0958970264,
0.4342247546,
0.0972786099,
0.0279113352,
0.1583617181,
0.3387337327,
0.2213581651,
0.1552248895,
-0.6184939742,
0.183975175,
-0.0529414192,
-0.3478996456,
-0.0592965446,
-0.1135332063,
0.0520580001,
0.1278967559,
0.006890594,
-0.2776... |
https://github.com/huggingface/datasets/issues/3909 | Error loading file audio when downloading the Common Voice dataset directly from the Hub | I _might_ have a similar issue. I'm trying to use the librispeech_asr dataset and read it with soundfile.
```python
from datasets import load_dataset, load_metric
from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor
import soundfile as sf
librispeech_eval = load_dataset("librispee... | ## Describe the bug
When loading the Common_Voice dataset, by downloading it directly from the Hugging Face hub, some files can not be opened.
## Steps to reproduce the bug
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2... | 854 | Error loading file audio when downloading the Common Voice dataset directly from the Hub
## Describe the bug
When loading the Common_Voice dataset, by downloading it directly from the Hugging Face hub, some files can not be opened.
## Steps to reproduce the bug
```python
import torch
import torchaudio
from d... | [
-0.2770319879,
-0.5579100847,
0.0958970264,
0.4342247546,
0.0972786099,
0.0279113352,
0.1583617181,
0.3387337327,
0.2213581651,
0.1552248895,
-0.6184939742,
0.183975175,
-0.0529414192,
-0.3478996456,
-0.0592965446,
-0.1135332063,
0.0520580001,
0.1278967559,
0.006890594,
-0.2776... |
https://github.com/huggingface/datasets/issues/3909 | Error loading file audio when downloading the Common Voice dataset directly from the Hub | Hi ! In `datasets` 2.0 you can access the audio array with `librispeech_eval[0]["audio"]["array"]` already, no need to use `map_to_array`. See our documentation on [how to process audio data](https://huggingface.co/docs/datasets/audio_process) :)
cc @patrickvonplaten we will need to update the readme at [facebook/s2t-sm... | ## Describe the bug
When loading the Common_Voice dataset, by downloading it directly from the Hugging Face hub, some files can not be opened.
## Steps to reproduce the bug
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2... | 43 | Error loading file audio when downloading the Common Voice dataset directly from the Hub
## Describe the bug
When loading the Common_Voice dataset, by downloading it directly from the Hugging Face hub, some files can not be opened.
## Steps to reproduce the bug
```python
import torch
import torchaudio
from d... | [
-0.2770319879,
-0.5579100847,
0.0958970264,
0.4342247546,
0.0972786099,
0.0279113352,
0.1583617181,
0.3387337327,
0.2213581651,
0.1552248895,
-0.6184939742,
0.183975175,
-0.0529414192,
-0.3478996456,
-0.0592965446,
-0.1135332063,
0.0520580001,
0.1278967559,
0.006890594,
-0.2776... |
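The `ds[0]["audio"]["array"]` access mentioned above works because the Audio feature decodes files on access rather than up front; a toy, library-free sketch of that decode-on-access pattern (the class and the `fake_decode` reader are invented for illustration):

```python
class LazyAudioColumn:
    """Toy decode-on-access column: keeps file paths, decodes when indexed."""

    def __init__(self, paths, decode):
        self.paths = paths
        self.decode = decode          # e.g. a soundfile/librosa-based reader
        self.decoded_calls = 0

    def __getitem__(self, i):
        self.decoded_calls += 1       # decoding happens only here
        return self.decode(self.paths[i])


def fake_decode(path):
    # invented stand-in for an audio reader returning the datasets-style dict
    return {"path": path, "array": [0.0, 0.1, -0.1], "sampling_rate": 16000}


col = LazyAudioColumn(["a.wav", "b.wav"], fake_decode)
print(col.decoded_calls)              # nothing decoded yet
print(col[0]["sampling_rate"])        # decoding triggered by the access
```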
https://github.com/huggingface/datasets/issues/3909 | Error loading file audio when downloading the Common Voice dataset directly from the Hub | Thanks!
And sorry for posting this problem in what turned on to be an unrelated thread.
I rewrote the code, and the model works. The WER is 0.137 however, so I'm not sure if I have missed a step. I will look further into that at a later point. The transcriptions look good through manual inspection.
The rewritt... | ## Describe the bug
When loading the Common_Voice dataset, by downloading it directly from the Hugging Face hub, some files can not be opened.
## Steps to reproduce the bug
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2... | 130 | Error loading file audio when downloading the Common Voice dataset directly from the Hub
## Describe the bug
When loading the Common_Voice dataset, by downloading it directly from the Hugging Face hub, some files can not be opened.
## Steps to reproduce the bug
```python
import torch
import torchaudio
from d... | [
-0.2770319879,
-0.5579100847,
0.0958970264,
0.4342247546,
0.0972786099,
0.0279113352,
0.1583617181,
0.3387337327,
0.2213581651,
0.1552248895,
-0.6184939742,
0.183975175,
-0.0529414192,
-0.3478996456,
-0.0592965446,
-0.1135332063,
0.0520580001,
0.1278967559,
0.006890594,
-0.2776... |
https://github.com/huggingface/datasets/issues/3909 | Error loading file audio when downloading the Common Voice dataset directly from the Hub | I think the issue comes from the fact that you set `batched=False` while `map_to_pred` still returns a list of strings for "transcription". You can fix it by adding `[0]` at the end of this line to get the string:
```python
batch["transcription"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)[0]
``` | ## Describe the bug
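The pitfall generalizes: `batch_decode`-style helpers return a *list* of strings, so with `batched=False` each example would otherwise receive a one-element list instead of a string. A minimal, library-free sketch of the difference (the `fake_batch_decode` helper is invented for illustration):

```python
def fake_batch_decode(token_batches):
    # invented stand-in for processor.batch_decode:
    # it ALWAYS returns a list with one string per input sequence
    return ["-".join(str(t) for t in tokens) for tokens in token_batches]

gen_tokens = [7, 8, 9]                          # tokens for a single example
as_list = fake_batch_decode([gen_tokens])       # a one-element list
as_str = fake_batch_decode([gen_tokens])[0]     # the string itself

print(type(as_list).__name__)  # list
print(type(as_str).__name__)   # str
```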
When loading the Common_Voice dataset, by downloading it directly from the Hugging Face hub, some files can not be opened.
## Steps to reproduce the bug
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2... | 45 | Error loading file audio when downloading the Common Voice dataset directly from the Hub
## Describe the bug
When loading the Common_Voice dataset, by downloading it directly from the Hugging Face hub, some files can not be opened.
## Steps to reproduce the bug
```python
import torch
import torchaudio
from d... | [
-0.2770319879,
-0.5579100847,
0.0958970264,
0.4342247546,
0.0972786099,
0.0279113352,
0.1583617181,
0.3387337327,
0.2213581651,
0.1552248895,
-0.6184939742,
0.183975175,
-0.0529414192,
-0.3478996456,
-0.0592965446,
-0.1135332063,
0.0520580001,
0.1278967559,
0.006890594,
-0.2776... |
https://github.com/huggingface/datasets/issues/3906 | NonMatchingChecksumError on Spider dataset | Hi @kolk, thanks for reporting.
Indeed, Google Drive recently changed its service and we had to add a fix to our library to cope with that change:
- #3787
We just made a patch release last week: 1.18.4 https://github.com/huggingface/datasets/releases/tag/1.18.4
Please, feel free to update your local ... | ## Describe the bug
Failure to generate dataset ```spider``` because of checksums error for dataset source files.
## Steps to reproduce the bug
```
from datasets import load_dataset
spider = load_dataset("spider")
```
## Expected results
Checksums should match for files from url ['https://drive.google.com... | 60 | NonMatchingChecksumError on Spider dataset
## Describe the bug
Failure to generate dataset ```spider``` because of checksums error for dataset source files.
## Steps to reproduce the bug
```
from datasets import load_dataset
spider = load_dataset("spider")
```
## Expected results
Checksums should match... | [
-0.1679656208,
0.0254893191,
-0.0863610655,
0.2954742312,
0.3215496838,
-0.1166106611,
0.1364505142,
0.4342504144,
0.1861151457,
0.2588539124,
-0.2660089433,
0.0177099686,
-0.0366365649,
0.100053817,
-0.1914368868,
0.1708941758,
-0.0017418487,
0.0505828373,
-0.4116188586,
-0.11... |
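The `NonMatchingChecksumError` above boils down to a recorded hash no longer matching the freshly downloaded bytes (as happens when an upstream Google Drive file changes); a stdlib sketch of that check, with invented payloads:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex digest of the given bytes, as a checksum comparison key."""
    return hashlib.sha256(data).hexdigest()

# hash recorded when the dataset script was created (toy value)
recorded = sha256_hex(b"original spider archive")
# bytes the (toy) download returned
downloaded = b"original spider archive"

if sha256_hex(downloaded) != recorded:
    raise ValueError("NonMatchingChecksumError: upstream file changed")
print("checksum ok")
```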
https://github.com/huggingface/datasets/issues/3904 | CONLL2003 Dataset not available | Thanks for reporting, @omarespejel.
I'm sorry but I can't reproduce the issue: the loading of the dataset works perfectly for me and I can reach the data URL: https://data.deepai.org/conll2003.zip
Might it be due to a temporary problem in the data owner site (https://data.deepai.org/) that is fixed now?
Could you... | ## Describe the bug
[CONLL2003](https://huggingface.co/datasets/conll2003) Dataset can no longer reach 'https://data.deepai.org/conll2003.zip'

## Steps to reproduce the bug
```python
from datasets impo... | 61 | CONLL2003 Dataset not available
## Describe the bug
[CONLL2003](https://huggingface.co/datasets/conll2003) Dataset can no longer reach 'https://data.deepai.org/conll2003.zip'

## Steps to reproduce the ... | [
-0.1911518276,
0.0150689464,
-0.0434647501,
0.3382745683,
0.1922971457,
0.003914088,
0.2465217412,
0.115426749,
-0.3758270442,
0.1485975087,
-0.0861063376,
0.2452334911,
0.4535978734,
0.1852293909,
-0.0381423049,
-0.0447143614,
0.0035965363,
-0.0125611164,
-0.2133074552,
0.0609... |
https://github.com/huggingface/datasets/issues/3902 | Can't import datasets: partially initialized module 'fsspec' has no attribute 'utils' | Update: `python3 -c "from datasets import Dataset, DatasetDict"` works, but not if I import without the `python3 -c` | ## Describe the bug
Unable to import datasets
## Steps to reproduce the bug
```python
from datasets import Dataset, DatasetDict
```
## Expected results
The import works without errors
## Actual results
```
AttributeError Traceback (most recent call last)
<ipython-input-37-c8c... | 19 | Can't import datasets: partially initialized module 'fsspec' has no attribute 'utils'
## Describe the bug
Unable to import datasets
## Steps to reproduce the bug
```python
from datasets import Dataset, DatasetDict
```
## Expected results
The import works without errors
## Actual results
```
AttributeE... | [
-0.3171519041,
-0.0977633744,
-0.0604374632,
0.2508158982,
0.5410329103,
0.0515427925,
0.230682373,
0.1784131378,
0.1451837271,
-0.0406590663,
-0.091404371,
0.2070273757,
-0.0849944577,
0.0422014333,
-0.0481128283,
0.0800794661,
0.0392678976,
0.1573756486,
-0.3625041544,
-0.149... |
https://github.com/huggingface/datasets/issues/3902 | Can't import datasets: partially initialized module 'fsspec' has no attribute 'utils' | Hi @arunasank, thanks for reporting.
It seems that this can be caused because you are using an old version of `fsspec`: the reason why it works if you run `python3` seems to be that `python3` runs in a Python virtual env (with an updated version of `fsspec`); whereas the error arises when you run the import from oth... | ## Describe the bug
Unable to import datasets
## Steps to reproduce the bug
```python
from datasets import Dataset, DatasetDict
```
## Expected results
The import works without errors
## Actual results
```
AttributeError Traceback (most recent call last)
<ipython-input-37-c8c... | 88 | Can't import datasets: partially initialized module 'fsspec' has no attribute 'utils'
## Describe the bug
Unable to import datasets
## Steps to reproduce the bug
```python
from datasets import Dataset, DatasetDict
```
## Expected results
The import works without errors
## Actual results
```
AttributeE... | [
-0.3171519041,
-0.0977633744,
-0.0604374632,
0.2508158982,
0.5410329103,
0.0515427925,
0.230682373,
0.1784131378,
0.1451837271,
-0.0406590663,
-0.091404371,
0.2070273757,
-0.0849944577,
0.0422014333,
-0.0481128283,
0.0800794661,
0.0392678976,
0.1573756486,
-0.3625041544,
-0.149... |
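Environment mismatches like the one diagnosed above are easier to spot by printing which interpreter and module file actually get used; a stdlib sketch (shown with `json` so it runs anywhere — substitute `fsspec` in the real debugging session):

```python
import importlib.util
import sys

def resolve(modname):
    """Report the running interpreter and the file a module resolves to."""
    spec = importlib.util.find_spec(modname)
    return {
        "python": sys.executable,
        "module_file": spec.origin if spec else None,  # None -> not installed here
    }

info = resolve("json")   # substitute "fsspec" when debugging the import error
print(info["python"])
print(info["module_file"])
```

Running this inside and outside the virtual env makes it obvious when two different installations of the same package are being picked up.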
https://github.com/huggingface/datasets/issues/3901 | Dataset viewer issue for IndicParaphrase- the preview doesn't show | It seems to have been fixed:
<img width="1534" alt="Capture d'écran 2022-04-12 à 14 10 07" src="https://user-images.githubusercontent.com/1676121/162959599-6b7fef7c-8411-4e03-8f00-90040a658079.png">
| ## Dataset viewer issue for '*IndicParaphrase*'
**Link:** *[IndicParaphrase](https://huggingface.co/datasets/ai4bharat/IndicParaphrase/viewer/hi/validation)*
*The preview of the dataset doesn't come up.
The error on the console is:
Status code: 400
Exception: FileNotFoundError
Message: [Errno 2] ... | 16 | Dataset viewer issue for IndicParaphrase- the preview doesn't show
## Dataset viewer issue for '*IndicParaphrase*'
**Link:** *[IndicParaphrase](https://huggingface.co/datasets/ai4bharat/IndicParaphrase/viewer/hi/validation)*
*The preview of the dataset doesn't come up.
The error on the console is:
Status cod... | [
-0.2362913787,
0.2801050544,
0.0057153134,
0.1762803942,
-0.0407248102,
0.1510794312,
0.224228397,
0.2524903715,
0.0147417122,
0.1067235023,
0.1024638861,
0.1366589069,
-0.0245668236,
-0.212865606,
0.1732797921,
-0.0573359504,
0.0567131527,
0.289804548,
-0.0453777798,
-0.088115... |
https://github.com/huggingface/datasets/issues/3896 | Missing google file for `multi_news` dataset | `datasets` 1.18.4 fixes the issue when you load the dataset with `load_dataset`.
When loading in streaming mode, the fix is indeed on https://github.com/huggingface/datasets/pull/3843 which will be merged soon :) | ## Dataset viewer issue for '*multi_news*'
**Link:** https://huggingface.co/datasets/multi_news
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: https://drive.google.com/uc?export=download&id=1vRY2wM6rlOZrf9exGTm5pXj5ExlVwJ0C/multi-news-original/train.src
```
Am I the ... | 29 | Missing google file for `multi_news` dataset
## Dataset viewer issue for '*multi_news*'
**Link:** https://huggingface.co/datasets/multi_news
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: https://drive.google.com/uc?export=download&id=1vRY2wM6rlOZrf9exGTm5pXj5ExlVwJ0C/... | [
-0.2699744105,
-0.0458398797,
0.0497320592,
0.1420216113,
0.1276448667,
0.2481627464,
0.1934229434,
0.2745971382,
-0.0215561334,
0.1579298973,
0.0816367716,
0.0792285651,
-0.3635445237,
0.334953934,
0.1228102967,
-0.2137127519,
0.1218467206,
0.005676188,
0.1303978711,
-0.012960... |
https://github.com/huggingface/datasets/issues/3896 | Missing google file for `multi_news` dataset | That is right. The PR #3843 was opened just a bit after we had made our 1.18.4 patch release...
Once merged, that will fix this issue. | ## Dataset viewer issue for '*multi_news*'
**Link:** https://huggingface.co/datasets/multi_news
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: https://drive.google.com/uc?export=download&id=1vRY2wM6rlOZrf9exGTm5pXj5ExlVwJ0C/multi-news-original/train.src
```
Am I the ... | 25 | Missing google file for `multi_news` dataset
## Dataset viewer issue for '*multi_news*'
**Link:** https://huggingface.co/datasets/multi_news
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: https://drive.google.com/uc?export=download&id=1vRY2wM6rlOZrf9exGTm5pXj5ExlVwJ0C/... | [
-0.1872243881,
-0.0048771896,
0.0356899202,
0.1828447878,
0.0608461536,
0.2791557312,
0.2332007587,
0.2045081407,
-0.0366966426,
0.174518615,
0.1645780057,
0.0191535335,
-0.3617122471,
0.2578108907,
0.2078476548,
-0.2329187691,
0.116395615,
0.0118159615,
0.2681151927,
-0.091178... |
https://github.com/huggingface/datasets/issues/3896 | Missing google file for `multi_news` dataset | OK. Should fix the viewer for 50 datasets
<img width="148" alt="Capture d'écran 2022-03-14 à 11 51 02" src="https://user-images.githubusercontent.com/1676121/158157853-6c544a47-2d6d-4ac4-964a-6f10951ec36b.png">
| ## Dataset viewer issue for '*multi_news*'
**Link:** https://huggingface.co/datasets/multi_news
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: https://drive.google.com/uc?export=download&id=1vRY2wM6rlOZrf9exGTm5pXj5ExlVwJ0C/multi-news-original/train.src
```
Am I the ... | 18 | Missing google file for `multi_news` dataset
## Dataset viewer issue for '*multi_news*'
**Link:** https://huggingface.co/datasets/multi_news
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: https://drive.google.com/uc?export=download&id=1vRY2wM6rlOZrf9exGTm5pXj5ExlVwJ0C/... | [
-0.193397209,
0.1477288455,
0.0188400652,
0.2966784239,
0.050625924,
0.3109287918,
0.2941399515,
0.1884908676,
-0.0731478259,
0.1240583584,
0.1497876644,
-0.086458005,
-0.3482003808,
0.2359732538,
0.1940668821,
-0.1597624123,
0.0069946707,
0.049904,
0.1567213833,
-0.0928816944,... |
https://github.com/huggingface/datasets/issues/3889 | Cannot load beans dataset (Couldn't reach the dataset) | Hi ! A pull request is open to fix the dataset, we'll release a patch soon with a new release of `datasets` :) | ## Describe the bug
The beans dataset is unavailable to download.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('beans')
```
## Expected results
The dataset would be downloaded with no issue.
## Actual results
```
ConnectionError: Couldn't reach https://s... | 23 | Cannot load beans dataset (Couldn't reach the dataset)
## Describe the bug
The beans dataset is unavailable to download.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('beans')
```
## Expected results
The dataset would be downloaded with no issue.
## Actual ... | [
-0.4721031785,
0.2937533557,
-0.1632620394,
0.4070337117,
0.3040816784,
0.0024882513,
0.1432953626,
0.0070784665,
-0.0481519289,
0.2847832739,
-0.0638592616,
0.3683089912,
-0.0164075736,
0.3610325158,
-0.0237230491,
0.0577073209,
-0.1081281304,
-0.1645554006,
-0.5758636594,
0.0... |
https://github.com/huggingface/datasets/issues/3881 | How to use Image folder | Hi @INF800,
Please note that the `imagefolder` feature enhancement was just recently merged to our master branch (https://github.com/huggingface/datasets/commit/207be676bffe9d164740a41a883af6125edef135), but has not yet been released.
We are planning to make the 2.0 release of our library in the coming days and t... | Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError Trace... | 140 | How to use Image folder
Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError ... | [
-0.28589499,
0.0791946054,
-0.1641439646,
0.5192497373,
0.3978698254,
0.1942408681,
0.3166290522,
0.1429088861,
0.0455847755,
0.1703683138,
0.0783737227,
-0.0138087906,
-0.2019121945,
0.0991076156,
-0.0279719345,
-0.0610152707,
-0.1374426484,
0.2016349137,
-0.1800704598,
0.1053... |
https://github.com/huggingface/datasets/issues/3881 | How to use Image folder | Hey @albertvillanova. Does this load the entire dataset in memory? Because I am facing huge trouble loading very big datasets (OOM errors) | Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError Trace... | 22 | How to use Image folder
Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError ... | [
-0.28589499,
0.0791946054,
-0.1641439646,
0.5192497373,
0.3978698254,
0.1942408681,
0.3166290522,
0.1429088861,
0.0455847755,
0.1703683138,
0.0783737227,
-0.0138087906,
-0.2019121945,
0.0991076156,
-0.0279719345,
-0.0610152707,
-0.1374426484,
0.2016349137,
-0.1800704598,
0.1053... |
https://github.com/huggingface/datasets/issues/3881 | How to use Image folder | Can you provide the error stack trace? The loader only stores the `data_files` dict, which can get big after globbing. Then, the OOM error would mean you don't have enough memory to keep all the paths to the image files. You can circumvent this by generating an archive and loading the dataset from there. Maybe we can o... | Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError Trace... | 73 | How to use Image folder
Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError ... | [
-0.28589499,
0.0791946054,
-0.1641439646,
0.5192497373,
0.3978698254,
0.1942408681,
0.3166290522,
0.1429088861,
0.0455847755,
0.1703683138,
0.0783737227,
-0.0138087906,
-0.2019121945,
0.0991076156,
-0.0279719345,
-0.0610152707,
-0.1374426484,
0.2016349137,
-0.1800704598,
0.1053... |
https://github.com/huggingface/datasets/issues/3881 | How to use Image folder | Hey, memory error is resolved. It was fluke.
But there is another issue. Currently `load_dataset("imagefolder", data_dir="./path/to/train",)` takes only `train` as arg to `split` parameter.
I am creating vaildation dataset using
```
ds_valid = datasets.DatasetDict(valid=load_dataset("imagefolder", data_dir=".... | Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError Trace... | 36 | How to use Image folder
Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError ... | [
-0.28589499,
0.0791946054,
-0.1641439646,
0.5192497373,
0.3978698254,
0.1942408681,
0.3166290522,
0.1429088861,
0.0455847755,
0.1703683138,
0.0783737227,
-0.0138087906,
-0.2019121945,
0.0991076156,
-0.0279719345,
-0.0610152707,
-0.1374426484,
0.2016349137,
-0.1800704598,
0.1053... |
https://github.com/huggingface/datasets/issues/3881 | How to use Image folder | `data_dir="path/to/folder"` is a shorthand syntax for `data_files={"train": "path/to/folder/**"}`, so use `data_files` in that case instead:
```python
ds = load_dataset("imagefolder", data_files={"train": "path/to/train/**", "test": "path/to/test/**", "valid": "path/to/valid/**"})
``` | Ran this code
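Under the hood those `**` patterns are globbed per split; a stdlib sketch of the resulting split-to-files mapping over a throwaway directory tree (the paths and fake image bytes are invented for illustration):

```python
import tempfile
from pathlib import Path

# build a tiny imagefolder-style tree: one image per split
root = Path(tempfile.mkdtemp())
for split in ("train", "test", "valid"):
    (root / split).mkdir()
    (root / split / "img0.jpg").write_bytes(b"\xff\xd8")  # fake JPEG bytes

# resolve the per-split patterns, mirroring data_files={"train": ".../train/**", ...}
data_files = {
    split: sorted(str(p) for p in (root / split).glob("**/*") if p.is_file())
    for split in ("train", "test", "valid")
}
print({split: len(files) for split, files in data_files.items()})
```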
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError Trace... | 26 | How to use Image folder
Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError ... | [
-0.28589499,
0.0791946054,
-0.1641439646,
0.5192497373,
0.3978698254,
0.1942408681,
0.3166290522,
0.1429088861,
0.0455847755,
0.1703683138,
0.0783737227,
-0.0138087906,
-0.2019121945,
0.0991076156,
-0.0279719345,
-0.0610152707,
-0.1374426484,
0.2016349137,
-0.1800704598,
0.1053... |
https://github.com/huggingface/datasets/issues/3881 | How to use Image folder | And there was another issue. I loaded black-and-white images (JPEG files) using `load_dataset`. It reads them in PIL JPEG format, but instead of being converted into a 3-channel tensor, the input to the collator function arrives as a single-channel tensor. | Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError Trace... | 44 | How to use Image folder
Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError ... | [
-0.28589499,
0.0791946054,
-0.1641439646,
0.5192497373,
0.3978698254,
0.1942408681,
0.3166290522,
0.1429088861,
0.0455847755,
0.1703683138,
0.0783737227,
-0.0138087906,
-0.2019121945,
0.0991076156,
-0.0279719345,
-0.0610152707,
-0.1374426484,
0.2016349137,
-0.1800704598,
0.1053... |
https://github.com/huggingface/datasets/issues/3881 | How to use Image folder | We don't apply any additional preprocessing on top of `PIL.Image.open(image_file)`, so you need to do the conversion yourself:
```python
def to_rgb(batch):
batch["image"] = [img.convert("RGB") for img in batch["image"]]
return batch
ds_rgb = ds.map(to_rgb, batched=True)
```
Please use our Forum for... | Ran this code
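The `batched=True` call above hands the function column-wise batches rather than single rows; a library-free sketch of that contract (the `apply_batched` helper is a toy stand-in for `Dataset.map`, not the real implementation):

```python
def to_upper(batch):
    # same shape as the to_rgb example: transform a whole column at once
    batch["label"] = [s.upper() for s in batch["label"]]
    return batch

def apply_batched(rows, fn, batch_size=2):
    """Toy stand-in for Dataset.map(..., batched=True): column-wise batches."""
    out = []
    for i in range(0, len(rows), batch_size):
        chunk = rows[i:i + batch_size]
        batch = fn({"label": [r["label"] for r in chunk]})
        out.extend({"label": v} for v in batch["label"])
    return out

print(apply_batched([{"label": "cat"}, {"label": "dog"}, {"label": "bean"}], to_upper))
```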
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError Trace... | 47 | How to use Image folder
Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError ... | [
-0.28589499,
0.0791946054,
-0.1641439646,
0.5192497373,
0.3978698254,
0.1942408681,
0.3166290522,
0.1429088861,
0.0455847755,
0.1703683138,
0.0783737227,
-0.0138087906,
-0.2019121945,
0.0991076156,
-0.0279719345,
-0.0610152707,
-0.1374426484,
0.2016349137,
-0.1800704598,
0.1053... |
https://github.com/huggingface/datasets/issues/3872 | HTTP error 504 Server Error: Gateway Time-out | yes but is there any way you could try pushing with `git` command line directly instead of `push_to_hub`? | I am trying to push a large dataset(450000+) records with the help of `push_to_hub()`
While pushing, it gives some error like this.
```
Traceback (most recent call last):
File "data_split_speech.py", line 159, in <module>
data_new_2.push_to_hub("user-name/dataset-name",private=True)
File "/opt/conda/lib... | 18 | HTTP error 504 Server Error: Gateway Time-out
I am trying to push a large dataset(450000+) records with the help of `push_to_hub()`
While pushing, it gives some error like this.
```
Traceback (most recent call last):
File "data_split_speech.py", line 159, in <module>
data_new_2.push_to_hub("user-name/dat... | [
-0.4594691694,
-0.3423294127,
-0.0652929768,
0.1371454,
0.1435396522,
-0.0505814515,
-0.0194970034,
0.4111796021,
-0.3377404511,
-0.2412685603,
0.0478668362,
0.1072256789,
0.160477668,
0.4615725279,
0.0629627705,
0.0845943764,
0.0672830492,
-0.0497488081,
-0.0619742833,
0.04634... |
https://github.com/huggingface/datasets/issues/3872 | HTTP error 504 Server Error: Gateway Time-out | Okay. I didn't save the dataset to my local machine. So, I processed the dataset and pushed it directly to the hub. I think I should try saving that dataset to my local machine with `save_to_disk` and then pushing it with the git command line | I am trying to push a large dataset(450000+) records with the help of `push_to_hub()`
While pushing, it gives some error like this.
```
Traceback (most recent call last):
File "data_split_speech.py", line 159, in <module>
data_new_2.push_to_hub("user-name/dataset-name",private=True)
File "/opt/conda/lib... | 44 | HTTP error 504 Server Error: Gateway Time-out
I am trying to push a large dataset(450000+) records with the help of `push_to_hub()`
While pushing, it gives some error like this.
```
Traceback (most recent call last):
File "data_split_speech.py", line 159, in <module>
data_new_2.push_to_hub("user-name/dat... | [
-0.3648714423,
-0.41700387,
-0.0489485897,
0.2769415677,
0.0937842503,
0.0458760969,
-0.063487865,
0.3118470013,
-0.3693114519,
-0.2590149045,
-0.0208444595,
0.03183081,
0.1456531435,
0.4787338972,
0.0548850112,
0.0348590761,
0.0170293879,
-0.0898096934,
-0.0503739826,
0.008910... |
https://github.com/huggingface/datasets/issues/3872 | HTTP error 504 Server Error: Gateway Time-out | `push_to_hub` is the preferred way of uploading a dataset to the Hub, which can then be reloaded with `load_dataset`. Feel free to try again and see if the server is working as expected now. Maybe we can add a retry mechanism in the meantime to work around 504 errors.
Regarding `save_to_disk`, this must only be used ... | I am trying to push a large dataset(450000+) records with the help of `push_to_hub()`
While pushing, it gives some error like this.
```
Traceback (most recent call last):
File "data_split_speech.py", line 159, in <module>
data_new_2.push_to_hub("user-name/dataset-name",private=True)
File "/opt/conda/lib... | 93 | HTTP error 504 Server Error: Gateway Time-out
I am trying to push a large dataset(450000+) records with the help of `push_to_hub()`
While pushing, it gives some error like this.
```
Traceback (most recent call last):
File "data_split_speech.py", line 159, in <module>
data_new_2.push_to_hub("user-name/dat... | [
-0.3772861063,
-0.4499162734,
-0.0528412275,
0.1917434037,
0.0723698512,
-0.0209483299,
-0.0438145809,
0.3406634927,
-0.312343657,
-0.2927145958,
0.046363499,
0.1022304744,
0.2157945335,
0.4923057258,
0.000318272,
0.0172537398,
0.0521848463,
-0.0827601403,
-0.069590956,
0.00988... |
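The retry mechanism floated in the comment could look like a plain exponential-backoff wrapper; a sketch (the flaky `push` stand-in, the exception choice, and the delays are invented, not the actual `push_to_hub` internals):

```python
import time

def with_retries(fn, retries=3, base_delay=1.0, retriable=(ConnectionError,)):
    """Call fn(), retrying with exponential backoff on retriable errors."""
    for attempt in range(retries + 1):
        try:
            return fn()
        except retriable:
            if attempt == retries:
                raise                      # out of retries: surface the error
            time.sleep(base_delay * 2 ** attempt)

calls = {"n": 0}

def flaky_push():
    # toy upload that fails twice (as a 504 might) before succeeding
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("504 Server Error: Gateway Time-out")
    return "ok"

print(with_retries(flaky_push, base_delay=0.0))
```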
https://github.com/huggingface/datasets/issues/3869 | Making the Hub the place for datasets in Portuguese | Hi @omarespejel! I think the philosophy for `datasets` issues is to create concrete issues with proposals to add a specific, individual dataset rather than umbrella issues for things such as datasets for a language, since we could end up with hundreds of issues (one per language). I see NILC - USP has many datasets, I ... | Let's make Hugging Face Datasets the central hub for datasets in Portuguese :)
**Motivation**. Datasets are currently quite scattered and an open-source central point such as the Hugging Face Hub would be ideal to support the growth of the Portuguese speaking community.
What are some datasets in Portuguese worth ... | 114 | Making the Hub the place for datasets in Portuguese
Let's make Hugging Face Datasets the central hub for datasets in Portuguese :)
**Motivation**. Datasets are currently quite scattered and an open-source central point such as the Hugging Face Hub would be ideal to support the growth of the Portuguese speaking co... | [
-0.131591633,
-0.1661401093,
-0.1073713973,
0.1489173174,
0.1300486773,
0.0739368051,
-0.0666677207,
0.1329706907,
-0.0110315746,
0.121694997,
-0.1681374013,
-0.1752085686,
-0.2794049382,
0.3597371578,
0.1305481941,
-0.048543185,
0.3864607513,
-0.0830101892,
-0.0624550879,
-0.0... |