| Column | Type | Min | Max |
|---|---|---|---|
| html_url | string (length) | 48 | 51 |
| title | string (length) | 5 | 268 |
| comments | string (length) | 63 | 51.8k |
| body | string (length) | 0 | 36.2k |
| comment_length | int64 | 16 | 1.52k |
| text | string (length) | 164 | 54.1k |
| embeddings | list of floats | | |
https://github.com/huggingface/datasets/issues/3735
Performance of `datasets` at scale
The most surprising part to me is the saving time. Wondering if it could be due to compression (`ParquetWriter` uses SNAPPY compression by default; it can be turned off with `to_parquet(..., compression=None)`).
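A minimal sketch of the suggestion above, assuming an existing `datasets.Dataset` object; the file names are illustrative. Writing both variants makes it easy to compare the saving time with and without SNAPPY compression.

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})

# Default: SNAPPY-compressed Parquet.
ds.to_parquet("data_snappy.parquet")

# Compression disabled, to check whether compression dominates the saving time.
ds.to_parquet("data_uncompressed.parquet", compression=None)
```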
# Performance of `datasets` at 1TB scale ## What is this? During the processing of a large dataset I monitored the performance of the `datasets` library to see if there are any bottlenecks. The insights of this analysis could guide the decision making to improve the performance of the library. ## Dataset The da...
32
Performance of `datasets` at scale # Performance of `datasets` at 1TB scale ## What is this? During the processing of a large dataset I monitored the performance of the `datasets` library to see if there are any bottlenecks. The insights of this analysis could guide the decision making to improve the performance ...
[ -0.4659071565, -0.0334515944, -0.1042619422, 0.1815135777, 0.0901725069, 0.0411183946, 0.1751108021, 0.3004227877, 0.1423734725, -0.0201912615, 0.0761042461, 0.1921412945, -0.0361628421, 0.5749624968, -0.0226483662, -0.0665424839, -0.0241282955, 0.0048626824, -0.217030704, -0.0...
https://github.com/huggingface/datasets/issues/3735
Performance of `datasets` at scale
+1 to what @mariosasko mentioned. Also, @lvwerra, did you parallelize `to_parquet` using a similar approach to the one in #2747? (We used multiprocessing at the shard level.) I'm working on a similar PR to add `multi_proc` in `to_parquet`, which might give you a further speed-up. Stas benchmarked his approach and mine in this [gist](ht...
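A rough sketch of the shard-level parallel export described in this thread. The dataset here is a small placeholder and the shard count and file names are illustrative; the point is only the `Dataset.shard` plus `multiprocessing` pattern.

```python
from multiprocessing import Pool

from datasets import Dataset

# Placeholder dataset; in the thread this would be the ~1TB dataset.
ds = Dataset.from_dict({"text": [f"example {i}" for i in range(1_000)]})
num_shards = 4

def save_shard(shard_idx):
    # Each worker writes one contiguous shard to its own Parquet file.
    shard = ds.shard(num_shards=num_shards, index=shard_idx, contiguous=True)
    shard.to_parquet(f"shard-{shard_idx:05d}.parquet")

if __name__ == "__main__":
    with Pool(num_shards) as pool:
        pool.map(save_shard, range(num_shards))
```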
# Performance of `datasets` at 1TB scale ## What is this? During the processing of a large dataset I monitored the performance of the `datasets` library to see if there are any bottlenecks. The insights of this analysis could guide the decision making to improve the performance of the library. ## Dataset The da...
63
Performance of `datasets` at scale # Performance of `datasets` at 1TB scale ## What is this? During the processing of a large dataset I monitored the performance of the `datasets` library to see if there are any bottlenecks. The insights of this analysis could guide the decision making to improve the performance ...
[ -0.4659071565, -0.0334515944, -0.1042619422, 0.1815135777, 0.0901725069, 0.0411183946, 0.1751108021, 0.3004227877, 0.1423734725, -0.0201912615, 0.0761042461, 0.1921412945, -0.0361628421, 0.5749624968, -0.0226483662, -0.0665424839, -0.0241282955, 0.0048626824, -0.217030704, -0.0...
https://github.com/huggingface/datasets/issues/3735
Performance of `datasets` at scale
@mariosasko I did not turn it off but I can try the next time - I have to run the pipeline again, anyway. @bhavitvyamalik Yes, I also sharded the dataset and used multiprocessing to save each shard. I'll have a closer look at your approach, too.
# Performance of `datasets` at 1TB scale ## What is this? During the processing of a large dataset I monitored the performance of the `datasets` library to see if there are any bottlenecks. The insights of this analysis could guide the decision making to improve the performance of the library. ## Dataset The da...
46
Performance of `datasets` at scale # Performance of `datasets` at 1TB scale ## What is this? During the processing of a large dataset I monitored the performance of the `datasets` library to see if there are any bottlenecks. The insights of this analysis could guide the decision making to improve the performance ...
[ -0.4659071565, -0.0334515944, -0.1042619422, 0.1815135777, 0.0901725069, 0.0411183946, 0.1751108021, 0.3004227877, 0.1423734725, -0.0201912615, 0.0761042461, 0.1921412945, -0.0361628421, 0.5749624968, -0.0226483662, -0.0665424839, -0.0241282955, 0.0048626824, -0.217030704, -0.0...
https://github.com/huggingface/datasets/issues/3730
Checksum Error when loading multi-news dataset
Thanks for reporting @byw2. We are fixing it. In the meantime, you can load the dataset by passing `ignore_verifications=True`:
```python
dataset = load_dataset("multi_news", ignore_verifications=True)
```
## Describe the bug When using the load_dataset function from datasets module to load the Multi-News dataset, does not load the dataset but throws Checksum Error instead. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("multi_news") ``` ## Expected results ...
24
Checksum Error when loading multi-news dataset ## Describe the bug When using the load_dataset function from datasets module to load the Multi-News dataset, does not load the dataset but throws Checksum Error instead. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_data...
[ -0.2059844732, 0.119747147, -0.0531000271, 0.3288653195, 0.1786068529, 0.0489469506, 0.3487609625, 0.2491254061, 0.2209578454, 0.0042153173, -0.020120943, 0.1327264905, 0.0797101408, 0.2458918542, -0.1772808731, 0.0411296003, 0.2067429572, -0.1955725104, 0.1274431944, -0.031286...
https://github.com/huggingface/datasets/issues/3729
Wrong number of examples when loading a text dataset
Hi @kg-nlp, thanks for reporting. That is weird... I guess we would need some sample data file where this behavior appears to reproduce the bug for further investigation...
## Describe the bug when I use load_dataset to read a txt file I find that the number of the samples is incorrect ## Steps to reproduce the bug ``` fr = open('train.txt','r',encoding='utf-8').readlines() print(len(fr)) # 1199637 datasets = load_dataset('text', data_files={'train': ['train.txt']}, streaming...
28
Wrong number of examples when loading a text dataset ## Describe the bug when I use load_dataset to read a txt file I find that the number of the samples is incorrect ## Steps to reproduce the bug ``` fr = open('train.txt','r',encoding='utf-8').readlines() print(len(fr)) # 1199637 datasets = load_dataset...
[ -0.0605260655, -0.2417583913, -0.0344216228, 0.4838173389, 0.065916352, 0.0262386296, 0.5079740286, 0.0472354554, -0.023038568, 0.1874682903, 0.444887042, 0.1894766986, 0.0757768676, 0.0255927108, 0.3641293645, -0.2086357623, 0.2814244628, -0.0695903748, -0.1799233258, 0.001248...
https://github.com/huggingface/datasets/issues/3729
Wrong number of examples when loading a text dataset
OK, I found the reason why the two results are not the same: there is a \u2029 (paragraph separator) character in the text. `datasets` splits sentences on \u2029, but the `open` function does not. So I want to know which function should do that. Thanks.
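This mismatch can be reproduced with plain Python, independent of `datasets`: `str.splitlines()` treats the Unicode paragraph separator `\u2029` as a line boundary, while reading a file line by line with `open()` splits only on newlines. (Whether the `text` loader uses exactly `splitlines` is an assumption here, but it matches the observed count difference.)

```python
# One \u2029 inside the text, one trailing newline.
text = "first part\u2029second part\n"

with open("sample.txt", "w", encoding="utf-8") as f:
    f.write(text)

# Line-by-line reading splits only on newlines -> 1 line.
with open("sample.txt", "r", encoding="utf-8") as f:
    print(len(f.readlines()))   # 1

# splitlines() also breaks on \u2029 -> 2 lines.
print(len(text.splitlines()))   # 2
```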
## Describe the bug when I use load_dataset to read a txt file I find that the number of the samples is incorrect ## Steps to reproduce the bug ``` fr = open('train.txt','r',encoding='utf-8').readlines() print(len(fr)) # 1199637 datasets = load_dataset('text', data_files={'train': ['train.txt']}, streaming...
48
Wrong number of examples when loading a text dataset ## Describe the bug when I use load_dataset to read a txt file I find that the number of the samples is incorrect ## Steps to reproduce the bug ``` fr = open('train.txt','r',encoding='utf-8').readlines() print(len(fr)) # 1199637 datasets = load_dataset...
[ -0.1179775819, -0.3109657764, -0.0979499519, 0.530828476, -0.21858266, -0.2745325267, 0.4456655979, -0.0981146768, -0.0977490395, 0.2998822629, 0.1997930557, 0.1997048706, 0.2252021432, 0.0837190151, 0.4043927789, -0.2532298863, 0.3243586123, 0.1103495955, -0.3028640747, -0.301...
https://github.com/huggingface/datasets/issues/3720
Builder Configuration Update Required on Common Voice Dataset
Hi @aasem, thanks for reporting. Please note that currently Common Voice is hosted on our Hub as a community dataset by the Mozilla Foundation. See all Common Voice versions here: https://huggingface.co/mozilla-foundation Maybe we should add an explaining note in our "legacy" Common Voice canonical script? What d...
Missing language in Common Voice dataset **Link:** https://huggingface.co/datasets/common_voice I tried to call the Urdu dataset using `load_dataset("common_voice", "ur", split="train+validation")` but couldn't due to builder configuration not found. I checked the source file here for the languages support: ht...
52
Builder Configuration Update Required on Common Voice Dataset Missing language in Common Voice dataset **Link:** https://huggingface.co/datasets/common_voice I tried to call the Urdu dataset using `load_dataset("common_voice", "ur", split="train+validation")` but couldn't due to builder configuration not found....
[ -0.2856746018, 0.1739170104, 0.0023721412, -0.1521494836, 0.1797928214, 0.1795553714, 0.0361903384, 0.1813979447, -0.4638518989, 0.1273631901, 0.0928716883, -0.051489599, -0.0403752439, -0.2595183849, 0.2246842086, -0.0798487514, 0.0472483151, 0.0587806664, 0.0920704827, -0.162...
https://github.com/huggingface/datasets/issues/3720
Builder Configuration Update Required on Common Voice Dataset
Thank you, @albertvillanova, for the quick response. I am not sure about the exact flow but I guess adding the following lines under the `_Languages` dictionary definition in [common_voice.py](https://github.com/huggingface/datasets/blob/master/datasets/common_voice/common_voice.py) might resolve the issue. I guess the...
Missing language in Common Voice dataset **Link:** https://huggingface.co/datasets/common_voice I tried to call the Urdu dataset using `load_dataset("common_voice", "ur", split="train+validation")` but couldn't due to builder configuration not found. I checked the source file here for the languages support: ht...
66
Builder Configuration Update Required on Common Voice Dataset Missing language in Common Voice dataset **Link:** https://huggingface.co/datasets/common_voice I tried to call the Urdu dataset using `load_dataset("common_voice", "ur", split="train+validation")` but couldn't due to builder configuration not found....
[ -0.2908358276, 0.0055588363, 0.0204208232, -0.0029063479, 0.2136821002, 0.1312917918, 0.0075951642, 0.2225896865, -0.3800753057, 0.2097702622, 0.121313192, -0.0567619801, -0.1200531498, -0.0659274682, 0.211689651, -0.2299364358, 0.0533253737, 0.1275778413, 0.1963362843, -0.1596...
https://github.com/huggingface/datasets/issues/3720
Builder Configuration Update Required on Common Voice Dataset
@aasem for compliance reasons, we are no longer updating the `common_voice.py` script. We agreed with Mozilla Foundation to use their community datasets instead, which will ask you to accept their terms of use: ``` You need to share your contact information to access this dataset. This repository is publicly ac...
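For reference, a hedged sketch of loading one of the Mozilla-hosted Common Voice datasets instead of the legacy canonical script. The dataset version and language config are illustrative; you first need to accept the terms of use on the Hub and authenticate (e.g. via `huggingface-cli login`).

```python
from datasets import load_dataset

ds = load_dataset(
    "mozilla-foundation/common_voice_8_0",  # illustrative version
    "ur",
    split="train+validation",
    use_auth_token=True,
)
```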
Missing language in Common Voice dataset **Link:** https://huggingface.co/datasets/common_voice I tried to call the Urdu dataset using `load_dataset("common_voice", "ur", split="train+validation")` but couldn't due to builder configuration not found. I checked the source file here for the languages support: ht...
192
Builder Configuration Update Required on Common Voice Dataset Missing language in Common Voice dataset **Link:** https://huggingface.co/datasets/common_voice I tried to call the Urdu dataset using `load_dataset("common_voice", "ur", split="train+validation")` but couldn't due to builder configuration not found....
[ -0.2701786458, 0.1846490055, 0.0216178335, -0.046714481, 0.2218092084, 0.1341910362, 0.0782260299, 0.2611940801, -0.3444994986, 0.2042435408, 0.0835882351, 0.0007256507, -0.0550360493, -0.2131644338, 0.1940619797, -0.1723414809, -0.0371397771, 0.0623359494, 0.2901751399, -0.179...
https://github.com/huggingface/datasets/issues/3720
Builder Configuration Update Required on Common Voice Dataset
@albertvillanova >Maybe we should add an explaining note in our "legacy" Common Voice canonical script? Yes, I agree we should have a deprecation notice in the canonical script to redirect users to the new script.
Missing language in Common Voice dataset **Link:** https://huggingface.co/datasets/common_voice I tried to call the Urdu dataset using `load_dataset("common_voice", "ur", split="train+validation")` but couldn't due to builder configuration not found. I checked the source file here for the languages support: ht...
35
Builder Configuration Update Required on Common Voice Dataset Missing language in Common Voice dataset **Link:** https://huggingface.co/datasets/common_voice I tried to call the Urdu dataset using `load_dataset("common_voice", "ur", split="train+validation")` but couldn't due to builder configuration not found....
[ -0.3522530496, 0.103318274, 0.0201227553, -0.1326442063, 0.1710538417, 0.1720189899, 0.063082017, 0.2731883228, -0.3063312769, 0.1558266878, 0.1680767089, -0.0426007435, -0.1602720022, -0.0915840715, 0.1674870849, -0.1476795226, 0.1083246022, 0.1433466375, 0.1850374937, -0.1774...
https://github.com/huggingface/datasets/issues/3720
Builder Configuration Update Required on Common Voice Dataset
@albertvillanova, I now get the following error after downloading my access token from the Hugging Face Hub and passing it to the `load_dataset` call: `AttributeError: 'DownloadManager' object has no attribute 'download_config'` Any quick pointer on how it might be resolved?
Missing language in Common Voice dataset **Link:** https://huggingface.co/datasets/common_voice I tried to call the Urdu dataset using `load_dataset("common_voice", "ur", split="train+validation")` but couldn't due to builder configuration not found. I checked the source file here for the languages support: ht...
37
Builder Configuration Update Required on Common Voice Dataset Missing language in Common Voice dataset **Link:** https://huggingface.co/datasets/common_voice I tried to call the Urdu dataset using `load_dataset("common_voice", "ur", split="train+validation")` but couldn't due to builder configuration not found....
[ -0.3459948599, 0.1322572082, 0.0547214374, 0.1285535246, 0.2149555683, 0.1292428076, -0.0509486012, 0.1270668805, -0.2447929829, 0.239728421, 0.0264988188, -0.1509607136, -0.1511382908, -0.0213900078, 0.2012620568, -0.1719041914, 0.0843130574, 0.1090747863, 0.2482061386, -0.146...
https://github.com/huggingface/datasets/issues/3720
Builder Configuration Update Required on Common Voice Dataset
@aasem What version of `datasets` are you using? We renamed that attribute from `_download_config` to `download_config` fairly recently, so updating to the newest version should resolve the issue:
```
pip install -U datasets
```
Missing language in Common Voice dataset **Link:** https://huggingface.co/datasets/common_voice I tried to call the Urdu dataset using `load_dataset("common_voice", "ur", split="train+validation")` but couldn't due to builder configuration not found. I checked the source file here for the languages support: ht...
34
Builder Configuration Update Required on Common Voice Dataset Missing language in Common Voice dataset **Link:** https://huggingface.co/datasets/common_voice I tried to call the Urdu dataset using `load_dataset("common_voice", "ur", split="train+validation")` but couldn't due to builder configuration not found....
[ -0.4302419722, 0.0144608077, -0.0066324342, -0.0191776697, 0.2999699414, 0.1326845139, -0.0474833548, 0.2533238828, -0.2436418384, 0.2547218204, 0.1182378829, 0.0407666378, -0.1010221541, -0.0704348087, 0.085390307, -0.2976481318, 0.0260496214, 0.1262313724, 0.197205618, -0.183...
https://github.com/huggingface/datasets/issues/3717
wrong condition in `Features ClassLabel encode_example`
Hi @Tudyx, Please note that in Python, the boolean NOT operator (`not`) has lower precedence than comparison operators (`<=`, `<`), thus the expression you mention is equivalent to: ```python not (-1 <= example_data < self.num_classes) ``` Also note that as expected, the exception is raised if: - `example_d...
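A quick check of the precedence point made above; the values are made up. `not` binds more loosely than the chained comparison, so the original condition and the fully parenthesised form are equivalent, and the error is raised exactly when the label falls outside `[-1, num_classes)`.

```python
example_data, num_classes = 5, 3

# Both spellings evaluate identically.
assert (not -1 <= example_data < num_classes) == (not (-1 <= example_data < num_classes))

print(not -1 <= example_data < num_classes)  # True  -> out of range, error raised
print(not -1 <= 2 < num_classes)             # False -> valid label, no error
```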
## Describe the bug The `encode_example` function in *features.py* seems to have a wrong condition. ```python if not -1 <= example_data < self.num_classes: raise ValueError(f"Class label {example_data:d} greater than configured num_classes {self.num_classes}") ``` ## Expected results The `not - 1` co...
71
wrong condition in `Features ClassLabel encode_example` ## Describe the bug The `encode_example` function in *features.py* seems to have a wrong condition. ```python if not -1 <= example_data < self.num_classes: raise ValueError(f"Class label {example_data:d} greater than configured num_classes {self.num_...
[ 0.1391999722, -0.1168350875, -0.0464337245, 0.2833454311, 0.2257674038, -0.0928371921, 0.2275467068, 0.2446250916, -0.1854133755, 0.0163679607, 0.3163581491, 0.3322761357, -0.2180692405, 0.4177816808, -0.1956312656, 0.0690538064, 0.1742625237, 0.2923476398, 0.3052038252, -0.146...
https://github.com/huggingface/datasets/issues/3708
Loading JSON gets stuck with many workers/threads
Hi! Note that it does `block_size *= 2` until `block_size > len(batch)`, so it doesn't loop indefinitely. What do you mean by "get stuck indefinitely" then? Is it the actual call to `paj.read_json` that hangs?

> increasing the `chunksize` argument decreases the chance of getting stuck

Could you share the val...
## Describe the bug Loading a JSON dataset with `load_dataset` can get stuck when running on a machine with many CPUs. This is especially an issue when loading a large dataset on a large machine. ## Steps to reproduce the bug I originally created the following script to reproduce the issue: ```python from dat...
78
Loading JSON gets stuck with many workers/threads ## Describe the bug Loading a JSON dataset with `load_dataset` can get stuck when running on a machine with many CPUs. This is especially an issue when loading a large dataset on a large machine. ## Steps to reproduce the bug I originally created the following...
[ -0.2772898972, -0.1832047999, -0.1026515365, 0.1906440854, 0.131996572, -0.0690689906, 0.4268065095, 0.2522391081, 0.291331321, 0.1814338863, 0.1807477623, 0.557315588, 0.1101846546, -0.2725525796, -0.0052468558, 0.0646460876, -0.0230275784, -0.1319034845, -0.0031667606, 0.1560...
https://github.com/huggingface/datasets/issues/3708
Loading JSON gets stuck with many workers/threads
To clarify, I don't think it loops indefinitely but the `paj.read_json` gets stuck after the first try. That's why I think it could be an issue with a lock somewhere. Using `load_dataset(..., chunksize=40<<20)` worked without errors.
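A sketch of the workaround mentioned above: pass a larger `chunksize` to the JSON builder through `load_dataset`. The `data_files` glob is illustrative; 40 MiB is the value from the comment.

```python
from datasets import load_dataset

ds = load_dataset(
    "json",
    data_files="data/*.jsonl",
    chunksize=40 << 20,  # 40 MiB read blocks instead of the default
)
```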
## Describe the bug Loading a JSON dataset with `load_dataset` can get stuck when running on a machine with many CPUs. This is especially an issue when loading a large dataset on a large machine. ## Steps to reproduce the bug I originally created the following script to reproduce the issue: ```python from dat...
36
Loading JSON gets stuck with many workers/threads ## Describe the bug Loading a JSON dataset with `load_dataset` can get stuck when running on a machine with many CPUs. This is especially an issue when loading a large dataset on a large machine. ## Steps to reproduce the bug I originally created the following...
[ -0.2772898972, -0.1832047999, -0.1026515365, 0.1906440854, 0.131996572, -0.0690689906, 0.4268065095, 0.2522391081, 0.291331321, 0.1814338863, 0.1807477623, 0.557315588, 0.1101846546, -0.2725525796, -0.0052468558, 0.0646460876, -0.0230275784, -0.1319034845, -0.0031667606, 0.1560...
https://github.com/huggingface/datasets/issues/3707
`.select`: unexpected behavior with `indices`
Hi! Currently, we compute the final index as `index % len(dset)`. I agree this behavior is somewhat unexpected and that it would be more appropriate to raise an error instead (this is what `df.iloc` in Pandas does, for instance). @albertvillanova @lhoestq wdyt?
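To make the wrap-around concrete, a small example of the behaviour described in this issue (on the library version being discussed; the data is made up):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})

# Index 4 is out of range but is silently mapped to 4 % 3 == 1
# instead of raising an IndexError.
print(ds.select([0, 4])["text"])  # ['a', 'b']
```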
## Describe the bug The `.select` method will not throw when sending `indices` bigger than the dataset length; `indices` will be wrapped instead. This behavior is not documented anywhere, and is not intuitive. ## Steps to reproduce the bug ```python from datasets import Dataset ds = Dataset.from_dict({"text": [...
42
`.select`: unexpected behavior with `indices` ## Describe the bug The `.select` method will not throw when sending `indices` bigger than the dataset length; `indices` will be wrapped instead. This behavior is not documented anywhere, and is not intuitive. ## Steps to reproduce the bug ```python from datasets i...
[ -0.1406795532, -0.2200402766, -0.0164005645, 0.4212484062, 0.0805128962, -0.0664442927, 0.3318194151, 0.1800907254, 0.2260111421, 0.352301985, 0.0550057329, 0.5035140514, 0.2490186989, 0.0551462993, -0.362167567, -0.0069085811, -0.0928678289, -0.0007304215, -0.1078659818, -0.22...
https://github.com/huggingface/datasets/issues/3707
`.select`: unexpected behavior with `indices`
I agree. I think `index % len(dset)` was used to support negative indices. I think this needs to be fixed in `datasets.formatting.formatting._check_valid_index_key` if I'm not mistaken
## Describe the bug The `.select` method will not throw when sending `indices` bigger than the dataset length; `indices` will be wrapped instead. This behavior is not documented anywhere, and is not intuitive. ## Steps to reproduce the bug ```python from datasets import Dataset ds = Dataset.from_dict({"text": [...
26
`.select`: unexpected behavior with `indices` ## Describe the bug The `.select` method will not throw when sending `indices` bigger than the dataset length; `indices` will be wrapped instead. This behavior is not documented anywhere, and is not intuitive. ## Steps to reproduce the bug ```python from datasets i...
[ -0.1581116766, -0.25665465, -0.0318490341, 0.3439429104, 0.0620099641, -0.1153479517, 0.2147520185, 0.1504350156, 0.0474207774, 0.1787699163, 0.0234423447, 0.4659059048, 0.1104499027, 0.2063288987, -0.2355961353, -0.0393909998, -0.0478052236, 0.1092761159, -0.0718755051, -0.206...
https://github.com/huggingface/datasets/issues/3706
Unable to load dataset 'big_patent'
Hi @ankitk2109, Have you tried passing the split name with the keyword `split=`? See e.g. an example in our Quick Start docs: https://huggingface.co/docs/datasets/quickstart.html#load-the-dataset-and-model
```python
ds = load_dataset("big_patent", "d", split="validation")
```
## Describe the bug Unable to load the "big_patent" dataset ## Steps to reproduce the bug ```python load_dataset('big_patent', 'd', 'validation') ``` ## Expected results Download big_patents' validation split from the 'd' subset ## Getting an error saying: {FileNotFoundError}Local file ..\huggingface\dat...
29
Unable to load dataset 'big_patent' ## Describe the bug Unable to load the "big_patent" dataset ## Steps to reproduce the bug ```python load_dataset('big_patent', 'd', 'validation') ``` ## Expected results Download big_patents' validation split from the 'd' subset ## Getting an error saying: {FileNotFo...
[ -0.2909577489, -0.4436421692, -0.0362716764, 0.4938029945, 0.3471500874, 0.1680756658, 0.1101178601, 0.2599283755, 0.123266384, 0.0131376898, -0.3273753524, 0.0412844308, -0.0739786997, 0.4042496979, 0.2557625175, -0.3181647062, 0.0480562225, 0.0218535829, -0.0038350688, 0.0419...
https://github.com/huggingface/datasets/issues/3706
Unable to load dataset 'big_patent'
Hi @albertvillanova, Thanks for your response. Yes, I tried `split='validation'` as well, but I'm getting the same issue.
## Describe the bug Unable to load the "big_patent" dataset ## Steps to reproduce the bug ```python load_dataset('big_patent', 'd', 'validation') ``` ## Expected results Download big_patents' validation split from the 'd' subset ## Getting an error saying: {FileNotFoundError}Local file ..\huggingface\dat...
18
Unable to load dataset 'big_patent' ## Describe the bug Unable to load the "big_patent" dataset ## Steps to reproduce the bug ```python load_dataset('big_patent', 'd', 'validation') ``` ## Expected results Download big_patents' validation split from the 'd' subset ## Getting an error saying: {FileNotFo...
[ -0.2848466933, -0.0606216937, -0.0639390498, 0.575901866, 0.2154278904, 0.1228088588, 0.2310488224, 0.4681129456, 0.1615882516, 0.0266631283, -0.2342445105, 0.0454667844, -0.0157030988, 0.1677100509, 0.2608820796, -0.2648475766, 0.0229797028, 0.0599010997, 0.001682676, -0.01707...
https://github.com/huggingface/datasets/issues/3706
Unable to load dataset 'big_patent'
I'm sorry, but I can't reproduce your problem: ```python In [5]: ds = load_dataset("big_patent", "d", split="validation") Downloading and preparing dataset big_patent/d (download: 6.01 GiB, generated: 169.61 MiB, post-processed: Unknown size, total: 6.17 GiB) to .../.cache/big_patent/d/1.0.0/bdefa7c0b39fba8bba1c6331...
## Describe the bug Unable to load the "big_patent" dataset ## Steps to reproduce the bug ```python load_dataset('big_patent', 'd', 'validation') ``` ## Expected results Download big_patents' validation split from the 'd' subset ## Getting an error saying: {FileNotFoundError}Local file ..\huggingface\dat...
72
Unable to load dataset 'big_patent' ## Describe the bug Unable to load the "big_patent" dataset ## Steps to reproduce the bug ```python load_dataset('big_patent', 'd', 'validation') ``` ## Expected results Download big_patents' validation split from the 'd' subset ## Getting an error saying: {FileNotFo...
[ -0.3682878315, -0.0730192587, -0.0981643498, 0.4866812825, 0.2612976134, 0.1087043956, 0.1902627051, 0.4533425868, 0.1328501701, 0.0507911704, -0.1489647776, 0.0408353321, -0.0469963737, 0.1007054374, 0.2328069359, -0.209725529, -0.0155005781, 0.02328844, 0.0051829605, 0.006029...
https://github.com/huggingface/datasets/issues/3706
Unable to load dataset 'big_patent'
Maybe you had a connection issue while downloading the file and it was corrupted? Our cache system reuses the file you downloaded the first time. If so, you could try forcing a re-download of the file with:
```python
ds = load_dataset("big_patent", "d", split="validation", download_mode="force_redownload")
```
## Describe the bug Unable to load the "big_patent" dataset ## Steps to reproduce the bug ```python load_dataset('big_patent', 'd', 'validation') ``` ## Expected results Download big_patents' validation split from the 'd' subset ## Getting an error saying: {FileNotFoundError}Local file ..\huggingface\dat...
42
Unable to load dataset 'big_patent' ## Describe the bug Unable to load the "big_patent" dataset ## Steps to reproduce the bug ```python load_dataset('big_patent', 'd', 'validation') ``` ## Expected results Download big_patents' validation split from the 'd' subset ## Getting an error saying: {FileNotFo...
[ -0.3243025839, -0.0634645075, -0.0910803452, 0.4624112546, 0.2718907893, 0.1327952147, 0.1778512746, 0.3625104427, 0.2114627212, 0.013645092, -0.1294031888, 0.0310820155, -0.0108539835, 0.0591872036, 0.2250686884, -0.1928537786, -0.0490768813, 0.0843679681, 0.0163288545, 0.0109...
https://github.com/huggingface/datasets/issues/3706
Unable to load dataset 'big_patent'
I am able to download the dataset with `download_mode="force_redownload"`. As you mentioned, it was an issue with the cached version, which had failed earlier due to a network issue. I am closing the issue now. Once again, thank you.
## Describe the bug Unable to load the "big_patent" dataset ## Steps to reproduce the bug ```python load_dataset('big_patent', 'd', 'validation') ``` ## Expected results Download big_patents' validation split from the 'd' subset ## Getting an error saying: {FileNotFoundError}Local file ..\huggingface\dat...
40
Unable to load dataset 'big_patent' ## Describe the bug Unable to load the "big_patent" dataset ## Steps to reproduce the bug ```python load_dataset('big_patent', 'd', 'validation') ``` ## Expected results Download big_patents' validation split from the 'd' subset ## Getting an error saying: {FileNotFo...
[ -0.2900757492, -0.0335867479, -0.0612993911, 0.5121245384, 0.2357622385, 0.1199185178, 0.2026650012, 0.4182065725, 0.1493887901, -0.0536810383, -0.1301836371, -0.002370433, -0.0229688324, 0.0843351781, 0.3025264144, -0.2059617341, 0.029935997, 0.047047846, -0.026889408, -0.0148...
https://github.com/huggingface/datasets/issues/3704
OSCAR-2109 datasets are misaligned and truncated
Hi @adrianeboyd, thanks for reporting. There is indeed a bug in that community dataset. The line:
```python
metadata_and_text_files = list(zip(metadata_files, text_files))
```
should be replaced with:
```python
metadata_and_text_files = list(zip(sorted(metadata_files), sorted(text_files)))
```
I am going to c...
## Describe the bug The `oscar-corpus/OSCAR-2109` data appears to be misaligned and truncated by the dataset builder for subsets that contain more than one part and for cases where the texts contain non-unix newlines. ## Steps to reproduce the bug A few examples, although I'm not sure how deterministic the par...
51
OSCAR-2109 datasets are misaligned and truncated ## Describe the bug The `oscar-corpus/OSCAR-2109` data appears to be misaligned and truncated by the dataset builder for subsets that contain more than one part and for cases where the texts contain non-unix newlines. ## Steps to reproduce the bug A few exampl...
[ -0.3790073395, 0.3509358466, 0.0164461136, 0.5407814384, -0.1386849582, 0.0147254271, 0.2296267599, 0.3700642288, -0.3047654331, -0.0990702137, 0.0361928083, -0.0641971603, 0.1635019481, -0.0632026196, 0.0641614646, -0.2708623111, 0.1433134079, 0.0770913064, 0.0587200187, -0.09...
https://github.com/huggingface/datasets/issues/3704
OSCAR-2109 datasets are misaligned and truncated
That fix is part of it, but it's clearly not the only issue. I also already contacted the OSCAR creators, but I reported it here because it looked like huggingface members were the main authors in the git history. Is there a better place to have reported this?
## Describe the bug The `oscar-corpus/OSCAR-2109` data appears to be misaligned and truncated by the dataset builder for subsets that contain more than one part and for cases where the texts contain non-unix newlines. ## Steps to reproduce the bug A few examples, although I'm not sure how deterministic the par...
48
OSCAR-2109 datasets are misaligned and truncated ## Describe the bug The `oscar-corpus/OSCAR-2109` data appears to be misaligned and truncated by the dataset builder for subsets that contain more than one part and for cases where the texts contain non-unix newlines. ## Steps to reproduce the bug A few exampl...
[ -0.3790073395, 0.3509358466, 0.0164461136, 0.5407814384, -0.1386849582, 0.0147254271, 0.2296267599, 0.3700642288, -0.3047654331, -0.0990702137, 0.0361928083, -0.0641971603, 0.1635019481, -0.0632026196, 0.0641614646, -0.2708623111, 0.1433134079, 0.0770913064, 0.0587200187, -0.09...
https://github.com/huggingface/datasets/issues/3704
OSCAR-2109 datasets are misaligned and truncated
Hello, We've had an issue that could be linked to this one here: https://github.com/oscar-corpus/corpus/issues/15. I have been spot-checking the source (`.txt`/`.jsonl`) files for a while and have not found issues, especially in the start/end of corpora (but I concede that more integration testing would be neces...
## Describe the bug The `oscar-corpus/OSCAR-2109` data appears to be misaligned and truncated by the dataset builder for subsets that contain more than one part and for cases where the texts contain non-unix newlines. ## Steps to reproduce the bug A few examples, although I'm not sure how deterministic the par...
118
OSCAR-2109 datasets are misaligned and truncated ## Describe the bug The `oscar-corpus/OSCAR-2109` data appears to be misaligned and truncated by the dataset builder for subsets that contain more than one part and for cases where the texts contain non-unix newlines. ## Steps to reproduce the bug A few exampl...
[ -0.3790073395, 0.3509358466, 0.0164461136, 0.5407814384, -0.1386849582, 0.0147254271, 0.2296267599, 0.3700642288, -0.3047654331, -0.0990702137, 0.0361928083, -0.0641971603, 0.1635019481, -0.0632026196, 0.0641614646, -0.2708623111, 0.1433134079, 0.0770913064, 0.0587200187, -0.09...
https://github.com/huggingface/datasets/issues/3704
OSCAR-2109 datasets are misaligned and truncated
I'm happy to move the discussion to the other repo! Merely sorting the files only **maybe** fixes the processing of the first part. If the first part contains non-unix newlines, it will still be misaligned/truncated, and all the following parts will be truncated with incorrect text offsets and metadata due to the offse...
## Describe the bug The `oscar-corpus/OSCAR-2109` data appears to be misaligned and truncated by the dataset builder for subsets that contain more than one part and for cases where the texts contain non-unix newlines. ## Steps to reproduce the bug A few examples, although I'm not sure how deterministic the par...
55
OSCAR-2109 datasets are misaligned and truncated ## Describe the bug The `oscar-corpus/OSCAR-2109` data appears to be misaligned and truncated by the dataset builder for subsets that contain more than one part and for cases where the texts contain non-unix newlines. ## Steps to reproduce the bug A few exampl...
[ -0.3790073395, 0.3509358466, 0.0164461136, 0.5407814384, -0.1386849582, 0.0147254271, 0.2296267599, 0.3700642288, -0.3047654331, -0.0990702137, 0.0361928083, -0.0641971603, 0.1635019481, -0.0632026196, 0.0641614646, -0.2708623111, 0.1433134079, 0.0770913064, 0.0587200187, -0.09...
https://github.com/huggingface/datasets/issues/3704
OSCAR-2109 datasets are misaligned and truncated
Hi @Uinelj, This is a total noob question, but how can I integrate that bugfix into my code? I reinstalled the datasets library, this time from source. Should that have fixed the issue? I am still facing the misalignment issue. Do I need to download the dataset from scratch?
## Describe the bug The `oscar-corpus/OSCAR-2109` data appears to be misaligned and truncated by the dataset builder for subsets that contain more than one part and for cases where the texts contain non-unix newlines. ## Steps to reproduce the bug A few examples, although I'm not sure how deterministic the par...
49
OSCAR-2109 datasets are misaligned and truncated ## Describe the bug The `oscar-corpus/OSCAR-2109` data appears to be misaligned and truncated by the dataset builder for subsets that contain more than one part and for cases where the texts contain non-unix newlines. ## Steps to reproduce the bug A few exampl...
[ -0.3790073395, 0.3509358466, 0.0164461136, 0.5407814384, -0.1386849582, 0.0147254271, 0.2296267599, 0.3700642288, -0.3047654331, -0.0990702137, 0.0361928083, -0.0641971603, 0.1635019481, -0.0632026196, 0.0641614646, -0.2708623111, 0.1433134079, 0.0770913064, 0.0587200187, -0.09...
https://github.com/huggingface/datasets/issues/3704
OSCAR-2109 datasets are misaligned and truncated
Sorry @norakassner for the late reply. There are indeed several issues creating the misalignment, as @adrianeboyd cleverly pointed out: - https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/commit/3cd7e95aa1799b73c5ea8afc3989635f3e19b86b fixed one of them - but there are still others to be fixed
## Describe the bug The `oscar-corpus/OSCAR-2109` data appears to be misaligned and truncated by the dataset builder for subsets that contain more than one part and for cases where the texts contain non-unix newlines. ## Steps to reproduce the bug A few examples, although I'm not sure how deterministic the par...
34
OSCAR-2109 datasets are misaligned and truncated ## Describe the bug The `oscar-corpus/OSCAR-2109` data appears to be misaligned and truncated by the dataset builder for subsets that contain more than one part and for cases where the texts contain non-unix newlines. ## Steps to reproduce the bug A few exampl...
[ -0.3790073395, 0.3509358466, 0.0164461136, 0.5407814384, -0.1386849582, 0.0147254271, 0.2296267599, 0.3700642288, -0.3047654331, -0.0990702137, 0.0361928083, -0.0641971603, 0.1635019481, -0.0632026196, 0.0641614646, -0.2708623111, 0.1433134079, 0.0770913064, 0.0587200187, -0.09...
https://github.com/huggingface/datasets/issues/3704
OSCAR-2109 datasets are misaligned and truncated
Normally, the issues should be fixed now:
- Fix offset initialization for each file: https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/commit/1ad9b7bfe00798a9258a923b887bb1c8d732b833
- Disable default universal newline support: https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/commit/0c2f307d3167f03632f50...
## Describe the bug The `oscar-corpus/OSCAR-2109` data appears to be misaligned and truncated by the dataset builder for subsets that contain more than one part and for cases where the texts contain non-unix newlines. ## Steps to reproduce the bug A few examples, although I'm not sure how deterministic the par...
35
OSCAR-2109 datasets are misaligned and truncated ## Describe the bug The `oscar-corpus/OSCAR-2109` data appears to be misaligned and truncated by the dataset builder for subsets that contain more than one part and for cases where the texts contain non-unix newlines. ## Steps to reproduce the bug A few exampl...
[ -0.3790073395, 0.3509358466, 0.0164461136, 0.5407814384, -0.1386849582, 0.0147254271, 0.2296267599, 0.3700642288, -0.3047654331, -0.0990702137, 0.0361928083, -0.0641971603, 0.1635019481, -0.0632026196, 0.0641614646, -0.2708623111, 0.1433134079, 0.0770913064, 0.0587200187, -0.09...
https://github.com/huggingface/datasets/issues/3704
OSCAR-2109 datasets are misaligned and truncated
Thanks for the updates! The purist in me would still like to have the rstrip not strip additional characters from the original text (unicode whitespace mainly in practice, I think), but the differences are extremely small in practice and it doesn't actually matter for my current task: ```python text = "".join([t...
## Describe the bug The `oscar-corpus/OSCAR-2109` data appears to be misaligned and truncated by the dataset builder for subsets that contain more than one part and for cases where the texts contain non-unix newlines. ## Steps to reproduce the bug A few examples, although I'm not sure how deterministic the par...
56
OSCAR-2109 datasets are misaligned and truncated ## Describe the bug The `oscar-corpus/OSCAR-2109` data appears to be misaligned and truncated by the dataset builder for subsets that contain more than one part and for cases where the texts contain non-unix newlines. ## Steps to reproduce the bug A few exampl...
[ -0.3790073395, 0.3509358466, 0.0164461136, 0.5407814384, -0.1386849582, 0.0147254271, 0.2296267599, 0.3700642288, -0.3047654331, -0.0990702137, 0.0361928083, -0.0641971603, 0.1635019481, -0.0632026196, 0.0641614646, -0.2708623111, 0.1433134079, 0.0770913064, 0.0587200187, -0.09...
https://github.com/huggingface/datasets/issues/3703
ImportError: To be able to use this metric, you need to install the following dependencies['seqeval'] using 'pip install seqeval' for instance'
Hi! Some of our metrics require additional dependencies to work. In your case, simply installing the `seqeval` package with `pip install seqeval` should resolve the issue.
hi : I want to use the seqeval indicator because of direct load_ When metric ('seqeval '), it will prompt that the network connection fails. So I downloaded the seqeval Py to load locally. Loading code: metric = load_ metric(path='mymetric/seqeval/seqeval.py') But tips: Traceback (most recent call last): File...
26
ImportError: To be able to use this metric, you need to install the following dependencies['seqeval'] using 'pip install seqeval' for instance' hi : I want to use the seqeval indicator because of direct load_ When metric ('seqeval '), it will prompt that the network connection fails. So I downloaded the seqeval Py t...
[ -0.1979477853, -0.0261322893, -0.0460704267, 0.048714228, 0.2330710441, -0.0966798216, 0.3034621775, -0.0408191793, 0.1501882523, 0.2724968195, -0.0265025795, 0.4663179517, -0.117960006, -0.0697474778, 0.0039565205, 0.055459097, -0.221604079, 0.2347217053, 0.0374869257, -0.0331...
https://github.com/huggingface/datasets/issues/3703
ImportError: To be able to use this metric, you need to install the following dependencies['seqeval'] using 'pip install seqeval' for instance'
> Hi! Some of our metrics require additional dependencies to work. In your case, simply installing the `seqeval` package with `pip install seqeval` should resolve the issue.

I installed seqeval, but it still reported the same error. That's too bad.
hi : I want to use the seqeval indicator because of direct load_ When metric ('seqeval '), it will prompt that the network connection fails. So I downloaded the seqeval Py to load locally. Loading code: metric = load_ metric(path='mymetric/seqeval/seqeval.py') But tips: Traceback (most recent call last): File...
39
ImportError: To be able to use this metric, you need to install the following dependencies['seqeval'] using 'pip install seqeval' for instance' hi : I want to use the seqeval indicator because of direct load_ When metric ('seqeval '), it will prompt that the network connection fails. So I downloaded the seqeval Py t...
[ -0.2025142908, -0.0279516838, -0.046689935, 0.0476437584, 0.229351297, -0.1002444327, 0.3064735532, -0.0407144725, 0.1468651444, 0.276491046, -0.0252320133, 0.4661178589, -0.1189252585, -0.0669726729, -0.0000810218, 0.0616327114, -0.2240398824, 0.2324857861, 0.0310468934, -0.02...
https://github.com/huggingface/datasets/issues/3703
ImportError: To be able to use this metric, you need to install the following dependencies['seqeval'] using 'pip install seqeval' for instance'
> > Hi! Some of our metrics require additional dependencies to work. In your case, simply installing the `seqeval` package with `pip install seqeval` should resolve the issue.
>
> I installed seqeval, but it still reported the same error. That's too bad.

Same issue here. What should I do to fix this error? Please help...
hi : I want to use the seqeval indicator because of direct load_ When metric ('seqeval '), it will prompt that the network connection fails. So I downloaded the seqeval Py to load locally. Loading code: metric = load_ metric(path='mymetric/seqeval/seqeval.py') But tips: Traceback (most recent call last): File...
57
ImportError: To be able to use this metric, you need to install the following dependencies['seqeval'] using 'pip install seqeval' for instance' hi : I want to use the seqeval indicator because of direct load_ When metric ('seqeval '), it will prompt that the network connection fails. So I downloaded the seqeval Py t...
[ -0.2018062174, -0.027352469, -0.0475676991, 0.0484961011, 0.2304400206, -0.0999850258, 0.3054769635, -0.0413617231, 0.1490331292, 0.2779763043, -0.0248936471, 0.4655256569, -0.1186925694, -0.0632172748, -0.0003957157, 0.0598772839, -0.225105837, 0.2342484146, 0.0314481072, -0.0...
https://github.com/huggingface/datasets/issues/3703
ImportError: To be able to use this metric, you need to install the following dependencies['seqeval'] using 'pip install seqeval' for instance'
I tried to install **seqeval** package through anaconda instead of pip: `conda install -c conda-forge seqeval` It worked for me!
hi : I want to use the seqeval indicator because of direct load_ When metric ('seqeval '), it will prompt that the network connection fails. So I downloaded the seqeval Py to load locally. Loading code: metric = load_ metric(path='mymetric/seqeval/seqeval.py') But tips: Traceback (most recent call last): File...
20
ImportError: To be able to use this metric, you need to install the following dependencies['seqeval'] using 'pip install seqeval' for instance' hi : I want to use the seqeval indicator because of direct load_ When metric ('seqeval '), it will prompt that the network connection fails. So I downloaded the seqeval Py t...
[ -0.2087471485, 0.0372918658, -0.0465946794, 0.0723289177, 0.2412151247, -0.0558816716, 0.2564868331, -0.0605246983, 0.1419683248, 0.2192429155, -0.0591691583, 0.4464016259, -0.0956755057, -0.0059291921, -0.0359144136, 0.1483167112, -0.2069616914, 0.2406514287, -0.0371867791, 0....
https://github.com/huggingface/datasets/issues/3700
Unable to load a dataset
Hi! `load_dataset` is intended to be used to load a canonical dataset (`wikipedia`), a packaged dataset (`csv`, `json`, ...) or a dataset hosted on the Hub. For local datasets saved with `save_to_disk("path/to/dataset")`, use `load_from_disk("path/to/dataset")`.
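A minimal sketch of the pairing described above, using the same names as the issue: a dataset saved with `save_to_disk` must be reloaded with `load_from_disk`, not with `load_dataset`.

```python
from datasets import load_dataset, load_from_disk

my_path = "wiki_dataset"

dataset = load_dataset("wikipedia", "20200501.fr")
dataset.save_to_disk(my_path)

# Reload the saved copy with load_from_disk, not load_dataset.
dataset = load_from_disk(my_path)
```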
## Describe the bug Unable to load a dataset from Huggingface that I have just saved. ## Steps to reproduce the bug On Google colab `! pip install datasets ` `from datasets import load_dataset` `my_path = "wiki_dataset"` `dataset = load_dataset('wikipedia', "20200501.fr")` `dataset.save_to_disk(my_path)` `...
34
Unable to load a dataset ## Describe the bug Unable to load a dataset from Huggingface that I have just saved. ## Steps to reproduce the bug On Google colab `! pip install datasets ` `from datasets import load_dataset` `my_path = "wiki_dataset"` `dataset = load_dataset('wikipedia', "20200501.fr")` `datase...
[ -0.2790259123, -0.2967556417, 0.0411759429, 0.6267769337, 0.3320978582, 0.1287171245, 0.2145056725, -0.0036905431, 0.3158269823, 0.100925535, -0.2397874296, 0.3568312526, -0.0890317336, 0.3122532368, 0.0596994087, -0.2918172181, 0.1328980625, -0.1646209657, -0.1491223723, 0.021...
https://github.com/huggingface/datasets/issues/3688
Pyarrow version error
Hi @Zaker237, thanks for reporting. This is weird: the error you get is only thrown if the installed pyarrow version is less than 3.0.0. Could you please check that you installed pyarrow in the same Python virtual environment where you installed datasets? From the Python command line (or terminal) where you get ...
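A quick way to check, run from the same environment where `datasets` is installed (the minimum version comes from the error message quoted in this issue):

```python
import pyarrow

# datasets requires pyarrow >= 3.0.0 here; this shows what the current
# environment actually resolves.
print(pyarrow.__version__)
```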
## Describe the bug I installed datasets(version 1.17.0, 1.18.0, 1.18.3) but i'm right now nor able to import it because of pyarrow. when i try to import it, i get the following error: `To use datasets, the module pyarrow>=3.0.0 is required, and the current version of pyarrow doesn't match this condition`. i tryed w...
64
Pyarrow version error ## Describe the bug I installed datasets(version 1.17.0, 1.18.0, 1.18.3) but i'm right now nor able to import it because of pyarrow. when i try to import it, i get the following error: `To use datasets, the module pyarrow>=3.0.0 is required, and the current version of pyarrow doesn't match thi...
[ -0.4256339073, 0.2237600088, -0.0069542294, 0.1207932457, 0.0590526015, 0.0466515869, 0.1877537966, 0.3738027811, -0.2632450461, -0.0977328941, 0.1039726883, 0.30188784, -0.1692364514, -0.0959769338, -0.0230374504, -0.1642787755, 0.3303413987, 0.1971314251, -0.0938647017, 0.050...
https://github.com/huggingface/datasets/issues/3688
Pyarrow version error
Hi @albertvillanova, yesterday I tried creating a new Python environment (Python 3.7) and running it there, and it worked. So I think the error was not in the package but maybe in the Jupyter notebook on conda. I'm still not sure, but it worked in an environment created with venv.
## Describe the bug I installed datasets(version 1.17.0, 1.18.0, 1.18.3) but i'm right now nor able to import it because of pyarrow. when i try to import it, i get the following error: `To use datasets, the module pyarrow>=3.0.0 is required, and the current version of pyarrow doesn't match this condition`. i tryed w...
55
Pyarrow version error ## Describe the bug I installed datasets(version 1.17.0, 1.18.0, 1.18.3) but i'm right now nor able to import it because of pyarrow. when i try to import it, i get the following error: `To use datasets, the module pyarrow>=3.0.0 is required, and the current version of pyarrow doesn't match thi...
[ -0.4256339073, 0.2237600088, -0.0069542294, 0.1207932457, 0.0590526015, 0.0466515869, 0.1877537966, 0.3738027811, -0.2632450461, -0.0977328941, 0.1039726883, 0.30188784, -0.1692364514, -0.0959769338, -0.0230374504, -0.1642787755, 0.3303413987, 0.1971314251, -0.0938647017, 0.050...
https://github.com/huggingface/datasets/issues/3688
Pyarrow version error
OK, thanks @Zaker237 for your feedback. I close this issue then. Please, feel free to reopen it if the problem arises again.
## Describe the bug I installed datasets(version 1.17.0, 1.18.0, 1.18.3) but i'm right now nor able to import it because of pyarrow. when i try to import it, i get the following error: `To use datasets, the module pyarrow>=3.0.0 is required, and the current version of pyarrow doesn't match this condition`. i tryed w...
22
Pyarrow version error ## Describe the bug I installed datasets(version 1.17.0, 1.18.0, 1.18.3) but i'm right now nor able to import it because of pyarrow. when i try to import it, i get the following error: `To use datasets, the module pyarrow>=3.0.0 is required, and the current version of pyarrow doesn't match thi...
[ -0.4256339073, 0.2237600088, -0.0069542294, 0.1207932457, 0.0590526015, 0.0466515869, 0.1877537966, 0.3738027811, -0.2632450461, -0.0977328941, 0.1039726883, 0.30188784, -0.1692364514, -0.0959769338, -0.0230374504, -0.1642787755, 0.3303413987, 0.1971314251, -0.0938647017, 0.050...
https://github.com/huggingface/datasets/issues/3687
Can't get the text data when calling to_tf_dataset
You are correct that `to_tf_dataset` only handles numerical columns right now, yes, though this is a limitation we might remove in future! The main reason we do this is that our models mostly do not include the tokenizer as a model layer, because it's very difficult to compile some of them in TF. So the "normal" Huggin...
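To illustrate the "normal" workflow sketched above, a hedged example: tokenize the text column with `map` first, then pass only the numerical columns to `to_tf_dataset`. The checkpoint name, batch size, and column names are illustrative (they match the SST-2 setup used in this issue).

```python
from datasets import load_dataset
from transformers import AutoTokenizer, DefaultDataCollator

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
data_collator = DefaultDataCollator(return_tensors="tf")

dataset = load_dataset("glue", "sst2", split="train")

# Turn the text column into numerical features before conversion.
dataset = dataset.map(
    lambda batch: tokenizer(batch["sentence"], truncation=True), batched=True
)

tf_dataset = dataset.to_tf_dataset(
    columns=["input_ids", "attention_mask"],
    label_cols=["label"],
    batch_size=32,
    shuffle=True,
    collate_fn=data_collator,
)
```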
I am working with the SST2 dataset, and am using TensorFlow 2.5 I'd like to convert it to a `tf.data.Dataset` by calling the `to_tf_dataset` method. The following snippet is what I am using to achieve this: ``` from datasets import load_dataset from transformers import DefaultDataCollator data_collator = Defa...
96
Can't get the text data when calling to_tf_dataset I am working with the SST2 dataset, and am using TensorFlow 2.5 I'd like to convert it to a `tf.data.Dataset` by calling the `to_tf_dataset` method. The following snippet is what I am using to achieve this: ``` from datasets import load_dataset from transforme...
[ 0.306876719, -0.2420510054, 0.0682803616, 0.3844340146, 0.311304301, 0.1206924245, 0.3427685201, 0.4319320321, -0.1418879032, 0.1407022178, -0.0737772956, -0.0379524305, 0.037617296, 0.3645371497, 0.1628638953, -0.2193426192, 0.0983592644, 0.1495360136, -0.3627763391, -0.218728...
https://github.com/huggingface/datasets/issues/3687
Can't get the text data when calling to_tf_dataset
Thanks for the quick follow-up to my issue. For my use-case, instead of the built-in tokenizers I wanted to use the `TextVectorization` layer to map from strings to integers. To achieve this, I came up with the following solution: ``` from datasets import load_dataset from transformers import DefaultDataCollato...
I am working with the SST2 dataset, and am using TensorFlow 2.5 I'd like to convert it to a `tf.data.Dataset` by calling the `to_tf_dataset` method. The following snippet is what I am using to achieve this: ``` from datasets import load_dataset from transformers import DefaultDataCollator data_collator = Defa...
212
Can't get the text data when calling to_tf_dataset I am working with the SST2 dataset, and am using TensorFlow 2.5 I'd like to convert it to a `tf.data.Dataset` by calling the `to_tf_dataset` method. The following snippet is what I am using to achieve this: ``` from datasets import load_dataset from transforme...
[ 0.306876719, -0.2420510054, 0.0682803616, 0.3844340146, 0.311304301, 0.1206924245, 0.3427685201, 0.4319320321, -0.1418879032, 0.1407022178, -0.0737772956, -0.0379524305, 0.037617296, 0.3645371497, 0.1628638953, -0.2193426192, 0.0983592644, 0.1495360136, -0.3627763391, -0.218728...
https://github.com/huggingface/datasets/issues/3687
Can't get the text data when calling to_tf_dataset
> For the future, however, it'd be more convenient to get the string data, since I am also inspecting the dataset (longest sentence, shortest sentence), which is more challenging when working with integer or float. Yes, I agree, so let's keep this issue open.
I am working with the SST2 dataset, and am using TensorFlow 2.5 I'd like to convert it to a `tf.data.Dataset` by calling the `to_tf_dataset` method. The following snippet is what I am using to achieve this: ``` from datasets import load_dataset from transformers import DefaultDataCollator data_collator = Defa...
44
Can't get the text data when calling to_tf_dataset I am working with the SST2 dataset, and am using TensorFlow 2.5 I'd like to convert it to a `tf.data.Dataset` by calling the `to_tf_dataset` method. The following snippet is what I am using to achieve this: ``` from datasets import load_dataset from transforme...
[ 0.306876719, -0.2420510054, 0.0682803616, 0.3844340146, 0.311304301, 0.1206924245, 0.3427685201, 0.4319320321, -0.1418879032, 0.1407022178, -0.0737772956, -0.0379524305, 0.037617296, 0.3645371497, 0.1628638953, -0.2193426192, 0.0983592644, 0.1495360136, -0.3627763391, -0.218728...
https://github.com/huggingface/datasets/issues/3686
`Translation` features cannot be `flatten`ed
Thanks for reporting, @SBrandeis! Some additional feature types that don't behave as expected when flattened: `Audio`, `Image` and `TranslationVariableLanguages`
## Describe the bug (`Dataset.flatten`)[https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L1265] fails for columns with feature (`Translation`)[https://github.com/huggingface/datasets/blob/3edbeb0ec6519b79f1119adc251a1a6b379a2c12/src/datasets/features/translation.py#L8] ## Steps to...
19
`Translation` features cannot be `flatten`ed ## Describe the bug (`Dataset.flatten`)[https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L1265] fails for columns with feature (`Translation`)[https://github.com/huggingface/datasets/blob/3edbeb0ec6519b79f1119adc251a1a6b379a2c12/src/data...
[ 0.0999352336, -0.5199523568, 0.0124358097, 0.3275805712, 0.3463025689, 0.0602360927, 0.3220860362, 0.1855656654, 0.0887600854, 0.1518424451, 0.0338772871, 0.6146896482, 0.2447858155, 0.2951707542, -0.2488523573, -0.3565292656, 0.170593679, 0.0839366689, -0.1575884372, -0.081798...
https://github.com/huggingface/datasets/issues/3679
Download datasets from a private hub
Hi ! For information one can set the environment variable `HF_ENDPOINT` (default is `https://huggingface.co`) if they want to use a private hub. We may need to coordinate with the other libraries to have a consistent way of changing the hub endpoint
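A hedged sketch of how a private hub could be targeted with that variable; the endpoint URL and dataset name are made up. Note that `datasets` reads `HF_ENDPOINT` when the library is imported, so the variable has to be set before the import (or exported in the shell beforehand).

```python
import os

os.environ["HF_ENDPOINT"] = "https://hub.my-company.example"  # illustrative endpoint

from datasets import load_dataset

ds = load_dataset("my-org/my-private-dataset", use_auth_token=True)
```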
In the context of a private hub deployment, customers would like to use load_dataset() to load datasets from their hub, not from the public hub. This doesn't seem to be configurable at the moment and it would be nice to add this feature. The obvious workaround is to clone the repo first and then load it from local s...
41
Download datasets from a private hub In the context of a private hub deployment, customers would like to use load_dataset() to load datasets from their hub, not from the public hub. This doesn't seem to be configurable at the moment and it would be nice to add this feature. The obvious workaround is to clone the r...
[ -0.4606108665, 0.0771518275, 0.0306206644, 0.1708162427, -0.211316064, -0.1057054624, 0.5121506453, 0.2544353306, 0.4819358885, 0.2293823361, -0.5677485466, 0.197627902, 0.2814869881, 0.2804977596, 0.2473968565, 0.0667531788, 0.0141825834, 0.3287018239, -0.157313481, -0.2259977...
https://github.com/huggingface/datasets/issues/3677
Discovery cannot be streamed anymore
Seems like a regression from https://github.com/huggingface/datasets/pull/2843 Or maybe it's an issue with the hosting. I don't think so, though, because https://www.dropbox.com/s/aox84z90nyyuikz/discovery.zip seems to work as expected
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python from datasets import load_dataset iterable_dataset = load_dataset("discovery", name="discovery", split="train", streaming=True) list(iterable_dataset.take(1)) ``` ## Expected results The first ...
26
Discovery cannot be streamed anymore ## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python from datasets import load_dataset iterable_dataset = load_dataset("discovery", name="discovery", split="train", streaming=True) list(iterable_dataset.take(1)) `...
[ -0.3655185997, -0.0733559355, -0.032282766, -0.0026658794, 0.150663197, -0.1250083745, 0.1500283629, 0.4487424493, -0.0735848546, 0.1144562662, -0.2661015987, 0.3279671073, -0.2113558352, 0.1654202789, 0.2201966941, -0.1466775239, -0.0193212759, 0.0676078498, -0.0802748948, -0....
https://github.com/huggingface/datasets/issues/3677
Discovery cannot be streamed anymore
Hi @severo, thanks for reporting. Some servers do not support HTTP range requests, and those are required to stream some file formats (like ZIP in this case). Let me try to propose a workaround.
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python from datasets import load_dataset iterable_dataset = load_dataset("discovery", name="discovery", split="train", streaming=True) list(iterable_dataset.take(1)) ``` ## Expected results The first ...
34
Discovery cannot be streamed anymore ## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python from datasets import load_dataset iterable_dataset = load_dataset("discovery", name="discovery", split="train", streaming=True) list(iterable_dataset.take(1)) `...
[ -0.3655185997, -0.0733559355, -0.032282766, -0.0026658794, 0.150663197, -0.1250083745, 0.1500283629, 0.4487424493, -0.0735848546, 0.1144562662, -0.2661015987, 0.3279671073, -0.2113558352, 0.1654202789, 0.2201966941, -0.1466775239, -0.0193212759, 0.0676078498, -0.0802748948, -0....
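As a rough way to check the explanation in the comment above (streaming ZIP files requires HTTP range requests), one can look at the `Accept-Ranges` header of the host; this is only an illustrative probe, since some servers support ranges without advertising the header.

```python
import requests

url = "https://www.dropbox.com/s/aox84z90nyyuikz/discovery.zip"

# servers that support range requests usually advertise it via Accept-Ranges
resp = requests.head(url, allow_redirects=True)
print(resp.headers.get("Accept-Ranges"))  # "bytes" when range requests are supported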
https://github.com/huggingface/datasets/issues/3676
`None` replaced by `[]` after first batch in map
It looks like this is because of this behavior in pyarrow: ```python import pyarrow as pa arr = pa.array([None, [0]]) reconstructed_arr = pa.ListArray.from_arrays(arr.offsets, arr.values) print(reconstructed_arr.to_pylist()) # [[], [0]] ``` It seems that `arr.offsets` can reconstruct the array properly, but...
Sometimes `None` can be replaced by `[]` when running map: ```python from datasets import Dataset ds = Dataset.from_dict({"a": range(4)}) ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"]) print(ds.to_pandas()) # b # 0 [None, [0]] # 1 [[], [0]] # ...
89
`None` replaced by `[]` after first batch in map Sometimes `None` can be replaced by `[]` when running map: ```python from datasets import Dataset ds = Dataset.from_dict({"a": range(4)}) ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"]) print(ds.to_pandas()) # ...
[ -0.1136860177, -0.191415295, -0.0299211461, 0.0782203823, 0.4104258418, -0.0416076481, 0.6156228781, 0.0721306652, -0.0982016549, 0.1066264659, -0.1036111191, 0.3490240872, 0.1736710072, -0.4175100029, -0.1059341058, 0.028131295, 0.1095845029, 0.4044575393, -0.1162986234, -0.06...
https://github.com/huggingface/datasets/issues/3676
`None` replaced by `[]` after first batch in map
The offsets don't have nulls because they don't include the validity bitmap from `arr.buffers()[0]`, which is used to say which values are null and which values are non-null. Though the validity bitmap also seems to be wrong: ```python bin(int(arr.buffers()[0].hex(), 16)) # '0b10' # it should be 0b110 - 1 corres...
Sometimes `None` can be replaced by `[]` when running map: ```python from datasets import Dataset ds = Dataset.from_dict({"a": range(4)}) ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"]) print(ds.to_pandas()) # b # 0 [None, [0]] # 1 [[], [0]] # ...
158
`None` replaced by `[]` after first batch in map Sometimes `None` can be replaced by `[]` when running map: ```python from datasets import Dataset ds = Dataset.from_dict({"a": range(4)}) ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"]) print(ds.to_pandas()) # ...
[ -0.0358240791, -0.2432300597, -0.0416246466, 0.1726666093, 0.3844224215, 0.0432761833, 0.6972999573, 0.2239293009, -0.0032909082, 0.144280538, -0.0846433192, 0.3390791714, 0.0880538225, -0.2586378157, -0.1205720678, -0.0456771702, 0.1046745926, 0.4155332446, -0.1235405356, -0.0...
https://github.com/huggingface/datasets/issues/3676
`None` replaced by `[]` after first batch in map
FYI the behavior is the same with: - `datasets` version: 1.18.3 - Platform: Linux-5.8.0-50-generic-x86_64-with-debian-bullseye-sid - Python version: 3.7.11 - PyArrow version: 6.0.1 but not with: - `datasets` version: 1.8.0 - Platform: Linux-4.18.0-305.40.2.el8_4.x86_64-x86_64-with-redhat-8.4-Ootpa - Python ...
Sometimes `None` can be replaced by `[]` when running map: ```python from datasets import Dataset ds = Dataset.from_dict({"a": range(4)}) ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"]) print(ds.to_pandas()) # b # 0 [None, [0]] # 1 [[], [0]] # ...
57
`None` replaced by `[]` after first batch in map Sometimes `None` can be replaced by `[]` when running map: ```python from datasets import Dataset ds = Dataset.from_dict({"a": range(4)}) ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"]) print(ds.to_pandas()) # ...
[ -0.120345071, -0.4828740358, -0.0233135875, 0.090897873, 0.4305691123, 0.0681472272, 0.6334287524, 0.1448312998, 0.2020837963, 0.1019381881, -0.0855532065, 0.3341479897, 0.1260546744, -0.1297750026, -0.2391028702, -0.0439484753, 0.1119943112, 0.2894803882, -0.2681235969, -0.047...
https://github.com/huggingface/datasets/issues/3676
`None` replaced by `[]` after first batch in map
Thanks for the insights @PaulLerner! I found a way to work around this issue for the code example presented in this issue. Note that empty lists will still appear when you explicitly `cast` a list of lists that contain None values like [None, [0]] to a new feature type (e.g. to change the integer precision). In t...
Sometimes `None` can be replaced by `[]` when running map: ```python from datasets import Dataset ds = Dataset.from_dict({"a": range(4)}) ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"]) print(ds.to_pandas()) # b # 0 [None, [0]] # 1 [[], [0]] # ...
87
`None` replaced by `[]` after first batch in map Sometimes `None` can be replaced by `[]` when running map: ```python from datasets import Dataset ds = Dataset.from_dict({"a": range(4)}) ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"]) print(ds.to_pandas()) # ...
[ -0.0742934793, -0.4457454085, -0.0595099702, 0.0777366906, 0.5046238899, 0.0152352257, 0.6655239463, 0.2754406631, 0.278647393, 0.2214797139, -0.0979031622, 0.2897973359, 0.0528755337, 0.0477773137, -0.3361816406, -0.1497146189, 0.0821734816, 0.3947313726, -0.247402519, -0.1306...
https://github.com/huggingface/datasets/issues/3676
`None` replaced by `[]` after first batch in map
Hi! I feel like I’m missing something in your answer, *what* is the workaround? Is it fixed in some `datasets` version?
Sometimes `None` can be replaced by `[]` when running map: ```python from datasets import Dataset ds = Dataset.from_dict({"a": range(4)}) ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"]) print(ds.to_pandas()) # b # 0 [None, [0]] # 1 [[], [0]] # ...
21
`None` replaced by `[]` after first batch in map Sometimes `None` can be replaced by `[]` when running map: ```python from datasets import Dataset ds = Dataset.from_dict({"a": range(4)}) ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"]) print(ds.to_pandas()) # ...
[ -0.1706128418, -0.456341058, -0.0308786687, 0.0773481429, 0.3964784145, 0.0354282483, 0.6240733266, 0.0938024968, 0.2293993235, 0.1052416265, -0.02331887, 0.3047322631, 0.1261317134, -0.0278188307, -0.2588295341, -0.0395659097, 0.1031301022, 0.330183208, -0.2813706994, -0.02351...
https://github.com/huggingface/datasets/issues/3676
`None` replaced by `[]` after first batch in map
`pa.ListArray.from_arrays` returns empty lists instead of None values. The workaround I added inside `datasets` simply consists in not using `pa.ListArray.from_arrays` :) Once this PR [here](https://github.com/huggingface/datasets/pull/4282) is merged, we'll release a new version of `datasets` that correctly returns...
Sometimes `None` can be replaced by `[]` when running map: ```python from datasets import Dataset ds = Dataset.from_dict({"a": range(4)}) ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"]) print(ds.to_pandas()) # b # 0 [None, [0]] # 1 [[], [0]] # ...
47
`None` replaced by `[]` after first batch in map Sometimes `None` can be replaced by `[]` when running map: ```python from datasets import Dataset ds = Dataset.from_dict({"a": range(4)}) ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"]) print(ds.to_pandas()) # ...
[ -0.1458847523, -0.4942373335, -0.0464127921, 0.1250201464, 0.454267323, 0.0150745008, 0.5315796137, 0.1924663633, 0.2851714194, 0.1692066044, -0.1701225042, 0.3427030444, 0.0828172192, -0.0368684493, -0.2374550849, -0.1015476733, 0.0718248561, 0.4054085612, -0.3310920596, -0.08...
https://github.com/huggingface/datasets/issues/3673
`load_dataset("snli")` is different from dataset viewer
Yes, we decided to replace the encoded label with the corresponding label when possible in the dataset viewer. But 1. maybe it's the wrong default 2. we could find a way to show both (with a switch, or showing both ie. `0 (neutral)`).
## Describe the bug The dataset that is downloaded from the Hub via `load_dataset("snli")` is different from what is available in the dataset viewer. In the viewer the labels are not encoded (i.e., "neutral", "entailment", "contradiction"), while the downloaded dataset shows the encoded labels (i.e., 0, 1, 2). Is t...
43
`load_dataset("snli")` is different from dataset viewer ## Describe the bug The dataset that is downloaded from the Hub via `load_dataset("snli")` is different from what is available in the dataset viewer. In the viewer the labels are not encoded (i.e., "neutral", "entailment", "contradiction"), while the downloaded...
[ -0.039305117, -0.1823725253, -0.0017421179, 0.4216586053, 0.0142072439, -0.0808521807, 0.5498264432, 0.0203979071, 0.2802698314, 0.2657594085, -0.4218340218, 0.5688483119, 0.1886245906, 0.1806402951, -0.0699716955, 0.1307980418, 0.302164346, 0.3575282693, 0.1494957656, -0.34412...
https://github.com/huggingface/datasets/issues/3673
`load_dataset("snli")` is different from dataset viewer
Hi @severo, Thanks for clarifying. I think this default is a bit counterintuitive for the user. However, this is a personal opinion that might not be general. I think it is nice to have the actual (non-encoded) labels in the viewer. On the other hand, it would be nice to match what the user sees with what they g...
## Describe the bug The dataset that is downloaded from the Hub via `load_dataset("snli")` is different from what is available in the dataset viewer. In the viewer the labels are not encoded (i.e., "neutral", "entailment", "contradiction"), while the downloaded dataset shows the encoded labels (i.e., 0, 1, 2). Is t...
103
`load_dataset("snli")` is different from dataset viewer ## Describe the bug The dataset that is downloaded from the Hub via `load_dataset("snli")` is different from what is available in the dataset viewer. In the viewer the labels are not encoded (i.e., "neutral", "entailment", "contradiction"), while the downloaded...
[ -0.1473938376, -0.2301128209, 0.0229258053, 0.3064574599, 0.105483681, -0.079607524, 0.6571475267, 0.0317922123, 0.2773874998, 0.3631367683, -0.294644922, 0.5051845908, 0.1723775268, 0.1997693926, -0.1526102424, 0.0767769217, 0.2159674466, 0.2978663743, 0.1306685358, -0.3374619...
https://github.com/huggingface/datasets/issues/3673
`load_dataset("snli")` is different from dataset viewer
Thanks for the 👍 in https://github.com/huggingface/datasets/issues/3673#issuecomment-1029008349 @mariosasko @gary149 @pietrolesci, but as I proposed various solutions, it's not clear to me which you prefer. Could you write your preferences as a comment? _(note for myself: one idea per comment in the future)_
## Describe the bug The dataset that is downloaded from the Hub via `load_dataset("snli")` is different from what is available in the dataset viewer. In the viewer the labels are not encoded (i.e., "neutral", "entailment", "contradiction"), while the downloaded dataset shows the encoded labels (i.e., 0, 1, 2). Is t...
41
`load_dataset("snli")` is different from dataset viewer ## Describe the bug The dataset that is downloaded from the Hub via `load_dataset("snli")` is different from what is available in the dataset viewer. In the viewer the labels are not encoded (i.e., "neutral", "entailment", "contradiction"), while the downloaded...
[ -0.1536538899, -0.2408145666, -0.0024182508, 0.4605814815, 0.0789478496, -0.0453554243, 0.5275521278, 0.0667689666, 0.3200162947, 0.2510012984, -0.4140202403, 0.5160106421, 0.1262125075, 0.3489980698, -0.0521481708, -0.0057207183, 0.2405710816, 0.2476889938, 0.1201811507, -0.23...
https://github.com/huggingface/datasets/issues/3673
`load_dataset("snli")` is different from dataset viewer
As I am working with seq2seq, I prefer having the label in string form rather than numeric. So the viewer is fine and the underlying dataset should be "decoded" (from int to str). In this way, the user does not have to search for a mapping `int -> original name` (even though it is trivial to find, I reckon). Also, encodin...
## Describe the bug The dataset that is downloaded from the Hub via `load_dataset("snli")` is different from what is available in the dataset viewer. In the viewer the labels are not encoded (i.e., "neutral", "entailment", "contradiction"), while the downloaded dataset shows the encoded labels (i.e., 0, 1, 2). Is t...
69
`load_dataset("snli")` is different from dataset viewer ## Describe the bug The dataset that is downloaded from the Hub via `load_dataset("snli")` is different from what is available in the dataset viewer. In the viewer the labels are not encoded (i.e., "neutral", "entailment", "contradiction"), while the downloaded...
[ 0.0364709161, 0.0039341887, 0.0107738748, 0.4978532493, 0.0224968474, -0.0208923295, 0.4548290372, 0.1088768616, 0.1855095774, 0.0503162295, -0.4681448042, 0.4753015935, 0.0338534489, 0.160325557, -0.026979344, -0.004192648, 0.2986780405, 0.1782975048, 0.131427139, -0.321761727...
https://github.com/huggingface/datasets/issues/3673
`load_dataset("snli")` is different from dataset viewer
I like the idea of "0 (neutral)". The label name can even be greyed to make it clear that it's not part of the actual item in the dataset, it's just the meaning.
## Describe the bug The dataset that is downloaded from the Hub via `load_dataset("snli")` is different from what is available in the dataset viewer. In the viewer the labels are not encoded (i.e., "neutral", "entailment", "contradiction"), while the downloaded dataset shows the encoded labels (i.e., 0, 1, 2). Is t...
33
`load_dataset("snli")` is different from dataset viewer ## Describe the bug The dataset that is downloaded from the Hub via `load_dataset("snli")` is different from what is available in the dataset viewer. In the viewer the labels are not encoded (i.e., "neutral", "entailment", "contradiction"), while the downloaded...
[ -0.0656780526, -0.2434972227, -0.0010202014, 0.4355776608, 0.0055615716, -0.0518786833, 0.5364513397, 0.0190292206, 0.2789101601, 0.2169718742, -0.3383785486, 0.5619642138, 0.0860194415, 0.1822406203, -0.1028691158, 0.0764553472, 0.2609826922, 0.3385197818, 0.2346190661, -0.386...
https://github.com/huggingface/datasets/issues/3673
`load_dataset("snli")` is different from dataset viewer
Proposals by @gary149. Which one do you prefer? Please vote with the thumbs - 👍 ![image](https://user-images.githubusercontent.com/1676121/152387949-883c7d7e-a9f3-48aa-bff9-11a691555e6e.png) - 👎 ![image (1)](https://user-images.githubusercontent.com/1676121/152388061-32d95e42-cade-4ae4-9a77-7365...
## Describe the bug The dataset that is downloaded from the Hub via `load_dataset("snli")` is different from what is available in the dataset viewer. In the viewer the labels are not encoded (i.e., "neutral", "entailment", "contradiction"), while the downloaded dataset shows the encoded labels (i.e., 0, 1, 2). Is t...
20
`load_dataset("snli")` is different from dataset viewer ## Describe the bug The dataset that is downloaded from the Hub via `load_dataset("snli")` is different from what is available in the dataset viewer. In the viewer the labels are not encoded (i.e., "neutral", "entailment", "contradiction"), while the downloaded...
[ -0.130245626, -0.1830823123, -0.0252133906, 0.5175071359, -0.0003679443, -0.049284637, 0.5257972479, 0.088769637, 0.2461693287, 0.1868774444, -0.3800661862, 0.5411986113, 0.1030181944, 0.2068627775, -0.0096952422, 0.0477102585, 0.31226933, 0.2211066931, 0.166265741, -0.32867163...
https://github.com/huggingface/datasets/issues/3673
`load_dataset("snli")` is different from dataset viewer
It's [live](https://huggingface.co/datasets/glue/viewer/cola/train): <img width="1126" alt="Capture d’écran 2022-02-14 à 10 26 03" src="https://user-images.githubusercontent.com/1676121/153836716-25f6205b-96af-42d8-880a-7c09cb24c420.png"> Thanks all for the help to improve the UI!
## Describe the bug The dataset that is downloaded from the Hub via `load_dataset("snli")` is different from what is available in the dataset viewer. In the viewer the labels are not encoded (i.e., "neutral", "entailment", "contradiction"), while the downloaded dataset shows the encoded labels (i.e., 0, 1, 2). Is t...
21
`load_dataset("snli")` is different from dataset viewer ## Describe the bug The dataset that is downloaded from the Hub via `load_dataset("snli")` is different from what is available in the dataset viewer. In the viewer the labels are not encoded (i.e., "neutral", "entailment", "contradiction"), while the downloaded...
[ -0.0064431694, -0.2428971231, 0.0197080467, 0.6077648401, 0.0732069612, 0.0053166631, 0.5198222995, 0.0131179905, 0.2602276802, 0.2141701728, -0.4702810645, 0.5367407203, 0.1597673297, 0.2426183671, 0.0053447862, -0.056369748, 0.2554402649, 0.1189139485, 0.0553383604, -0.307682...
https://github.com/huggingface/datasets/issues/3668
Couldn't cast array of type string error with cast_column
Hi ! I wasn't able to reproduce the error, are you still experiencing this ? I tried calling `cast_column` on a string column containing paths. If you manage to share a reproducible code example that would be perfect
## Describe the bug In OVH cloud during Huggingface Robust-speech-recognition event on a AI training notebook instance using jupyter lab and running jupyter notebook When using the dataset.cast_column("audio",Audio(sampling_rate=16_000)) method I get error ![image](https://user-images.githubusercontent.com/25264...
38
Couldn't cast array of type string error with cast_column ## Describe the bug In OVH cloud during Huggingface Robust-speech-recognition event on a AI training notebook instance using jupyter lab and running jupyter notebook When using the dataset.cast_column("audio",Audio(sampling_rate=16_000)) method I get er...
[ -0.2133601904, -0.2731957436, 0.0802019238, 0.0617938526, 0.5655031204, -0.1390096396, 0.436532706, 0.4131578207, -0.0390901379, 0.0874474719, -0.2246664315, 0.2103304267, -0.1993308216, 0.2030956447, 0.0246877912, -0.5783651471, 0.253187716, 0.0518443175, -0.2568005919, -0.171...
https://github.com/huggingface/datasets/issues/3668
Couldn't cast array of type string error with cast_column
Hi, I think my teammate got this solved. Closing it for now and will reopen if I experience this again. Thanks :)
## Describe the bug In OVH cloud during Huggingface Robust-speech-recognition event on a AI training notebook instance using jupyter lab and running jupyter notebook When using the dataset.cast_column("audio",Audio(sampling_rate=16_000)) method I get error ![image](https://user-images.githubusercontent.com/25264...
23
Couldn't cast array of type string error with cast_column ## Describe the bug In OVH cloud during Huggingface Robust-speech-recognition event on a AI training notebook instance using jupyter lab and running jupyter notebook When using the dataset.cast_column("audio",Audio(sampling_rate=16_000)) method I get er...
[ -0.2133601904, -0.2731957436, 0.0802019238, 0.0617938526, 0.5655031204, -0.1390096396, 0.436532706, 0.4131578207, -0.0390901379, 0.0874474719, -0.2246664315, 0.2103304267, -0.1993308216, 0.2030956447, 0.0246877912, -0.5783651471, 0.253187716, 0.0518443175, -0.2568005919, -0.171...
https://github.com/huggingface/datasets/issues/3668
Couldn't cast array of type string error with cast_column
Hi @R4ZZ3, If it is not too much of a bother, can you please help me resolve this error? I am getting exactly the same error while following the documentation guideline: `my_audio_dataset = my_audio_dataset.cast_column("audio_paths", Audio())` where `"audio_paths"` is a dataset column (feature) ...
## Describe the bug In OVH cloud during Huggingface Robust-speech-recognition event on a AI training notebook instance using jupyter lab and running jupyter notebook When using the dataset.cast_column("audio",Audio(sampling_rate=16_000)) method I get error ![image](https://user-images.githubusercontent.com/25264...
59
Couldn't cast array of type string error with cast_column ## Describe the bug In OVH cloud during Huggingface Robust-speech-recognition event on a AI training notebook instance using jupyter lab and running jupyter notebook When using the dataset.cast_column("audio",Audio(sampling_rate=16_000)) method I get er...
[ -0.2133601904, -0.2731957436, 0.0802019238, 0.0617938526, 0.5655031204, -0.1390096396, 0.436532706, 0.4131578207, -0.0390901379, 0.0874474719, -0.2246664315, 0.2103304267, -0.1993308216, 0.2030956447, 0.0246877912, -0.5783651471, 0.253187716, 0.0518443175, -0.2568005919, -0.171...
https://github.com/huggingface/datasets/issues/3663
[Audio] Path of Common Voice cannot be used for audio loading anymore
Having talked to @lhoestq, I see that this feature is no longer supported. I really don't think this was a good idea. It is a major breaking change and one for which we don't even have a working solution at the moment, which is bad for PyTorch as we don't want to force people to have `datasets` decode audio files a...
## Describe the bug ## Steps to reproduce the bug ```python from datasets import load_dataset from torchaudio import load ds = load_dataset("common_voice", "ab", split="train") # both of the following commands fail at the moment load(ds[0]["audio"]["path"]) load(ds[0]["path"]) ``` ## Expected results ...
522
[Audio] Path of Common Voice cannot be used for audio loading anymore ## Describe the bug ## Steps to reproduce the bug ```python from datasets import load_dataset from torchaudio import load ds = load_dataset("common_voice", "ab", split="train") # both of the following commands fail at the moment load(d...
[ -0.3595845699, -0.1635094881, 0.1092512086, 0.0924280137, 0.1691350341, -0.2649868131, 0.5073243976, 0.045521304, -0.0868790895, 0.361639291, -0.4557595253, 0.603625834, -0.1872879267, -0.473216027, -0.0141673023, -0.2059758008, -0.132776767, 0.1799086481, -0.2200371027, -0.105...
https://github.com/huggingface/datasets/issues/3663
[Audio] Path of Common Voice cannot be used for audio loading anymore
Agree that we need to have access to the original sound files. A few days ago I was looking for these original files because I suspected there is a bug in the audio resampling (confirmed in https://github.com/huggingface/datasets/issues/3662) and I wanted to do my own resampling to work around the bug, which is now not possib...
## Describe the bug ## Steps to reproduce the bug ```python from datasets import load_dataset from torchaudio import load ds = load_dataset("common_voice", "ab", split="train") # both of the following commands fail at the moment load(ds[0]["audio"]["path"]) load(ds[0]["path"]) ``` ## Expected results ...
61
[Audio] Path of Common Voice cannot be used for audio loading anymore ## Describe the bug ## Steps to reproduce the bug ```python from datasets import load_dataset from torchaudio import load ds = load_dataset("common_voice", "ab", split="train") # both of the following commands fail at the moment load(d...
[ -0.223546952, -0.2508076131, 0.05817727, 0.3293782771, 0.1734077185, -0.2104955465, 0.2099265903, 0.1210181266, -0.1521227211, 0.3673792481, -0.6560894847, 0.3635477126, -0.1372639686, -0.4980188608, 0.088632971, -0.1008796915, -0.1105357111, 0.1237626299, -0.3077791035, -0.199...
https://github.com/huggingface/datasets/issues/3663
[Audio] Path of Common Voice cannot be used for audio loading anymore
@patrickvonplaten > The other solution of providing a path-like object derived from the bytes stocked in the .array file is not nearly as user-friendly, but better than nothing Just to clarify, here you describe the approach that uses the `Audio.decode` attribute to access the underlying bytes? > The official e...
## Describe the bug ## Steps to reproduce the bug ```python from datasets import load_dataset from torchaudio import load ds = load_dataset("common_voice", "ab", split="train") # both of the following commands fail at the moment load(ds[0]["audio"]["path"]) load(ds[0]["path"]) ``` ## Expected results ...
180
[Audio] Path of Common Voice cannot be used for audio loading anymore ## Describe the bug ## Steps to reproduce the bug ```python from datasets import load_dataset from torchaudio import load ds = load_dataset("common_voice", "ab", split="train") # both of the following commands fail at the moment load(d...
[ -0.2384494841, -0.2499229908, -0.0043612858, 0.2015871257, 0.3130870461, -0.2325712591, 0.3949717879, 0.1109661087, -0.064451009, 0.4705465734, -0.3877643347, 0.6395121217, -0.213687554, -0.4975000322, -0.006227457, -0.0945753008, -0.0755985528, 0.1740210056, -0.114512749, -0.1...
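A small sketch of the `Audio.decode` approach discussed in this thread: with `decode=False` (available in recent `datasets` versions) the feature returns the raw `path`/`bytes` instead of a decoded array, so a library like `torchaudio` can do its own decoding. Shown as an illustration, not the resolution the thread eventually adopted.

```python
from datasets import load_dataset, Audio

ds = load_dataset("common_voice", "ab", split="train")

# disable on-the-fly decoding to keep access to the underlying path/bytes
ds = ds.cast_column("audio", Audio(decode=False))

sample = ds[0]["audio"]
print(sample["path"])                # original file path, when available
print(len(sample["bytes"] or b""))   # raw audio bytes, when available
```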
https://github.com/huggingface/datasets/issues/3663
[Audio] Path of Common Voice cannot be used for audio loading anymore
Related to this discussion: in https://github.com/huggingface/datasets/pull/3664#issuecomment-1031866858 I propose how we could change `iter_archive` to work for streaming and also return local paths (as it used to!). I'd love your opinions on this
## Describe the bug ## Steps to reproduce the bug ```python from datasets import load_dataset from torchaudio import load ds = load_dataset("common_voice", "ab", split="train") # both of the following commands fail at the moment load(ds[0]["audio"]["path"]) load(ds[0]["path"]) ``` ## Expected results ...
33
[Audio] Path of Common Voice cannot be used for audio loading anymore ## Describe the bug ## Steps to reproduce the bug ```python from datasets import load_dataset from torchaudio import load ds = load_dataset("common_voice", "ab", split="train") # both of the following commands fail at the moment load(d...
[ -0.4006457627, -0.1878706217, 0.1284465492, 0.3023985028, 0.1465083212, -0.3337064981, 0.3811364472, 0.1097427011, 0.0484992489, 0.2912321091, -0.4721405208, 0.4858804047, -0.2812137902, -0.2401287556, -0.0488462523, -0.1721184999, -0.1430081129, 0.2351565212, -0.2137909979, -0...
https://github.com/huggingface/datasets/issues/3663
[Audio] Path of Common Voice cannot be used for audio loading anymore
> @patrickvonplaten > > > The other solution of providing a path-like object derived from the bytes stocked in the .array file is not nearly as user-friendly, but better than nothing > > Just to clarify, here you describe the approach that uses the `Audio.decode` attribute to access the underlying bytes? Yes! ...
## Describe the bug ## Steps to reproduce the bug ```python from datasets import load_dataset from torchaudio import load ds = load_dataset("common_voice", "ab", split="train") # both of the following commands fail at the moment load(ds[0]["audio"]["path"]) load(ds[0]["path"]) ``` ## Expected results ...
436
[Audio] Path of Common Voice cannot be used for audio loading anymore ## Describe the bug ## Steps to reproduce the bug ```python from datasets import load_dataset from torchaudio import load ds = load_dataset("common_voice", "ab", split="train") # both of the following commands fail at the moment load(d...
[ -0.2332063019, -0.2523464262, -0.0081091225, 0.198170796, 0.3064035177, -0.2232055962, 0.3953643143, 0.1080420464, -0.0817282945, 0.4623280466, -0.3875495791, 0.649338007, -0.1950621903, -0.5039452314, -0.0033025276, -0.0967946947, -0.076763548, 0.182916835, -0.1241675019, -0.1...
https://github.com/huggingface/datasets/issues/3663
[Audio] Path of Common Voice cannot be used for audio loading anymore
From https://github.com/huggingface/datasets/pull/3736 the Common Voice dataset now gives access to the local audio files as before
## Describe the bug ## Steps to reproduce the bug ```python from datasets import load_dataset from torchaudio import load ds = load_dataset("common_voice", "ab", split="train") # both of the following commands fail at the moment load(ds[0]["audio"]["path"]) load(ds[0]["path"]) ``` ## Expected results ...
16
[Audio] Path of Common Voice cannot be used for audio loading anymore ## Describe the bug ## Steps to reproduce the bug ```python from datasets import load_dataset from torchaudio import load ds = load_dataset("common_voice", "ab", split="train") # both of the following commands fail at the moment load(d...
[ -0.3010433912, -0.3545651436, 0.0553561524, 0.2282626331, 0.2375388891, -0.1576320231, 0.398414731, 0.0913803801, -0.0292463377, 0.413230598, -0.5055302382, 0.4944496453, -0.0978792682, -0.3798796237, -0.0493435375, -0.1475427896, -0.1041096151, 0.1031630784, -0.1670285612, -0....
https://github.com/huggingface/datasets/issues/3663
[Audio] Path of Common Voice cannot be used for audio loading anymore
I understand the argument that it is bad to have a breaking change. How to deal with the introduction of breaking changes is a topic of its own, and I'm not sure how you want to deal with that (or is the policy that this is never allowed, and there must be a `load_dataset_v2` or so if you really want to introduce a breaking chan...
## Describe the bug ## Steps to reproduce the bug ```python from datasets import load_dataset from torchaudio import load ds = load_dataset("common_voice", "ab", split="train") # both of the following commands fail at the moment load(ds[0]["audio"]["path"]) load(ds[0]["path"]) ``` ## Expected results ...
336
[Audio] Path of Common Voice cannot be used for audio loading anymore ## Describe the bug ## Steps to reproduce the bug ```python from datasets import load_dataset from torchaudio import load ds = load_dataset("common_voice", "ab", split="train") # both of the following commands fail at the moment load(d...
[ -0.4296576679, -0.1828013211, 0.0187264718, 0.0162173491, 0.1277173012, -0.2241308838, 0.3338643312, 0.1378044784, -0.1267743111, 0.3225070238, -0.3179305792, 0.4960876107, -0.1752758026, -0.5265328288, -0.1752555221, -0.1240886301, 0.0412429906, 0.1780788004, -0.1738434881, -0...
https://github.com/huggingface/datasets/issues/3663
[Audio] Path of Common Voice cannot be used for audio loading anymore
Thanks for your comments here @albertz - cool to get your input! Answering a bit here between the lines: > I understand the argument that it is bad to have a breaking change. How to deal with the introduction of breaking changes is a topic of its own and not sure how you want to deal with that (or is the policy ...
## Describe the bug ## Steps to reproduce the bug ```python from datasets import load_dataset from torchaudio import load ds = load_dataset("common_voice", "ab", split="train") # both of the following commands fail at the moment load(ds[0]["audio"]["path"]) load(ds[0]["path"]) ``` ## Expected results ...
561
[Audio] Path of Common Voice cannot be used for audio loading anymore ## Describe the bug ## Steps to reproduce the bug ```python from datasets import load_dataset from torchaudio import load ds = load_dataset("common_voice", "ab", split="train") # both of the following commands fail at the moment load(d...
[ -0.3888449073, -0.2048083842, -0.0107918447, 0.0832692236, 0.1475567073, -0.2603466809, 0.3531104624, 0.1652146578, -0.149776876, 0.3146067262, -0.321033597, 0.4920407534, -0.1756598055, -0.5001469851, -0.1695511192, -0.1574175805, 0.0101251258, 0.1609356403, -0.1775410771, -0....
https://github.com/huggingface/datasets/issues/3663
[Audio] Path of Common Voice cannot be used for audio loading anymore
> The problem with decoding on the fly is that we currently rely on `torchaudio` for this now which relies on `torch` which is not necessarily something people would like to install when using `tensorflow` or `flax`. Therefore we cannot just rely on people using the decoding on the fly method. We just didn't find a lib...
## Describe the bug ## Steps to reproduce the bug ```python from datasets import load_dataset from torchaudio import load ds = load_dataset("common_voice", "ab", split="train") # both of the following commands fail at the moment load(ds[0]["audio"]["path"]) load(ds[0]["path"]) ``` ## Expected results ...
435
[Audio] Path of Common Voice cannot be used for audio loading anymore ## Describe the bug ## Steps to reproduce the bug ```python from datasets import load_dataset from torchaudio import load ds = load_dataset("common_voice", "ab", split="train") # both of the following commands fail at the moment load(d...
[ -0.2726772428, -0.2887236774, 0.046659857, 0.1798033714, 0.3206547201, -0.1839729697, 0.3868455589, 0.2177908868, -0.0942911953, 0.493417412, -0.4398248792, 0.5807744861, -0.0999841765, -0.3980669081, -0.0435631797, -0.2208977491, -0.1139942482, 0.1357062757, -0.0746749267, -0....
https://github.com/huggingface/datasets/issues/3663
[Audio] Path of Common Voice cannot be used for audio loading anymore
> In https://github.com/huggingface/datasets/pull/4184#issuecomment-1105191639, you said/proposed that this map is not needed anymore and save_to_disk could do it automatically (maybe via some option)? Yes! Should be super easy now see discussion here: https://github.com/rwth-i6/i6_core/issues/257#issuecomment-11054...
## Describe the bug ## Steps to reproduce the bug ```python from datasets import load_dataset from torchaudio import load ds = load_dataset("common_voice", "ab", split="train") # both of the following commands fail at the moment load(ds[0]["audio"]["path"]) load(ds[0]["path"]) ``` ## Expected results ...
39
[Audio] Path of Common Voice cannot be used for audio loading anymore ## Describe the bug ## Steps to reproduce the bug ```python from datasets import load_dataset from torchaudio import load ds = load_dataset("common_voice", "ab", split="train") # both of the following commands fail at the moment load(d...
[ -0.3018416464, -0.3316687346, 0.0994555578, 0.2540639341, 0.2378304303, -0.056966491, 0.3418720067, 0.1687907577, 0.1279357821, 0.3722269833, -0.4818680584, 0.5557988286, -0.1712995917, -0.3390398026, -0.0467872694, -0.0917301998, -0.042958308, 0.1105744019, -0.1979098022, -0.1...
https://github.com/huggingface/datasets/issues/3662
[Audio] MP3 resampling is incorrect when dataset's audio files have different sampling rates
Thanks @lhoestq for finding the reason of incorrect resampling. This issue affects all languages which have sound files with different sampling rates such as Turkish and Luganda.
The Audio feature resampler for MP3 gets stuck with the first original frequencies it meets, which leads to subsequent decoding to be incorrect. Here is a code to reproduce the issue: Let's first consider two audio files with different sampling rates 32000 and 16000: ```python # first download a mp3 file with s...
27
[Audio] MP3 resampling is incorrect when dataset's audio files have different sampling rates The Audio feature resampler for MP3 gets stuck with the first original frequencies it meets, which leads to subsequent decoding to be incorrect. Here is a code to reproduce the issue: Let's first consider two audio file...
[ -0.1460943371, 0.0788260102, -0.0020535996, 0.2036607414, 0.1585873067, -0.1734844446, -0.1209069043, 0.0837717876, -0.3985755146, 0.2145069987, -0.4509890676, 0.3006875515, 0.1822164655, -0.4004625976, -0.622135818, -0.1660757512, -0.0155577129, 0.1286702156, -0.2838036716, -0...
https://github.com/huggingface/datasets/issues/3662
[Audio] MP3 resampling is incorrect when dataset's audio files have different sampling rates
@cahya-wirawan - do you know how many languages have different sampling rates in Common Voice? I'm quite surprised to see this for multiple languages actually
The Audio feature resampler for MP3 gets stuck with the first original frequencies it meets, which leads to subsequent decoding to be incorrect. Here is a code to reproduce the issue: Let's first consider two audio files with different sampling rates 32000 and 16000: ```python # first download a mp3 file with s...
25
[Audio] MP3 resampling is incorrect when dataset's audio files have different sampling rates The Audio feature resampler for MP3 gets stuck with the first original frequencies it meets, which leads to subsequent decoding to be incorrect. Here is a code to reproduce the issue: Let's first consider two audio file...
[ -0.1460943371, 0.0788260102, -0.0020535996, 0.2036607414, 0.1585873067, -0.1734844446, -0.1209069043, 0.0837717876, -0.3985755146, 0.2145069987, -0.4509890676, 0.3006875515, 0.1822164655, -0.4004625976, -0.622135818, -0.1660757512, -0.0155577129, 0.1286702156, -0.2838036716, -0...
https://github.com/huggingface/datasets/issues/3662
[Audio] MP3 resampling is incorrect when dataset's audio files have different sampling rates
@cahya-wirawan, I can reproduce the problem for Common Voice 7 for Turkish. Here a script you can use: ```python #!/usr/bin/env python3 from datasets import load_dataset import torchaudio from io import BytesIO from datasets import Audio from collections import Counter import sys ds_name = str(sys.argv[1...
The Audio feature resampler for MP3 gets stuck with the first original frequencies it meets, which leads to subsequent decoding to be incorrect. Here is a code to reproduce the issue: Let's first consider two audio files with different sampling rates 32000 and 16000: ```python # first download a mp3 file with s...
92
[Audio] MP3 resampling is incorrect when dataset's audio files have different sampling rates The Audio feature resampler for MP3 gets stuck with the first original frequencies it meets, which leads to subsequent decoding to be incorrect. Here is a code to reproduce the issue: Let's first consider two audio file...
[ -0.1460943371, 0.0788260102, -0.0020535996, 0.2036607414, 0.1585873067, -0.1734844446, -0.1209069043, 0.0837717876, -0.3985755146, 0.2145069987, -0.4509890676, 0.3006875515, 0.1822164655, -0.4004625976, -0.622135818, -0.1660757512, -0.0155577129, 0.1286702156, -0.2838036716, -0...
https://github.com/huggingface/datasets/issues/3662
[Audio] MP3 resampling is incorrect when dataset's audio files have different sampling rates
It actually shows that many more samples are in 32kHz format than in 48kHz, which is unexpected. Thanks a lot for flagging! Will contact Common Voice about this as well
The Audio feature resampler for MP3 gets stuck with the first original frequencies it meets, which leads to subsequent decoding to be incorrect. Here is a code to reproduce the issue: Let's first consider two audio files with different sampling rates 32000 and 16000: ```python # first download a mp3 file with s...
30
[Audio] MP3 resampling is incorrect when dataset's audio files have different sampling rates The Audio feature resampler for MP3 gets stuck with the first original frequencies it meets, which leads to subsequent decoding to be incorrect. Here is a code to reproduce the issue: Let's first consider two audio file...
[ -0.1460943371, 0.0788260102, -0.0020535996, 0.2036607414, 0.1585873067, -0.1734844446, -0.1209069043, 0.0837717876, -0.3985755146, 0.2145069987, -0.4509890676, 0.3006875515, 0.1822164655, -0.4004625976, -0.622135818, -0.1660757512, -0.0155577129, 0.1286702156, -0.2838036716, -0...
https://github.com/huggingface/datasets/issues/3662
[Audio] MP3 resampling is incorrect when dataset's audio files have different sampling rates
I only checked the CV 7.0 for Turkish, Luganda and Indonesian; they have audio files with different sampling rates, and all of them are affected by this issue. The percentages of incorrect resampling are as follows: Turkish: 9.1%, Luganda: 88.2% and Indonesian: 64.1%. I checked it using the original CV files. I check the origi...
The Audio feature resampler for MP3 gets stuck with the first original frequencies it meets, which leads to subsequent decoding to be incorrect. Here is a code to reproduce the issue: Let's first consider two audio files with different sampling rates 32000 and 16000: ```python # first download a mp3 file with s...
113
[Audio] MP3 resampling is incorrect when dataset's audio files have different sampling rates The Audio feature resampler for MP3 gets stuck with the first original frequencies it meets, which leads to subsequent decoding to be incorrect. Here is a code to reproduce the issue: Let's first consider two audio file...
[ -0.1460943371, 0.0788260102, -0.0020535996, 0.2036607414, 0.1585873067, -0.1734844446, -0.1209069043, 0.0837717876, -0.3985755146, 0.2145069987, -0.4509890676, 0.3006875515, 0.1822164655, -0.4004625976, -0.622135818, -0.1660757512, -0.0155577129, 0.1286702156, -0.2838036716, -0...
https://github.com/huggingface/datasets/issues/3662
[Audio] MP3 resampling is incorrect when dataset's audio files have different sampling rates
Ok wow, thanks a lot for checking this - you've found a pretty big bug :sweat_smile: It seems like **a lot** more datasets are actually affected than I originally thought. We'll try to solve this as soon as possible and make an announcement tomorrow.
The Audio feature resampler for MP3 gets stuck with the first original frequencies it meets, which leads to subsequent decoding to be incorrect. Here is a code to reproduce the issue: Let's first consider two audio files with different sampling rates 32000 and 16000: ```python # first download a mp3 file with s...
44
[Audio] MP3 resampling is incorrect when dataset's audio files have different sampling rates The Audio feature resampler for MP3 gets stuck with the first original frequencies it meets, which leads to subsequent decoding to be incorrect. Here is a code to reproduce the issue: Let's first consider two audio file...
[ -0.1460943371, 0.0788260102, -0.0020535996, 0.2036607414, 0.1585873067, -0.1734844446, -0.1209069043, 0.0837717876, -0.3985755146, 0.2145069987, -0.4509890676, 0.3006875515, 0.1822164655, -0.4004625976, -0.622135818, -0.1660757512, -0.0155577129, 0.1286702156, -0.2838036716, -0...
https://github.com/huggingface/datasets/issues/3659
push_to_hub but preview not working
Hi @thomas-happify, please note that the preview may take some time before rendering the data. I've seen that it is already working, so I'm closing this issue. Please feel free to reopen it if the problem arises again.
## Dataset viewer issue for '*happifyhealth/twitter_pnn*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/happifyhealth/twitter_pnn)* I used ``` dataset.push_to_hub("happifyhealth/twitter_pnn") ``` but the preview is not working. Am I the one who added this dataset ? Yes
36
push_to_hub but preview not working ## Dataset viewer issue for '*happifyhealth/twitter_pnn*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/happifyhealth/twitter_pnn)* I used ``` dataset.push_to_hub("happifyhealth/twitter_pnn") ``` but the preview is not working. Am I the ...
[ -0.1996080726, -0.6430014968, 0.0241144355, 0.0323767886, -0.050666552, 0.0509806834, 0.266633153, 0.1995358914, 0.2861259878, 0.1242960989, -0.081287384, 0.1542447805, -0.1040724963, 0.2985954881, 0.3564328849, 0.137171343, 0.3512623012, 0.1522511393, 0.1213507354, -0.06100377...
https://github.com/huggingface/datasets/issues/3658
Dataset viewer issue for *P3*
The error is now: ``` Status code: 400 Exception: Status400Error Message: this dataset is not supported for now. ``` We've disabled the dataset viewer for several big datasets like this one. We hope to be able to re-enable it soon.
## Dataset viewer issue for '*P3*' **Link: https://huggingface.co/datasets/bigscience/P3** ``` Status code: 400 Exception: SplitsNotFoundError Message: The split names could not be parsed from the dataset config. ``` Am I the one who added this dataset ? No
39
Dataset viewer issue for *P3* ## Dataset viewer issue for '*P3*' **Link: https://huggingface.co/datasets/bigscience/P3** ``` Status code: 400 Exception: SplitsNotFoundError Message: The split names could not be parsed from the dataset config. ``` Am I the one who added this dataset ? No The...
[ -0.3627389371, -0.2553356886, -0.0034183236, 0.2525382042, 0.0948752537, 0.0368077792, 0.1684093922, 0.4760468304, -0.0547437407, 0.257822305, -0.3304268718, 0.4156095982, -0.0459340438, 0.3703284264, -0.1312036514, -0.2816922069, 0.0096168621, 0.1861492097, 0.1762863547, 0.209...
https://github.com/huggingface/datasets/issues/3656
checksum error subjqa dataset
Hi @RensDimmendaal, I'm sorry but I can't reproduce your bug: ```python In [1]: from datasets import load_dataset ...: ds = load_dataset("subjqa", "electronics") Downloading builder script: 9.15kB [00:00, 4.10MB/s] ...
## Describe the bug I get a checksum error when loading the `subjqa` dataset (used in the transformers book). ## Steps to reproduce the bug ```python from datasets import load_dataset subjqa = load_dataset("subjqa","electronics") ``` ## Expected results Loading the dataset ## Actual results ``` ---...
174
checksum error subjqa dataset ## Describe the bug I get a checksum error when loading the `subjqa` dataset (used in the transformers book). ## Steps to reproduce the bug ```python from datasets import load_dataset subjqa = load_dataset("subjqa","electronics") ``` ## Expected results Loading the dataset ...
[ -0.1055608839, 0.197503075, 0.0432933308, 0.2761413455, 0.1469845176, 0.0702437833, 0.4774221778, 0.2563091815, 0.0278039947, 0.1766105443, -0.1053530648, 0.1167671308, 0.0757053718, 0.0232625119, -0.0491603985, -0.0322394222, 0.1780377179, 0.0984125584, -0.1423467547, 0.086216...
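When a checksum error like the one in this issue comes from a stale or corrupted cache, forcing a fresh download is a common first step. A hedged sketch only: the parameter also accepts an enum, and older releases used `GenerateMode` instead of `DownloadMode`.

```python
from datasets import load_dataset

# re-download instead of reusing a possibly stale cached copy
subjqa = load_dataset("subjqa", "electronics", download_mode="force_redownload")
```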
https://github.com/huggingface/datasets/issues/3656
checksum error subjqa dataset
Thanks for checking! You're totally right. I don't know what's changed, but I'm glad it's working now!
## Describe the bug I get a checksum error when loading the `subjqa` dataset (used in the transformers book). ## Steps to reproduce the bug ```python from datasets import load_dataset subjqa = load_dataset("subjqa","electronics") ``` ## Expected results Loading the dataset ## Actual results ``` ---...
16
checksum error subjqa dataset ## Describe the bug I get a checksum error when loading the `subjqa` dataset (used in the transformers book). ## Steps to reproduce the bug ```python from datasets import load_dataset subjqa = load_dataset("subjqa","electronics") ``` ## Expected results Loading the dataset ...
[ -0.1055608839, 0.197503075, 0.0432933308, 0.2761413455, 0.1469845176, 0.0702437833, 0.4774221778, 0.2563091815, 0.0278039947, 0.1766105443, -0.1053530648, 0.1167671308, 0.0757053718, 0.0232625119, -0.0491603985, -0.0322394222, 0.1780377179, 0.0984125584, -0.1423467547, 0.086216...
https://github.com/huggingface/datasets/issues/3655
Pubmed dataset not reachable
Hey @albertvillanova, sorry to reopen this... I can confirm that on `master` branch the dataset is downloadable now but it is still broken in streaming mode: ```python >>> import datasets >>> pubmed_train = datasets.load_dataset('pubmed', split='train', streaming=True) >>> next(iter(pubmed_train)) ``` ``` ...
## Describe the bug Trying to use the `pubmed` dataset fails to reach / download the source files. ## Steps to reproduce the bug ```python pubmed_train = datasets.load_dataset('pubmed', split='train') ``` ## Expected results Should begin downloading the pubmed dataset. ## Actual results ``` ConnectionEr...
47
Pubmed dataset not reachable ## Describe the bug Trying to use the `pubmed` dataset fails to reach / download the source files. ## Steps to reproduce the bug ```python pubmed_train = datasets.load_dataset('pubmed', split='train') ``` ## Expected results Should begin downloading the pubmed dataset. ## Ac...
[ -0.2074099481, 0.1169083342, 0.0022661486, 0.0530288666, 0.3004156053, 0.0551192351, 0.2131823301, 0.3712816238, -0.0309725199, 0.1522594094, 0.0972943529, 0.1235697567, -0.0315114409, -0.0394265056, 0.0356219187, -0.2128066719, 0.0310951956, 0.1027326286, -0.1332086027, -0.025...
https://github.com/huggingface/datasets/issues/3655
Pubmed dataset not reachable
Hi @abhi-mosaic, would you mind opening another issue for this new problem? The first issue (already solved) was a ConnectionError due to the yearly update release of PubMed: we fixed it by updating the URLs from year 2021 to year 2022. However, this is a different problem: making pubmed streamable. Please note that NOT ...
## Describe the bug Trying to use the `pubmed` dataset fails to reach / download the source files. ## Steps to reproduce the bug ```python pubmed_train = datasets.load_dataset('pubmed', split='train') ``` ## Expected results Should begin downloading the pubmed dataset. ## Actual results ``` ConnectionEr...
74
Pubmed dataset not reachable ## Describe the bug Trying to use the `pubmed` dataset fails to reach / download the source files. ## Steps to reproduce the bug ```python pubmed_train = datasets.load_dataset('pubmed', split='train') ``` ## Expected results Should begin downloading the pubmed dataset. ## Ac...
[ -0.1536322087, 0.2288036197, 0.0192875322, -0.0351097062, 0.3287482262, -0.0340981595, 0.2292309105, 0.3855732679, -0.0587706454, 0.1195991635, 0.1655028313, 0.1151799485, 0.0343651958, -0.0607217997, 0.0925409943, -0.2345780879, 0.1303253174, -0.0077871601, -0.1314569712, -0.0...
https://github.com/huggingface/datasets/issues/3644
Add a GROUP BY operator
Hi! At the moment you can use `to_pandas()` to get a pandas DataFrame that supports `group_by` operations (make sure your dataset fits in memory though). We use Arrow as a back-end for `datasets` and it doesn't have a native group by (see https://github.com/apache/arrow/issues/2189), unfortunately. I just drafted wh...
**Is your feature request related to a problem? Please describe.** Using batch mapping, we can easily split examples. However, we lack an appropriate option for merging them back together by some key. Consider this example: ```python # features: # { # "example_id": datasets.Value("int32"), # "text": datas...
271
Add a GROUP BY operator **Is your feature request related to a problem? Please describe.** Using batch mapping, we can easily split examples. However, we lack an appropriate option for merging them back together by some key. Consider this example: ```python # features: # { # "example_id": datasets.Value("in...
[ -0.3625958264, -0.015419689, -0.1482399404, -0.2767421007, -0.306969136, 0.2242254764, 0.212027669, 0.2932225466, -0.015143075, 0.0763395429, -0.047383897, 0.4462049901, -0.1421684325, 0.3755691648, -0.0427690893, -0.2454422712, -0.0339607187, 0.3902627826, 0.0945014656, 0.1445...
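A minimal sketch of the `to_pandas()` group-by suggestion from the comment above, using a made-up two-column dataset; it only works when the dataset fits in memory.

```python
from datasets import Dataset

ds = Dataset.from_dict({"example_id": [0, 0, 1], "text": ["a", "b", "c"]})

# group on the pandas side, then rebuild a Dataset from the result
df = ds.to_pandas()
grouped = df.groupby("example_id")["text"].apply(list).reset_index()
ds_grouped = Dataset.from_pandas(grouped)

print(ds_grouped[0])  # {'example_id': 0, 'text': ['a', 'b']}
```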
https://github.com/huggingface/datasets/issues/3644
Add a GROUP BY operator
@lhoestq As of PyArrow 7.0.0, `pa.Table` has the [`group_by` method](https://arrow.apache.org/docs/python/generated/pyarrow.Table.html#pyarrow.Table.group_by), so we should also consider using that function for grouping.
**Is your feature request related to a problem? Please describe.** Using batch mapping, we can easily split examples. However, we lack an appropriate option for merging them back together by some key. Consider this example: ```python # features: # { # "example_id": datasets.Value("int32"), # "text": datas...
20
Add a GROUP BY operator **Is your feature request related to a problem? Please describe.** Using batch mapping, we can easily split examples. However, we lack an appropriate option for merging them back together by some key. Consider this example: ```python # features: # { # "example_id": datasets.Value("in...
[ -0.3625958264, -0.015419689, -0.1482399404, -0.2767421007, -0.306969136, 0.2242254764, 0.212027669, 0.2932225466, -0.015143075, 0.0763395429, -0.047383897, 0.4462049901, -0.1421684325, 0.3755691648, -0.0427690893, -0.2454422712, -0.0339607187, 0.3902627826, 0.0945014656, 0.1445...
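An illustration of the native `pyarrow.Table.group_by` mentioned above (requires PyArrow >= 7.0.0), on a toy table; the aggregate columns follow PyArrow's `<column>_<function>` naming convention.

```python
import pyarrow as pa

table = pa.table({"example_id": [0, 0, 1], "length": [3, 5, 2]})

# group by a key column and aggregate another one
result = table.group_by("example_id").aggregate([("length", "sum")])
print(result.to_pydict())  # {'length_sum': [8, 2], 'example_id': [0, 1]}
```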
https://github.com/huggingface/datasets/issues/3639
same value of precision, recall, f1 score at each epoch for classification task.
Hi @Dhanachandra, We have tests for all our metrics and they work as expected: under the hood, we use scikit-learn implementations. Maybe the cause is somewhere else. For example: - Is it a binary, a multiclass, or a multilabel classification? The default computation of these metrics is for binary classification; ...
**1st Epoch:** 1/27/2022 09:30:48 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/f1/default/default_experiment-1-0.arrow.59it/s] 01/27/2022 09:30:48 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/precision/default/default_experiment-1-0.arrow 01/27/2022 09:3...
398
same value of precision, recall, f1 score at each epoch for classification task. **1st Epoch:** 1/27/2022 09:30:48 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/f1/default/default_experiment-1-0.arrow.59it/s] 01/27/2022 09:30:48 - INFO - datasets.metric - Removing /home/ubuntu/.cache...
[ -0.3630811572, -0.5349163413, -0.1348887831, 0.3408069611, 0.3478321135, -0.115978837, 0.0651125535, 0.0968724936, 0.1133211255, 0.360712409, -0.2553243041, 0.0797411352, -0.0859257504, 0.1978702992, -0.3468482494, -0.0864400268, -0.1220595092, 0.0709546506, -0.1726451367, -0.3...
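To illustrate the first point in the comment above (the default computation is for binary classification), an explicit average can be passed for multiclass problems; a small sketch with made-up labels, using the `load_metric` API available at the time.

```python
from datasets import load_metric

f1 = load_metric("f1")

# multiclass labels require an explicit average, otherwise binary is assumed
result = f1.compute(
    predictions=[0, 2, 1, 1],
    references=[0, 1, 1, 2],
    average="macro",
)
print(result)  # {'f1': 0.5}
```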
https://github.com/huggingface/datasets/issues/3638
AutoTokenizer hash value got change after datasets.map
This issue was originally reported at https://github.com/huggingface/transformers/issues/14931 and it seems like this issue also occurs with other AutoClasses like AutoFeatureExtractor.
## Describe the bug AutoTokenizer hash value got change after datasets.map ## Steps to reproduce the bug 1. trash huggingface datasets cache 2. run the following code: ```python from transformers import AutoTokenizer, BertTokenizer from datasets import load_dataset from datasets.fingerprint import Hasher tok...
20
AutoTokenizer hash value got change after datasets.map ## Describe the bug AutoTokenizer hash value got change after datasets.map ## Steps to reproduce the bug 1. trash huggingface datasets cache 2. run the following code: ```python from transformers import AutoTokenizer, BertTokenizer from datasets import l...
[ -0.1290391535, -0.2237773985, 0.0150259212, 0.3089294434, 0.1342907846, -0.1396988183, 0.2722432613, -0.021370586, 0.1214970127, 0.1917950958, -0.1113676727, 0.4286392033, -0.0865155905, -0.1164641976, -0.0932685584, 0.2705231905, 0.1173470244, 0.2221373767, -0.1098648906, -0.1...
https://github.com/huggingface/datasets/issues/3638
AutoTokenizer hash value got change after datasets.map
Thanks for moving the issue here ! I wasn't able to reproduce the issue on my env (the hashes stay the same): ``` - `transformers` version: 1.15.0 - `tokenizers` version: 0.10.3 - `datasets` version: 1.18.1 - `dill` version: 0.3.4 - Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.11 - Python versi...
## Describe the bug AutoTokenizer hash value got change after datasets.map ## Steps to reproduce the bug 1. trash huggingface datasets cache 2. run the following code: ```python from transformers import AutoTokenizer, BertTokenizer from datasets import load_dataset from datasets.fingerprint import Hasher tok...
106
AutoTokenizer hash value got change after datasets.map ## Describe the bug AutoTokenizer hash value got change after datasets.map ## Steps to reproduce the bug 1. trash huggingface datasets cache 2. run the following code: ```python from transformers import AutoTokenizer, BertTokenizer from datasets import l...
[ -0.1290391535, -0.2237773985, 0.0150259212, 0.3089294434, 0.1342907846, -0.1396988183, 0.2722432613, -0.021370586, 0.1214970127, 0.1917950958, -0.1113676727, 0.4286392033, -0.0865155905, -0.1164641976, -0.0932685584, 0.2705231905, 0.1173470244, 0.2221373767, -0.1098648906, -0.1...
https://github.com/huggingface/datasets/issues/3638
AutoTokenizer hash value got change after datasets.map
I found the issue: the tokenizer has something inside it that changes. Before the call, `tokenizer._tokenizer.truncation` is None, and after the call it changes to this for some reason: ``` {'max_length': 512, 'strategy': 'longest_first', 'stride': 0} ``` Does anybody know why calling the tokenizer would chang...
## Describe the bug AutoTokenizer hash value got change after datasets.map ## Steps to reproduce the bug 1. trash huggingface datasets cache 2. run the following code: ```python from transformers import AutoTokenizer, BertTokenizer from datasets import load_dataset from datasets.fingerprint import Hasher tok...
56
AutoTokenizer hash value got change after datasets.map ## Describe the bug AutoTokenizer hash value got change after datasets.map ## Steps to reproduce the bug 1. trash huggingface datasets cache 2. run the following code: ```python from transformers import AutoTokenizer, BertTokenizer from datasets import l...
[ -0.1290391535, -0.2237773985, 0.0150259212, 0.3089294434, 0.1342907846, -0.1396988183, 0.2722432613, -0.021370586, 0.1214970127, 0.1917950958, -0.1113676727, 0.4286392033, -0.0865155905, -0.1164641976, -0.0932685584, 0.2705231905, 0.1173470244, 0.2221373767, -0.1098648906, -0.1...
https://github.com/huggingface/datasets/issues/3638
AutoTokenizer hash value got change after datasets.map
`tokenizer.encode(..)` does not accept arguments like `max_length`, `strategy` or `stride`. In `tokenizers` you have to modify the tokenizer state by setting various `TruncationParams` (and/or `PaddingParams`). However, since this is modifying the state, you need to mutably borrow the tokenizer (a Rust concept). The key p...
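For readers unfamiliar with the lower-level `tokenizers` library, a rough sketch of what that stateful API looks like from Python; the word-level vocabulary below is made up purely for illustration.

```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel

# Toy word-level tokenizer, just to show the stateful truncation API.
tok = Tokenizer(WordLevel({"[UNK]": 0, "hello": 1, "world": 2}, unk_token="[UNK]"))

print(tok.truncation)                  # None until truncation is enabled
tok.enable_truncation(max_length=512)  # mutates the tokenizer's own state
print(tok.truncation)                  # now a dict with max_length / strategy / stride
```

Truncation is configured on the tokenizer object itself rather than passed per call, which is why the object's state (and therefore its hash) changes.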
## Describe the bug AutoTokenizer hash value got change after datasets.map ## Steps to reproduce the bug 1. trash huggingface datasets cache 2. run the following code: ```python from transformers import AutoTokenizer, BertTokenizer from datasets import load_dataset from datasets.fingerprint import Hasher tok...
271
AutoTokenizer hash value got change after datasets.map ## Describe the bug AutoTokenizer hash value got change after datasets.map ## Steps to reproduce the bug 1. trash huggingface datasets cache 2. run the following code: ```python from transformers import AutoTokenizer, BertTokenizer from datasets import l...
[ -0.1290391535, -0.2237773985, 0.0150259212, 0.3089294434, 0.1342907846, -0.1396988183, 0.2722432613, -0.021370586, 0.1214970127, 0.1917950958, -0.1113676727, 0.4286392033, -0.0865155905, -0.1164641976, -0.0932685584, 0.2705231905, 0.1173470244, 0.2221373767, -0.1098648906, -0.1...
https://github.com/huggingface/datasets/issues/3638
AutoTokenizer hash value got change after datasets.map
Thanks a lot for the explanation! I think if we set these 2 dicts at initialization time it would already be amazing. Shall we open an issue in `transformers` to ask for these dictionaries to be set when the tokenizer is instantiated? > Edit: Another option would be to override the default hash function, but I ...
## Describe the bug AutoTokenizer hash value got change after datasets.map ## Steps to reproduce the bug 1. trash huggingface datasets cache 2. run the following code: ```python from transformers import AutoTokenizer, BertTokenizer from datasets import load_dataset from datasets.fingerprint import Hasher tok...
127
AutoTokenizer hash value got change after datasets.map ## Describe the bug AutoTokenizer hash value got change after datasets.map ## Steps to reproduce the bug 1. trash huggingface datasets cache 2. run the following code: ```python from transformers import AutoTokenizer, BertTokenizer from datasets import l...
[ -0.1290391535, -0.2237773985, 0.0150259212, 0.3089294434, 0.1342907846, -0.1396988183, 0.2722432613, -0.021370586, 0.1214970127, 0.1917950958, -0.1113676727, 0.4286392033, -0.0865155905, -0.1164641976, -0.0932685584, 0.2705231905, 0.1173470244, 0.2221373767, -0.1098648906, -0.1...
https://github.com/huggingface/datasets/issues/3638
AutoTokenizer hash value got change after datasets.map
A hack we could have in the `datasets` lib would be to call the tokenizer before hashing it in order to set all its parameters correctly - but it sounds a lot like a hack and I'm not sure this can work in the long run
## Describe the bug AutoTokenizer hash value got change after datasets.map ## Steps to reproduce the bug 1. trash huggingface datasets cache 2. run the following code: ```python from transformers import AutoTokenizer, BertTokenizer from datasets import load_dataset from datasets.fingerprint import Hasher tok...
46
AutoTokenizer hash value got change after datasets.map ## Describe the bug AutoTokenizer hash value got change after datasets.map ## Steps to reproduce the bug 1. trash huggingface datasets cache 2. run the following code: ```python from transformers import AutoTokenizer, BertTokenizer from datasets import l...
[ -0.1290391535, -0.2237773985, 0.0150259212, 0.3089294434, 0.1342907846, -0.1396988183, 0.2722432613, -0.021370586, 0.1214970127, 0.1917950958, -0.1113676727, 0.4286392033, -0.0865155905, -0.1164641976, -0.0932685584, 0.2705231905, 0.1173470244, 0.2221373767, -0.1098648906, -0.1...
https://github.com/huggingface/datasets/issues/3638
AutoTokenizer hash value got change after datasets.map
Fully agree with everything you said. I think the best course of action is creating an issue in `transformers`. I can start the work on this. I think the code changes are fairly simple. Making a sound test + not breaking other stuff might be different :D
## Describe the bug AutoTokenizer hash value got change after datasets.map ## Steps to reproduce the bug 1. trash huggingface datasets cache 2. run the following code: ```python from transformers import AutoTokenizer, BertTokenizer from datasets import load_dataset from datasets.fingerprint import Hasher tok...
47
AutoTokenizer hash value got change after datasets.map ## Describe the bug AutoTokenizer hash value got change after datasets.map ## Steps to reproduce the bug 1. trash huggingface datasets cache 2. run the following code: ```python from transformers import AutoTokenizer, BertTokenizer from datasets import l...
[ -0.1290391535, -0.2237773985, 0.0150259212, 0.3089294434, 0.1342907846, -0.1396988183, 0.2722432613, -0.021370586, 0.1214970127, 0.1917950958, -0.1113676727, 0.4286392033, -0.0865155905, -0.1164641976, -0.0932685584, 0.2705231905, 0.1173470244, 0.2221373767, -0.1098648906, -0.1...
https://github.com/huggingface/datasets/issues/3638
AutoTokenizer hash value got change after datasets.map
It should be noted that this problem also occurs in other AutoClasses, such as AutoFeatureExtractor, so I don't think handling it in `datasets` is a long-term solution either.
## Describe the bug AutoTokenizer hash value got change after datasets.map ## Steps to reproduce the bug 1. trash huggingface datasets cache 2. run the following code: ```python from transformers import AutoTokenizer, BertTokenizer from datasets import load_dataset from datasets.fingerprint import Hasher tok...
28
AutoTokenizer hash value got change after datasets.map ## Describe the bug AutoTokenizer hash value got change after datasets.map ## Steps to reproduce the bug 1. trash huggingface datasets cache 2. run the following code: ```python from transformers import AutoTokenizer, BertTokenizer from datasets import l...
[ -0.1290391535, -0.2237773985, 0.0150259212, 0.3089294434, 0.1342907846, -0.1396988183, 0.2722432613, -0.021370586, 0.1214970127, 0.1917950958, -0.1113676727, 0.4286392033, -0.0865155905, -0.1164641976, -0.0932685584, 0.2705231905, 0.1173470244, 0.2221373767, -0.1098648906, -0.1...
https://github.com/huggingface/datasets/issues/3638
AutoTokenizer hash value got change after datasets.map
> I think the best course of action is creating an issue in `transformers`. I can start the work on this. @Narsil Hi, I reopened this issue in `transformers`: https://github.com/huggingface/transformers/issues/14931
## Describe the bug AutoTokenizer hash value got change after datasets.map ## Steps to reproduce the bug 1. trash huggingface datasets cache 2. run the following code: ```python from transformers import AutoTokenizer, BertTokenizer from datasets import load_dataset from datasets.fingerprint import Hasher tok...
30
AutoTokenizer hash value got change after datasets.map ## Describe the bug AutoTokenizer hash value got change after datasets.map ## Steps to reproduce the bug 1. trash huggingface datasets cache 2. run the following code: ```python from transformers import AutoTokenizer, BertTokenizer from datasets import l...
[ -0.1290391535, -0.2237773985, 0.0150259212, 0.3089294434, 0.1342907846, -0.1396988183, 0.2722432613, -0.021370586, 0.1214970127, 0.1917950958, -0.1113676727, 0.4286392033, -0.0865155905, -0.1164641976, -0.0932685584, 0.2705231905, 0.1173470244, 0.2221373767, -0.1098648906, -0.1...
https://github.com/huggingface/datasets/issues/3638
AutoTokenizer hash value got change after datasets.map
Here is @Narsil's comment from https://github.com/huggingface/transformers/issues/14931#issuecomment-1074981569 > # TL;DR > Calling the function once on a dummy example beforehand will fix it. > > ```python > tokenizer("Some", "test", truncation=True) > ``` > > # Long answer > If I remember the last status, it's ...
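Put together, the suggested workaround looks roughly like this; it is a sketch with a placeholder checkpoint and toy dataset, not the original script.

```python
from transformers import AutoTokenizer
from datasets import Dataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder checkpoint
tokenizer("Some", "test", truncation=True)  # warm-up call fixes the truncation/padding state up front

ds = Dataset.from_dict({"text": ["a first sentence", "a second sentence"]})
# The tokenizer's state no longer changes inside map, so its hash -- and the
# resulting cache fingerprint -- stays stable across runs.
ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True), batched=True)
```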
## Describe the bug AutoTokenizer hash value got change after datasets.map ## Steps to reproduce the bug 1. trash huggingface datasets cache 2. run the following code: ```python from transformers import AutoTokenizer, BertTokenizer from datasets import load_dataset from datasets.fingerprint import Hasher tok...
213
AutoTokenizer hash value got change after datasets.map ## Describe the bug AutoTokenizer hash value got change after datasets.map ## Steps to reproduce the bug 1. trash huggingface datasets cache 2. run the following code: ```python from transformers import AutoTokenizer, BertTokenizer from datasets import l...
[ -0.1290391535, -0.2237773985, 0.0150259212, 0.3089294434, 0.1342907846, -0.1396988183, 0.2722432613, -0.021370586, 0.1214970127, 0.1917950958, -0.1113676727, 0.4286392033, -0.0865155905, -0.1164641976, -0.0932685584, 0.2705231905, 0.1173470244, 0.2221373767, -0.1098648906, -0.1...
https://github.com/huggingface/datasets/issues/3637
[TypeError: Couldn't cast array of type] Cannot load dataset in v1.18
Hi @lewtun! This one was tricky to debug. Initially, I thought there was a bug in the recently-added (by @lhoestq) `cast_array_to_feature` function because `git bisect` points to the https://github.com/huggingface/datasets/commit/6ca96c707502e0689f9b58d94f46d871fa5a3c9c commit. Then, I noticed that the feature type ...
## Describe the bug I am trying to load the [`GEM/RiSAWOZ` dataset](https://huggingface.co/datasets/GEM/RiSAWOZ) in `datasets` v1.18.1 and am running into a type error when casting the features. The strange thing is that I can load the dataset with v1.17.0. Note that the error is also present if I install from `master...
197
[TypeError: Couldn't cast array of type] Cannot load dataset in v1.18 ## Describe the bug I am trying to load the [`GEM/RiSAWOZ` dataset](https://huggingface.co/datasets/GEM/RiSAWOZ) in `datasets` v1.18.1 and am running into a type error when casting the features. The strange thing is that I can load the dataset wit...
[ -0.1942696422, -0.4909289777, 0.0600804053, 0.5314733982, 0.4639725685, 0.154937163, 0.2992588878, 0.3703401089, 0.187876448, -0.0143709239, 0.1101715788, 0.5128641725, -0.327683568, 0.1034817025, 0.0563642904, -0.277135551, 0.2133808136, 0.0304359533, -0.1236652806, 0.05714205...
https://github.com/huggingface/datasets/issues/3637
[TypeError: Couldn't cast array of type] Cannot load dataset in v1.18
Hey @mariosasko, thank you so much for figuring this one out - it certainly looks like a tricky bug 😱 ! I don't think there's a specific reason to use `list` instead of `Sequence` with the script, but I'll let the dataset creators know to see if your suggestion is acceptable. Thank you again!
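For context, the difference between the two feature declarations being discussed looks roughly like this; the field names are invented placeholders, not the actual GEM/RiSAWOZ schema.

```python
from datasets import Features, Sequence, Value

# Plain Python list syntax (what the script reportedly used):
features_list_style = Features(
    {"turns": [{"speaker": Value("string"), "utterance": Value("string")}]}
)

# Suggested Sequence syntax:
features_sequence_style = Features(
    {"turns": Sequence({"speaker": Value("string"), "utterance": Value("string")})}
)
```

Note that the two are not byte-identical representations: a `Sequence` of a dict is stored as a dict of lists, whereas the list syntax keeps a list of dicts, which is part of why the cast behaves differently.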
## Describe the bug I am trying to load the [`GEM/RiSAWOZ` dataset](https://huggingface.co/datasets/GEM/RiSAWOZ) in `datasets` v1.18.1 and am running into a type error when casting the features. The strange thing is that I can load the dataset with v1.17.0. Note that the error is also present if I install from `master...
54
[TypeError: Couldn't cast array of type] Cannot load dataset in v1.18 ## Describe the bug I am trying to load the [`GEM/RiSAWOZ` dataset](https://huggingface.co/datasets/GEM/RiSAWOZ) in `datasets` v1.18.1 and am running into a type error when casting the features. The strange thing is that I can load the dataset wit...
[ -0.1942696422, -0.4909289777, 0.0600804053, 0.5314733982, 0.4639725685, 0.154937163, 0.2992588878, 0.3703401089, 0.187876448, -0.0143709239, 0.1101715788, 0.5128641725, -0.327683568, 0.1034817025, 0.0563642904, -0.277135551, 0.2133808136, 0.0304359533, -0.1236652806, 0.05714205...
https://github.com/huggingface/datasets/issues/3637
[TypeError: Couldn't cast array of type] Cannot load dataset in v1.18
Thanks, this was indeed the fix! Would it make sense to produce a more informative error message in such cases? The issue can be closed.
## Describe the bug I am trying to load the [`GEM/RiSAWOZ` dataset](https://huggingface.co/datasets/GEM/RiSAWOZ) in `datasets` v1.18.1 and am running into a type error when casting the features. The strange thing is that I can load the dataset with v1.17.0. Note that the error is also present if I install from `master...
25
[TypeError: Couldn't cast array of type] Cannot load dataset in v1.18 ## Describe the bug I am trying to load the [`GEM/RiSAWOZ` dataset](https://huggingface.co/datasets/GEM/RiSAWOZ) in `datasets` v1.18.1 and am running into a type error when casting the features. The strange thing is that I can load the dataset wit...
[ -0.1942696422, -0.4909289777, 0.0600804053, 0.5314733982, 0.4639725685, 0.154937163, 0.2992588878, 0.3703401089, 0.187876448, -0.0143709239, 0.1101715788, 0.5128641725, -0.327683568, 0.1034817025, 0.0563642904, -0.277135551, 0.2133808136, 0.0304359533, -0.1236652806, 0.05714205...
https://github.com/huggingface/datasets/issues/3634
Dataset.shuffle(seed=None) gives fixed row permutation
I'm not sure if this is expected behavior. Am I supposed to work with a copy of the dataset, i.e. `shuffled_dataset = data.shuffle(seed=None)`? ```diff import datasets # Some toy example data = datasets.Dataset.from_dict( {"feature": [1, 2, 3, 4, 5], "label": ["a", "b", "c", "d", "e"]} ) +shuffled_d...
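A minimal sketch of the point being made, reusing the issue's toy example: `shuffle` returns a new dataset rather than reordering rows in place, so the returned object has to be used.

```python
import datasets

data = datasets.Dataset.from_dict(
    {"feature": [1, 2, 3, 4, 5], "label": ["a", "b", "c", "d", "e"]}
)

shuffled = data.shuffle(seed=None)  # returns a new, shuffled dataset
print(data["feature"])      # original row order is unchanged
print(shuffled["feature"])  # permuted order
```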
## Describe the bug Repeated attempts to `shuffle` a dataset without specifying a seed give the same results. ## Steps to reproduce the bug ```python import datasets # Some toy example data = datasets.Dataset.from_dict( {"feature": [1, 2, 3, 4, 5], "label": ["a", "b", "c", "d", "e"]} ) # Doesn't work...
158
Dataset.shuffle(seed=None) gives fixed row permutation ## Describe the bug Repeated attempts to `shuffle` a dataset without specifying a seed give the same results. ## Steps to reproduce the bug ```python import datasets # Some toy example data = datasets.Dataset.from_dict( {"feature": [1, 2, 3, 4, 5],...
[ 0.2256516367, -0.2673810124, 0.0401451588, 0.1096486673, 0.2171934992, 0.0051689851, 0.5835940242, -0.0988347903, -0.2688475251, 0.4067517519, 0.0916137397, 0.4485055506, -0.1216275916, 0.30659163, 0.1507817656, 0.1546613276, 0.3101200461, 0.0387729742, -0.0948881954, -0.262906...