Dataset Viewer
Columns (name: type):

- url: string
- repository_url: string
- labels_url: string
- comments_url: string
- events_url: string
- html_url: string
- id: int64
- node_id: string
- number: int64
- title: string
- user: dict
- labels: list
- state: string
- locked: bool
- assignee: dict
- assignees: list
- milestone: dict
- comments: list
- created_at: timestamp[ns, tz=UTC]
- updated_at: timestamp[ns, tz=UTC]
- closed_at: timestamp[ns, tz=UTC]
- author_association: string
- type: float64
- active_lock_reason: float64
- sub_issues_summary: dict
- body: string
- closed_by: dict
- reactions: dict
- timeline_url: string
- performed_via_github_app: float64
- state_reason: string
- draft: float64
- pull_request: dict
- is_pull_request: bool

Each record below lists these fields in this order, separated by |.
https://api.github.com/repos/huggingface/datasets/issues/7699
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7699/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7699/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7699/events
|
https://github.com/huggingface/datasets/issues/7699
| 3,261,053,171 |
I_kwDODunzps7CX7jz
| 7,699 |
Broken link in documentation for "Create a video dataset"
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/122366389?v=4",
"events_url": "https://api.github.com/users/cleong110/events{/privacy}",
"followers_url": "https://api.github.com/users/cleong110/followers",
"following_url": "https://api.github.com/users/cleong110/following{/other_user}",
"gists_url": "https://api.github.com/users/cleong110/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cleong110",
"id": 122366389,
"login": "cleong110",
"node_id": "U_kgDOB0sptQ",
"organizations_url": "https://api.github.com/users/cleong110/orgs",
"received_events_url": "https://api.github.com/users/cleong110/received_events",
"repos_url": "https://api.github.com/users/cleong110/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cleong110/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cleong110/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cleong110",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[] | 2025-07-24T19:46:28 | 2025-07-24T19:46:28 | null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
The link to "the [WebDataset documentation](https://webdataset.github.io/webdataset)." is broken.
https://huggingface.co/docs/datasets/main/en/video_dataset#webdataset
<img width="2048" height="264" alt="Image" src="https://github.com/user-attachments/assets/975dd10c-aad8-42fc-9fbc-de0e2747a326" />
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7699/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7699/timeline
| null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7698
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7698/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7698/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7698/events
|
https://github.com/huggingface/datasets/issues/7698
| 3,255,350,916 |
I_kwDODunzps7CCLaE
| 7,698 |
NotImplementedError when using streaming=True in Google Colab environment
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/100470741?v=4",
"events_url": "https://api.github.com/users/Aniket17200/events{/privacy}",
"followers_url": "https://api.github.com/users/Aniket17200/followers",
"following_url": "https://api.github.com/users/Aniket17200/following{/other_user}",
"gists_url": "https://api.github.com/users/Aniket17200/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Aniket17200",
"id": 100470741,
"login": "Aniket17200",
"node_id": "U_kgDOBf0P1Q",
"organizations_url": "https://api.github.com/users/Aniket17200/orgs",
"received_events_url": "https://api.github.com/users/Aniket17200/received_events",
"repos_url": "https://api.github.com/users/Aniket17200/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Aniket17200/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aniket17200/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Aniket17200",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[
"Hi, @Aniket17200, try upgrading datasets using '!pip install -U datasets'. I hope this will resolve your issue.",
"Thank you @tanuj-rai, it's working great "
] | 2025-07-23T08:04:53 | 2025-07-23T15:06:23 | null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When attempting to load a large dataset (like `tiiuae/falcon-refinedweb` or `allenai/c4`) with `streaming=True` in a standard Google Colab notebook, the process fails with `NotImplementedError: Loading a streaming dataset cached in a LocalFileSystem is not supported yet`. The issue persists even after upgrading `datasets` and `huggingface_hub` and restarting the session.
### Steps to reproduce the bug
1. Open a new Google Colab notebook.
2. (Optional but recommended) Run `!pip install --upgrade datasets huggingface_hub` and restart the runtime.
3. Run the following code:
```python
from datasets import load_dataset

try:
    print("Attempting to load a stream...")
    streaming_dataset = load_dataset('tiiuae/falcon-refinedweb', streaming=True)
    print("Success!")
except Exception as e:
    print(e)
```
### Expected behavior
The `load_dataset` call should return a streaming (`IterableDataset`) object without raising an error, allowing iteration over the dataset.
### Actual behavior
The code fails and prints the following error traceback:
[PASTE THE FULL ERROR TRACEBACK HERE]
(Note: Copy the entire error message you received, from Traceback... to the final error line, and paste it in this section.)
### Environment info
Platform: Google Colab
datasets version: [Run !pip show datasets in Colab and paste the version here]
huggingface_hub version: [Run !pip show huggingface_hub and paste the version here]
Python version: [Run !python --version and paste the version here]
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7698/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7698/timeline
| null | null | null | null | false |
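The fix suggested in the comments on issue 7698 above is simply upgrading datasets. Below is a minimal sketch of verifying that streaming works after the upgrade, assuming the error was caused by the older datasets version; the dataset name is the one from the report.
```python
# Sketch: after `pip install -U datasets` and a runtime restart, stream a few
# rows to confirm that streaming=True no longer raises NotImplementedError.
from itertools import islice

from datasets import load_dataset

streaming_dataset = load_dataset("tiiuae/falcon-refinedweb", streaming=True)

# Only the first few records are fetched; nothing is cached locally.
for row in islice(streaming_dataset["train"], 3):
    print(sorted(row.keys()))
```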
https://api.github.com/repos/huggingface/datasets/issues/7697
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7697/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7697/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7697/events
|
https://github.com/huggingface/datasets/issues/7697
| 3,254,526,399 |
I_kwDODunzps7B_CG_
| 7,697 |
-
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/44517413?v=4",
"events_url": "https://api.github.com/users/kakamond/events{/privacy}",
"followers_url": "https://api.github.com/users/kakamond/followers",
"following_url": "https://api.github.com/users/kakamond/following{/other_user}",
"gists_url": "https://api.github.com/users/kakamond/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kakamond",
"id": 44517413,
"login": "kakamond",
"node_id": "MDQ6VXNlcjQ0NTE3NDEz",
"organizations_url": "https://api.github.com/users/kakamond/orgs",
"received_events_url": "https://api.github.com/users/kakamond/received_events",
"repos_url": "https://api.github.com/users/kakamond/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kakamond/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kakamond/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kakamond",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[] | 2025-07-23T01:30:32 | 2025-07-24T04:17:35 | null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
-
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7697/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7697/timeline
| null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7696
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7696/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7696/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7696/events
|
https://github.com/huggingface/datasets/issues/7696
| 3,253,433,350 |
I_kwDODunzps7B63QG
| 7,696 |
load_dataset() in 4.0.0 returns different audio samples compared to earlier versions breaking reproducibility
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/25346345?v=4",
"events_url": "https://api.github.com/users/Manalelaidouni/events{/privacy}",
"followers_url": "https://api.github.com/users/Manalelaidouni/followers",
"following_url": "https://api.github.com/users/Manalelaidouni/following{/other_user}",
"gists_url": "https://api.github.com/users/Manalelaidouni/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Manalelaidouni",
"id": 25346345,
"login": "Manalelaidouni",
"node_id": "MDQ6VXNlcjI1MzQ2MzQ1",
"organizations_url": "https://api.github.com/users/Manalelaidouni/orgs",
"received_events_url": "https://api.github.com/users/Manalelaidouni/received_events",
"repos_url": "https://api.github.com/users/Manalelaidouni/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Manalelaidouni/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Manalelaidouni/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Manalelaidouni",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[] | 2025-07-22T17:02:17 | 2025-07-22T17:03:24 | null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
In the datasets 4.0.0 release, `load_dataset()` returns different audio samples than earlier versions, which breaks integration tests that depend on consistent sample data across environments (the two environments are listed below).
### Steps to reproduce the bug
```python
from datasets import Audio, load_dataset
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.cast_column("audio", Audio(24000))
sample = ds[0]["audio"]["array"]
print(sample)
# sample in 3.6.0
[0.00231914 0.00245417 0.00187414 ... 0.00061956 0.00101157 0.00076325]
# sample in 4.0.0
array([0.00238037, 0.00220794, 0.00198703, ..., 0.00057983, 0.00085863,
0.00115309], dtype=float32)
```
### Expected behavior
The same dataset should load identical samples across versions to maintain reproducibility.
### Environment info
First env:
- datasets version: 3.6.0
- Platform: Windows-10-10.0.26100-SP0
- Python: 3.11.0
Second env:
- datasets version: 4.0.0
- Platform: Linux-6.1.123+-x86_64-with-glibc2.35
- Python: 3.11.13
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7696/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7696/timeline
| null | null | null | null | false |
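Issue 7696 above compares decoded audio for exact equality across datasets versions. If the difference comes from a change in the decoding/resampling backend rather than in the underlying data, one hedged way to keep integration tests stable is to compare with a tolerance instead of exact values; the helper and the tolerance below are illustrative assumptions, not a fix from the maintainers.
```python
# Sketch: tolerance-based comparison for decoded audio whose backend may differ
# slightly between datasets versions. The sample values are the first three
# numbers printed in the issue for 3.6.0 vs 4.0.0; atol=1e-3 is an assumption.
import numpy as np

def assert_audio_close(actual, expected, atol=1e-3):
    actual = np.asarray(actual, dtype=np.float32)
    expected = np.asarray(expected, dtype=np.float32)
    assert actual.shape == expected.shape, f"shape mismatch: {actual.shape} vs {expected.shape}"
    assert np.allclose(actual, expected, atol=atol), "decoded audio drifted beyond tolerance"

old = [0.00231914, 0.00245417, 0.00187414]  # datasets 3.6.0
new = [0.00238037, 0.00220794, 0.00198703]  # datasets 4.0.0
assert_audio_close(new, old)
```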
https://api.github.com/repos/huggingface/datasets/issues/7695
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7695/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7695/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7695/events
|
https://github.com/huggingface/datasets/pull/7695
| 3,251,904,843 |
PR_kwDODunzps6gB7jS
| 7,695 |
Support downloading specific splits in load_dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[
"I’ve completed the following steps to continue the partial split download support (from PR #6832):\r\n\r\nI did changes on top of what has been done by mario. Here are some of those changes: \r\n- Restored support for writing multiple split shards:\r\n\r\n- In _prepare_split_single, we now correctly replace JJJJJ and SSSSS placeholders in the fpath for job/shard IDs before creating the writer.\r\n\r\n- Added os.makedirs(os.path.dirname(path), exist_ok=True) after placeholder substitution to prevent FileNotFoundError.\r\n\r\n- Applied the fix to both split writers:\r\n\r\n 1] self._generate_examples version (used by most modules).\r\n\r\n 2] self._generate_tables version (used by IterableDatasetBuilder).\r\n\r\n- Confirmed 109/113 tests passing, meaning the general logic is working across the board.\r\n\r\nWhat’s still failing\r\n4 integration tests fail:\r\n\r\n`test_load_hub_dataset_with_single_config_in_metadata`\r\n\r\n`test_load_hub_dataset_with_two_config_in_metadata`\r\n\r\n`test_load_hub_dataset_with_metadata_config_in_parallel`\r\n\r\n`test_reload_old_cache_from_2_15`\r\n\r\nAll are due to FileNotFoundError from uncreated output paths, which I'm currently finalizing by ensuring os.makedirs() is correctly applied before every writer instantiation.\r\n\r\nI will update about these fixes after running tests!",
"@lhoestq this was just an update"
] | 2025-07-22T09:33:54 | 2025-07-24T07:17:57 | null |
CONTRIBUTOR
| null | null | null |
This PR builds on #6832 by @mariosasko.
May close - #4101, #2538
Discussion - https://github.com/huggingface/datasets/pull/7648#issuecomment-3084050130
---
### Note - This PR is a work in progress; frequent changes will be pushed.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7695/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7695/timeline
| null | null | 0 |
{
"diff_url": "https://github.com/huggingface/datasets/pull/7695.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7695",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7695.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7695"
}
| true |
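The first comment on PR 7695 above describes substituting the JJJJJ/SSSSS job and shard placeholders in the output path and creating the parent directory before instantiating a writer. Below is a standalone illustration of that pattern; the helper name and path template are hypothetical, not the PR's actual code.
```python
# Illustration of the placeholder-substitution + makedirs pattern described in
# the PR comment. `prepare_shard_path` and the path template are made up.
import os

def prepare_shard_path(fpath: str, job_id: int, shard_id: int) -> str:
    path = fpath.replace("JJJJJ", f"{job_id:05d}").replace("SSSSS", f"{shard_id:05d}")
    os.makedirs(os.path.dirname(path), exist_ok=True)  # avoid FileNotFoundError when the writer opens the file
    return path

print(prepare_shard_path("cache/train-JJJJJ-SSSSS.arrow", job_id=0, shard_id=3))
# cache/train-00000-00003.arrow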
https://api.github.com/repos/huggingface/datasets/issues/7694
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7694/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7694/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7694/events
|
https://github.com/huggingface/datasets/issues/7694
| 3,247,600,408 |
I_kwDODunzps7BknMY
| 7,694 |
Dataset.to_json consumes excessive memory, appears to not be a streaming operation
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/49603999?v=4",
"events_url": "https://api.github.com/users/ycq0125/events{/privacy}",
"followers_url": "https://api.github.com/users/ycq0125/followers",
"following_url": "https://api.github.com/users/ycq0125/following{/other_user}",
"gists_url": "https://api.github.com/users/ycq0125/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ycq0125",
"id": 49603999,
"login": "ycq0125",
"node_id": "MDQ6VXNlcjQ5NjAzOTk5",
"organizations_url": "https://api.github.com/users/ycq0125/orgs",
"received_events_url": "https://api.github.com/users/ycq0125/received_events",
"repos_url": "https://api.github.com/users/ycq0125/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ycq0125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ycq0125/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ycq0125",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[] | 2025-07-21T07:51:25 | 2025-07-21T07:51:25 | null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When exporting a `Dataset` object to a JSON Lines file with the `.to_json(lines=True)` method, the process consumes a very large amount of memory. Memory usage is proportional to the size of the entire `Dataset` being saved, rather than staying low and constant as a streaming write would.
This behavior is unexpected, as the JSONL format is line-oriented and ideally suited for streaming writes. This issue can easily lead to Out-of-Memory (OOM) errors when exporting large datasets, especially in memory-constrained environments like Docker containers.
<img width="1343" height="329" alt="Image" src="https://github.com/user-attachments/assets/518b4263-ad12-422d-9672-28ffe97240ce" />
### Steps to reproduce the bug
```python
import os

from datasets import load_dataset, Dataset
from loguru import logger

# A public dataset to test with
REPO_ID = "adam89/TinyStoriesChinese"
SUBSET = "default"
SPLIT = "train"
NUM_ROWS_TO_LOAD = 10  # Use a reasonably large number to see the memory spike

def run_test():
    """Loads data into memory and then saves it, triggering the memory issue."""
    logger.info("Step 1: Loading data into an in-memory Dataset object...")
    # Create an in-memory Dataset object from a stream
    # This simulates having a processed dataset ready to be saved
    iterable_dataset = load_dataset(REPO_ID, name=SUBSET, split=SPLIT, streaming=True)
    limited_stream = iterable_dataset.take(NUM_ROWS_TO_LOAD)
    in_memory_dataset = Dataset.from_generator(limited_stream.__iter__)
    logger.info(f"Dataset with {len(in_memory_dataset)} rows created in memory.")

    output_path = "./test_output.jsonl"
    logger.info(f"Step 2: Saving the dataset to {output_path} using .to_json()...")
    logger.info("Please monitor memory usage during this step.")
    # This is the step that causes the massive memory allocation
    in_memory_dataset.to_json(output_path, force_ascii=False)
    logger.info("Save operation complete.")
    os.remove(output_path)

if __name__ == "__main__":
    # To see the memory usage clearly, run this script with a memory profiler:
    # python -m memray run your_script_name.py
    # python -m memray tree xxx.bin
    run_test()
```
### Expected behavior
I would expect the .to_json(lines=True) method to be a memory-efficient, streaming operation. The memory usage should remain low and relatively constant, as data is converted and written to the file line-by-line or in small batches. The memory footprint should not be proportional to the total number of rows in the in_memory_dataset.
### Environment info
datasets version: 3.6.0
Python version: 3.9.18
OS: macOS 15.3.1 (arm64)
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7694/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7694/timeline
| null | null | null | null | false |
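Until `.to_json()` itself writes in a streaming fashion, a hedged workaround for issue 7694 above is to export JSON Lines manually, one row at a time, which keeps memory flat. This sketch reuses the dataset and output path from the reproduction script; it is not part of the datasets API.
```python
# Sketch: constant-memory JSONL export by iterating the streaming dataset and
# writing one line per row instead of materializing the whole table.
import json

from datasets import load_dataset

stream = load_dataset("adam89/TinyStoriesChinese", name="default", split="train", streaming=True)

with open("test_output.jsonl", "w", encoding="utf-8") as f:
    for row in stream.take(10):
        f.write(json.dumps(row, ensure_ascii=False) + "\n")
```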
https://api.github.com/repos/huggingface/datasets/issues/7693
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7693/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7693/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7693/events
|
https://github.com/huggingface/datasets/issues/7693
| 3,246,369,678 |
I_kwDODunzps7Bf6uO
| 7,693 |
Dataset scripts are no longer supported, but found superb.py
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/114297534?v=4",
"events_url": "https://api.github.com/users/edwinzajac/events{/privacy}",
"followers_url": "https://api.github.com/users/edwinzajac/followers",
"following_url": "https://api.github.com/users/edwinzajac/following{/other_user}",
"gists_url": "https://api.github.com/users/edwinzajac/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/edwinzajac",
"id": 114297534,
"login": "edwinzajac",
"node_id": "U_kgDOBtAKvg",
"organizations_url": "https://api.github.com/users/edwinzajac/orgs",
"received_events_url": "https://api.github.com/users/edwinzajac/received_events",
"repos_url": "https://api.github.com/users/edwinzajac/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/edwinzajac/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/edwinzajac/subscriptions",
"type": "User",
"url": "https://api.github.com/users/edwinzajac",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[
"I got a pretty similar issue when I try to load bigbio/neurotrial_ner dataset. \n`Dataset scripts are no longer supported, but found neurotrial_ner.py`",
"Same here. I was running this tutorial and got a similar error: https://github.com/openai/whisper/discussions/654 (I'm a first-time transformers library user)\n\nRuntimeError: Dataset scripts are no longer supported, but found librispeech_asr.py\n\nWhat am I supposed to do at this point?\n\nThanks",
"hey I got the same error and I have tried to downgrade version to 3.6.0 and it works.\n`pip install datasets==3.6.0`",
"Thank you very much @Tin-viAct . That indeed did the trick for me :) \nNow the code continue its normal flow ",
"Thanks @Tin-viAct, Works!"
] | 2025-07-20T13:48:06 | 2025-07-24T14:25:36 | null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Hello,
I'm trying to follow the [Hugging Face Pipelines tutorial](https://huggingface.co/docs/transformers/main_classes/pipelines), but the tutorial seems to work only with older datasets versions.
I then get the following error:
```
--------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[65], [line 1](vscode-notebook-cell:?execution_count=65&line=1)
----> [1](vscode-notebook-cell:?execution_count=65&line=1) dataset = datasets.load_dataset("superb", name="asr", split="test")
3 # KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item
4 # as we're not interested in the *target* part of the dataset. For sentence pair use KeyPairDataset
5 for out in tqdm(pipe(KeyDataset(dataset, "file"))):
File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1392, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, **config_kwargs)
1387 verification_mode = VerificationMode(
1388 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
1389 )
1391 # Create a dataset builder
-> [1392](https://file+.vscode-resource.vscode-cdn.net/home/edwin/Desktop/debug/llm_course/~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1392) builder_instance = load_dataset_builder(
1393 path=path,
1394 name=name,
1395 data_dir=data_dir,
1396 data_files=data_files,
1397 cache_dir=cache_dir,
1398 features=features,
1399 download_config=download_config,
1400 download_mode=download_mode,
1401 revision=revision,
1402 token=token,
1403 storage_options=storage_options,
1404 **config_kwargs,
1405 )
1407 # Return iterable dataset in case of streaming
1408 if streaming:
File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1132, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, storage_options, **config_kwargs)
1130 if features is not None:
1131 features = _fix_for_backward_compatible_features(features)
-> [1132](https://file+.vscode-resource.vscode-cdn.net/home/edwin/Desktop/debug/llm_course/~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1132) dataset_module = dataset_module_factory(
1133 path,
1134 revision=revision,
1135 download_config=download_config,
1136 download_mode=download_mode,
1137 data_dir=data_dir,
1138 data_files=data_files,
1139 cache_dir=cache_dir,
1140 )
1141 # Get dataset builder class
1142 builder_kwargs = dataset_module.builder_kwargs
File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1031, in dataset_module_factory(path, revision, download_config, download_mode, data_dir, data_files, cache_dir, **download_kwargs)
1026 if isinstance(e1, FileNotFoundError):
1027 raise FileNotFoundError(
1028 f"Couldn't find any data file at {relative_to_absolute_path(path)}. "
1029 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
1030 ) from None
-> [1031](https://file+.vscode-resource.vscode-cdn.net/home/edwin/Desktop/debug/llm_course/~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1031) raise e1 from None
1032 else:
1033 raise FileNotFoundError(f"Couldn't find any data file at {relative_to_absolute_path(path)}.")
File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:989, in dataset_module_factory(path, revision, download_config, download_mode, data_dir, data_files, cache_dir, **download_kwargs)
981 try:
982 api.hf_hub_download(
983 repo_id=path,
984 filename=filename,
(...) 987 proxies=download_config.proxies,
988 )
--> [989](https://file+.vscode-resource.vscode-cdn.net/home/edwin/Desktop/debug/llm_course/~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:989) raise RuntimeError(f"Dataset scripts are no longer supported, but found {filename}")
990 except EntryNotFoundError:
991 # Use the infos from the parquet export except in some cases:
992 if data_dir or data_files or (revision and revision != "main"):
RuntimeError: Dataset scripts are no longer supported, but found superb.py
```
NB: I tried replacing "superb" with "anton-l/superb_demo", but I get a 'torchcodec' import error. Maybe I misunderstood something.
### Steps to reproduce the bug
```python
import datasets
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
from tqdm.auto import tqdm

pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0)
dataset = datasets.load_dataset("superb", name="asr", split="test")

# KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item
# as we're not interested in the *target* part of the dataset. For sentence pair use KeyPairDataset
for out in tqdm(pipe(KeyDataset(dataset, "file"))):
    print(out)
    # {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"}
    # {"text": ....}
    # ....
```
### Expected behavior
Get the tutorial expected results
### Environment info
--- SYSTEM INFO ---
Operating System: Ubuntu 24.10
Kernel: Linux 6.11.0-29-generic
Architecture: x86-64
--- PYTHON ---
Python 3.11.13
--- VENV INFO ----
datasets=4.0.0
transformers=4.53
tqdm=4.67.1
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7693/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7693/timeline
| null | null | null | null | false |
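The workaround that resolved issue 7693 above for several commenters is pinning datasets to 3.6.0, the last release that still executes dataset scripts. Below is a minimal sketch of that path, assuming the script-based `superb` repo still resolves under 3.x (script datasets there require trust_remote_code=True).
```python
# Sketch: after `pip install "datasets==3.6.0"`, script-based datasets load again.
# trust_remote_code=True is required for script repos on the 3.x series.
import datasets

print(datasets.__version__)  # expected: 3.6.0

dataset = datasets.load_dataset("superb", name="asr", split="test", trust_remote_code=True)
print(dataset[0]["file"])
```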
https://api.github.com/repos/huggingface/datasets/issues/7692
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7692/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7692/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7692/events
|
https://github.com/huggingface/datasets/issues/7692
| 3,246,268,635 |
I_kwDODunzps7BfiDb
| 7,692 |
xopen: invalid start byte for streaming dataset with trust_remote_code=True
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5188731?v=4",
"events_url": "https://api.github.com/users/sedol1339/events{/privacy}",
"followers_url": "https://api.github.com/users/sedol1339/followers",
"following_url": "https://api.github.com/users/sedol1339/following{/other_user}",
"gists_url": "https://api.github.com/users/sedol1339/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sedol1339",
"id": 5188731,
"login": "sedol1339",
"node_id": "MDQ6VXNlcjUxODg3MzE=",
"organizations_url": "https://api.github.com/users/sedol1339/orgs",
"received_events_url": "https://api.github.com/users/sedol1339/received_events",
"repos_url": "https://api.github.com/users/sedol1339/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sedol1339/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sedol1339/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sedol1339",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[] | 2025-07-20T11:08:20 | 2025-07-20T11:08:20 | null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I am trying to load the YODAS2 dataset with datasets==3.6.0:
```
from datasets import load_dataset
next(iter(load_dataset('espnet/yodas2', name='ru000', split='train', streaming=True, trust_remote_code=True)))
```
And get `UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid start byte`
The cause of the error is the following:
```
from datasets.utils.file_utils import xopen
filepath = 'https://huggingface.co/datasets/espnet/yodas2/resolve/c9674490249665d658f527e2684848377108d82c/data/ru000/text/00000000.json'
xopen(filepath, 'r').read()
>>> UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid start byte
```
And the cause of this is the following:
```
import fsspec
fsspec.open(
    'hf://datasets/espnet/yodas2@c9674490249665d658f527e2684848377108d82c/data/ru000/text/00000000.json',
    mode='r',
    hf={'token': None, 'endpoint': 'https://huggingface.co'},
).open().read()
>>> UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid start byte
```
Is it true that streaming=True loading is not supported anymore for trust_remote_code=True, even with datasets==3.6.0? This breaks backward compatibility.
### Steps to reproduce the bug
```
from datasets import load_dataset
next(iter(load_dataset('espnet/yodas2', name='ru000', split='train', streaming=True)))
```
### Expected behavior
No errors expected
### Environment info
datasets==3.6.0, ubuntu 24.04
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7692/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7692/timeline
| null | null | null | null | false |
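A UnicodeDecodeError on a file that is expected to be JSON usually means the bytes are not plain UTF-8 text (for example, a compressed payload). Below is a small diagnostic sketch for issue 7692 above that reads the first bytes in binary mode; it only inspects the payload and makes no claim about what the YODAS2 file actually contains.
```python
# Diagnostic sketch: fetch the leading bytes without decoding to see whether the
# payload is UTF-8 JSON or something else (e.g. a gzip stream starts with 1f 8b).
import fsspec

url = "hf://datasets/espnet/yodas2@c9674490249665d658f527e2684848377108d82c/data/ru000/text/00000000.json"
with fsspec.open(url, mode="rb") as f:  # binary mode: no UTF-8 decoding
    head = f.read(8)

print(head.hex(" "))
```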
https://api.github.com/repos/huggingface/datasets/issues/7691
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7691/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7691/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7691/events
|
https://github.com/huggingface/datasets/issues/7691
| 3,245,547,170 |
I_kwDODunzps7Bcx6i
| 7,691 |
Large WebDataset: pyarrow.lib.ArrowCapacityError on load() even with streaming
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/122366389?v=4",
"events_url": "https://api.github.com/users/cleong110/events{/privacy}",
"followers_url": "https://api.github.com/users/cleong110/followers",
"following_url": "https://api.github.com/users/cleong110/following{/other_user}",
"gists_url": "https://api.github.com/users/cleong110/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cleong110",
"id": 122366389,
"login": "cleong110",
"node_id": "U_kgDOB0sptQ",
"organizations_url": "https://api.github.com/users/cleong110/orgs",
"received_events_url": "https://api.github.com/users/cleong110/received_events",
"repos_url": "https://api.github.com/users/cleong110/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cleong110/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cleong110/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cleong110",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[
"It seems the error occurs right here, as it tries to infer the Features: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/webdataset/webdataset.py#L78-L90",
"It seems to me that if we have something that is so large that it cannot fit in pa.table, the fallback method should be to just set it as \"binary\" type, perhaps?",
"I also tried creating a dataset_info.json but the webdataset builder didn't seem to look for it and load it",
"Workaround on my end, removed all videos larger than 2GB for now. The dataset no longer crashes."
] | 2025-07-19T18:40:27 | 2025-07-21T19:17:33 | null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I am creating a large WebDataset-format dataset for sign language processing research, and a number of the videos are over 2 GB. The instant I hit a shard containing one of those videos, I get an ArrowCapacityError, even with streaming.
I made a config for the dataset that specifically includes just one problem shard, and the error triggers as soon as you run load_dataset(), even with streaming=True:
```
ds = load_dataset("bible-nlp/sign-bibles", "ase_chronological_bible_translation_in_american_sign_language_119_introductions_and_passages_debugging_problem_shard", streaming=True, split="train")
```
This gives:
```
File "/opt/home/cleong/projects/semantic_and_visual_similarity/sign-bibles-dataset/sign_bibles_dataset/tasks/test_iteration.py", line 13, in iterate_keys
ds = load_dataset("bible-nlp/sign-bibles", language_subset, streaming=True, split="train")
File "/opt/home/cleong/envs/sign-bibles-dataset/lib/python3.13/site-packages/datasets/load.py", line 1409, in load_dataset
return builder_instance.as_streaming_dataset(split=split)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
File "/opt/home/cleong/envs/sign-bibles-dataset/lib/python3.13/site-packages/datasets/builder.py", line 1225, in as_streaming_dataset
splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/home/cleong/envs/sign-bibles-dataset/lib/python3.13/site-packages/datasets/packaged_modules/webdataset/webdataset.py", line 88, in _split_generators
pa.Table.from_pylist(cast_to_python_objects([example], only_1d_for_numpy=True))
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/table.pxi", line 2046, in pyarrow.lib._Tabular.from_pylist
File "pyarrow/table.pxi", line 6431, in pyarrow.lib._from_pylist
File "pyarrow/table.pxi", line 4893, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 1607, in pyarrow.lib._sanitize_arrays
File "pyarrow/table.pxi", line 1588, in pyarrow.lib._schema_from_arrays
File "pyarrow/array.pxi", line 375, in pyarrow.lib.array
File "pyarrow/array.pxi", line 45, in pyarrow.lib._sequence_to_array
File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 3980158992
```
### Steps to reproduce the bug
```python
#!/usr/bin/env python
import argparse

from datasets import get_dataset_config_names, load_dataset
from tqdm import tqdm
from pyarrow.lib import ArrowCapacityError, ArrowInvalid

def iterate_keys(language_subset: str) -> None:
    """Iterate over all samples in the Sign Bibles dataset and print idx and sample key."""
    # https://huggingface.co/docs/datasets/v4.0.0/en/package_reference/loading_methods#datasets.load_dataset
    ds = load_dataset("bible-nlp/sign-bibles", language_subset, streaming=True, split="train")
    print(f"\n==> Loaded dataset config '{language_subset}'")
    idx = 0
    estimated_shard_index = 0
    samples_per_shard = 5
    with tqdm(desc=f"{language_subset} samples") as pbar:
        iterator = iter(ds)
        while True:
            try:
                if idx % samples_per_shard == 0 and idx > 0:  # 5 samples per shard: 0, 1, 2, 3, 4
                    print(f"Estimated Shard idx (starting at 0, {samples_per_shard}/shard): {estimated_shard_index}")
                    estimated_shard_index += 1
                sample = next(iterator)
                sample_key = sample.get("__key__", "missing-key")
                print(f"[{language_subset}] idx={idx}, key={sample_key}")
                idx += 1
                pbar.update(1)
            except StopIteration:
                print(f"Finished iterating through {idx} samples of {language_subset}")
                break
            except (ArrowCapacityError, ArrowInvalid) as e:
                print(f"PyArrow error on idx={idx}, config={language_subset}: {e}")
                idx += 1
                pbar.update(1)
                continue
            except KeyError as e:
                print(f"Missing key error on idx={idx}, config={language_subset}: {e}")
                idx += 1
                pbar.update(1)
                continue

def main():
    configs = get_dataset_config_names("bible-nlp/sign-bibles")
    print(f"Available configs: {configs}")
    configs = [
        "ase_chronological_bible_translation_in_american_sign_language_119_introductions_and_passages_debugging_problem_shard"
    ]
    for language_subset in configs:
        print(f"TESTING CONFIG {language_subset}")
        iterate_keys(language_subset)
        # try:
        # except (ArrowCapacityError, ArrowInvalid) as e:
        #     print(f"PyArrow error at config level for {language_subset}: {e}")
        #     continue
        # except RuntimeError as e:
        #     print(f"RuntimeError at config level for {language_subset}: {e}")
        #     continue

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Iterate through Sign Bibles dataset and print sample keys.")
    args = parser.parse_args()
    main()
```
### Expected behavior
I expect that when I load with streaming=True, no data should actually be loaded up front. https://huggingface.co/docs/datasets/main/en/package_reference/loading_methods#datasets.load_dataset says that with streaming=True:
>In the streaming case:
> Don’t download or cache anything. Instead, the dataset is lazily loaded and will be streamed on-the-fly when iterating on it.

I did expect some trouble with large files, but also that streaming mode would not actually try to load them unless requested, e.g. with sample["mp4"].
### Environment info
Local setup: Conda environment on Ubuntu, pip list includes the following
datasets 4.0.0
pyarrow 20.0.0
Verified on Colab: https://colab.research.google.com/drive/1HdN8stlROWrLSYXUoNeV0vQ9pClhIVM8?usp=sharing, though there it crashes by using up all available RAM
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7691/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7691/timeline
| null | null | null | null | false |
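The last comment on issue 7691 above reports a practical workaround: drop videos above Arrow's ~2 GiB binary limit before packing the shards. A small sketch of that filtering step follows, with a hypothetical source directory.
```python
# Sketch of the reported workaround: skip files larger than the limit quoted in
# the error message before building WebDataset shards. "raw_videos" is hypothetical.
from pathlib import Path

ARROW_BINARY_LIMIT = 2_147_483_646  # bytes, from the ArrowCapacityError message

videos = sorted(Path("raw_videos").glob("*.mp4"))
keep = [p for p in videos if p.stat().st_size < ARROW_BINARY_LIMIT]
skipped = [p for p in videos if p.stat().st_size >= ARROW_BINARY_LIMIT]

print(f"keeping {len(keep)} videos, skipping {len(skipped)} over the limit")
```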
https://api.github.com/repos/huggingface/datasets/issues/7690
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7690/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7690/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7690/events
|
https://github.com/huggingface/datasets/pull/7690
| 3,244,380,691 |
PR_kwDODunzps6fozag
| 7,690 |
HDF5 support
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https://api.github.com/users/klamike/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/klamike",
"id": 17013474,
"login": "klamike",
"node_id": "MDQ6VXNlcjE3MDEzNDc0",
"organizations_url": "https://api.github.com/users/klamike/orgs",
"received_events_url": "https://api.github.com/users/klamike/received_events",
"repos_url": "https://api.github.com/users/klamike/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klamike/subscriptions",
"type": "User",
"url": "https://api.github.com/users/klamike",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[
"@lhoestq This is ready for review now. Note that it doesn't support *all* HDF5 files (and I don't think that's worth attempting)... the biggest assumption is that the first dimension of each dataset corresponds to rows in the split.",
"A few to-dos which I think can be left for future PRs (which I am happy to do/help with -- just this one is already huge 😄 ):\r\n- [Enum types](https://docs.h5py.org/en/stable/special.html#enumerated-types)\r\n- HDF5 [io](https://github.com/huggingface/datasets/tree/main/src/datasets/io)\r\n- [dataset-viewer](https://github.com/huggingface/dataset-viewer) support (not sure if changes are needed with the way it is written now)"
] | 2025-07-18T21:09:41 | 2025-07-24T20:31:36 | null |
NONE
| null | null | null |
This PR adds support for tabular HDF5 file(s) by converting each row to an Arrow table. It supports columns with the usual dtypes, including up to 5-dimensional arrays, and handles complex/compound types by splitting them into several columns. All datasets within the HDF5 file should have rows on the first dimension (groups/subgroups are still allowed). Closes #3113.
Replaces #7625 which only supports a relatively small subset of HDF5.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7690/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7690/timeline
| null | null | 0 |
{
"diff_url": "https://github.com/huggingface/datasets/pull/7690.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7690",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7690.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7690"
}
| true |
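The key assumption stated in PR 7690 above is that every HDF5 dataset in the file has rows along its first dimension, with groups and subgroups allowed. Below is a small h5py sketch of a file that matches that layout; the file name and column names are made up for illustration.
```python
# Sketch of an HDF5 layout compatible with the PR's assumption: the first
# dimension of every dataset is the row count. Names are hypothetical.
import h5py
import numpy as np

n_rows = 100
with h5py.File("example.h5", "w") as f:
    f.create_dataset("scalar_feature", data=np.arange(n_rows, dtype=np.int64))
    f.create_dataset("vector_feature", data=np.random.rand(n_rows, 16))
    nested = f.create_group("nested")
    nested.create_dataset("image_like", data=np.zeros((n_rows, 3, 8, 8), dtype=np.float32))
```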
https://api.github.com/repos/huggingface/datasets/issues/7689
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7689/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7689/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7689/events
|
https://github.com/huggingface/datasets/issues/7689
| 3,242,580,301 |
I_kwDODunzps7BRdlN
| 7,689 |
BadRequestError for loading dataset?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/45011687?v=4",
"events_url": "https://api.github.com/users/WPoelman/events{/privacy}",
"followers_url": "https://api.github.com/users/WPoelman/followers",
"following_url": "https://api.github.com/users/WPoelman/following{/other_user}",
"gists_url": "https://api.github.com/users/WPoelman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/WPoelman",
"id": 45011687,
"login": "WPoelman",
"node_id": "MDQ6VXNlcjQ1MDExNjg3",
"organizations_url": "https://api.github.com/users/WPoelman/orgs",
"received_events_url": "https://api.github.com/users/WPoelman/received_events",
"repos_url": "https://api.github.com/users/WPoelman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/WPoelman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WPoelman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/WPoelman",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"Same here, for `HuggingFaceFW/fineweb`. Code that worked with no issues for the last 2 months suddenly fails today. Tried updating `datasets`, `huggingface_hub`, `fsspec` to newest versions, but the same error occurs.",
"I'm also hitting this issue, with `mandarjoshi/trivia_qa`; My dataset loading was working successfully yesterday - I'm using `huggingface-hub==0.27.1`, `datasets==3.2.0`",
"Same, here with `datasets==3.6.0`",
"Same, with `datasets==4.0.0`.",
"Same here tried different versions of huggingface-hub and datasets but the error keeps occuring ",
"A temporary workaround is to first download your dataset with\n\nhuggingface-cli download HuggingFaceH4/ultrachat_200k --repo-type dataset\n\nThen find the local path of the dataset typically like ~/.cache/huggingface/hub/HuggingFaceH4-ultrachat_200k/snapshots/*id*\n\nAnd then load like \n\nfrom datasets import load_dataset\ndataset = load_dataset(\"~/.cache/huggingface/hub/HuggingFaceH4-ultrachat_200k/snapshots/*id*\")\n",
"I am also experiencing this issue. I was trying to load TinyStories\nds = datasets.load_dataset(\"roneneldan/TinyStories\", streaming=True, split=\"train\")\n\nresulting in the previously stated error:\nException has occurred: BadRequestError\n(Request ID: Root=1-687a1d09-66cceb496c9401b1084133d6;3550deed-c459-4799-bc74-97924742bd94)\n\nBad request:\n* Invalid input: expected array, received string * at paths * Invalid input: expected boolean, received string * at expand\n✖ Invalid input: expected array, received string\n → at paths\n✖ Invalid input: expected boolean, received string\n → at expand\nFileNotFoundError: Dataset roneneldan/TinyStories is not cached in None\n\nThis very code worked fine yesterday, so it's a very recent issue.\n\nEnvironment info:\nprint(\"datasets version:\", datasets.__version__)\nprint(\"huggingface_hub version:\", huggingface_hub.__version__)\nprint(\"pyarrow version:\", pyarrow.__version__)\nprint(\"pandas version:\", pandas.__version__)\nprint(\"fsspec version:\", fsspec.__version__)\nprint(\"Python version:\", sys.version)\nprint(\"Platform:\", platform.platform())\ndatasets version: 4.0.0\nhuggingface_hub version: 0.33.4\npyarrow version: 19.0.0\npandas version: 2.2.3\nfsspec version: 2024.9.0\nPython version: 3.12.11 (main, Jun 10 2025, 11:55:20) [GCC 15.1.1 20250425]\nPlatform: Linux-6.15.6-arch1-1-x86_64-with-glibc2.41",
"Same here with datasets==3.6.0\n```\nhuggingface_hub.errors.BadRequestError: (Request ID: Root=1-687a238d-27374f964534f79f702bc239;61f0669c-cb70-4aff-b57b-73a446f9c65e)\n\nBad request:\n* Invalid input: expected array, received string * at paths * Invalid input: expected boolean, received string * at expand\n✖ Invalid input: expected array, received string\n → at paths\n✖ Invalid input: expected boolean, received string\n → at expand\n```",
"Same here, works perfectly yesterday\n\n```\nError code: ConfigNamesError\nException: BadRequestError\nMessage: (Request ID: Root=1-687a23a5-314b45b36ce962cf0e431b9a;b979ddb2-a80b-483c-8b1e-403e24e83127)\n\nBad request:\n* Invalid input: expected array, received string * at paths * Invalid input: expected boolean, received string * at expand\n✖ Invalid input: expected array, received string\n → at paths\n✖ Invalid input: expected boolean, received string\n → at expand\n```",
"It was literally working for me and then suddenly it stopped working next time I run the command. Same issue but private repo so I can't share example. ",
"A bug from Hugging Face not us",
"Same here!",
"@LMSPaul thanks! The workaround seems to work (at least for the datasets I tested).\n\nOn the command line:\n```sh\nhuggingface-cli download <dataset-name> --repo-type dataset --local-dir <local-dir>\n```\n\nAnd then in Python:\n```python\nfrom datasets import load_dataset\n\n# The dataset-specific options seem to work with this as well, \n# except for a warning from \"trust_remote_code\"\nds = load_dataset(<local-dir>)\n```",
"Same for me.. I couldn't load ..\nIt was perfectly working yesterday..\n\n\nfrom datasets import load_dataset\nraw_datasets = load_dataset(\"glue\", \"mrpc\")\n\nThe error resulting is given below\n\n---------------------------------------------------------------------------\nBadRequestError Traceback (most recent call last)\n/tmp/ipykernel_60/772458687.py in <cell line: 0>()\n 1 from datasets import load_dataset\n----> 2 raw_datasets = load_dataset(\"glue\", \"mrpc\")\n\n/usr/local/lib/python3.11/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)\n 2060 \n 2061 # Create a dataset builder\n-> 2062 builder_instance = load_dataset_builder(\n 2063 path=path,\n 2064 name=name,\n\n/usr/local/lib/python3.11/dist-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs)\n 1780 download_config = download_config.copy() if download_config else DownloadConfig()\n 1781 download_config.storage_options.update(storage_options)\n-> 1782 dataset_module = dataset_module_factory(\n 1783 path,\n 1784 revision=revision,\n\n/usr/local/lib/python3.11/dist-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)\n 1662 f\"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}\"\n 1663 ) from None\n-> 1664 raise e1 from None\n 1665 elif trust_remote_code:\n 1666 raise FileNotFoundError(\n\n/usr/local/lib/python3.11/dist-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)\n 1627 download_mode=download_mode,\n 1628 use_exported_dataset_infos=use_exported_dataset_infos,\n-> 1629 ).get_module()\n 1630 except GatedRepoError as e:\n 1631 message = f\"Dataset '{path}' is a gated dataset on the Hub.\"\n\n/usr/local/lib/python3.11/dist-packages/datasets/load.py in get_module(self)\n 1017 else:\n 1018 patterns = get_data_patterns(base_path, download_config=self.download_config)\n-> 1019 data_files = DataFilesDict.from_patterns(\n 1020 patterns,\n 1021 base_path=base_path,\n\n/usr/local/lib/python3.11/dist-packages/datasets/data_files.py in from_patterns(cls, patterns, base_path, allowed_extensions, download_config)\n 687 patterns_for_key\n 688 if isinstance(patterns_for_key, DataFilesList)\n--> 689 else DataFilesList.from_patterns(\n 690 patterns_for_key,\n 691 base_path=base_path,\n\n/usr/local/lib/python3.11/dist-packages/datasets/data_files.py in from_patterns(cls, patterns, base_path, allowed_extensions, download_config)\n 580 try:\n 581 data_files.extend(\n--> 582 resolve_pattern(\n 583 pattern,\n 584 base_path=base_path,\n\n/usr/local/lib/python3.11/dist-packages/datasets/data_files.py in resolve_pattern(pattern, base_path, allowed_extensions, download_config)\n 358 matched_paths = [\n 359 filepath if filepath.startswith(protocol_prefix) else protocol_prefix + filepath\n--> 360 for filepath, info in 
fs.glob(pattern, detail=True, **glob_kwargs).items()\n 361 if (info[\"type\"] == \"file\" or (info.get(\"islink\") and os.path.isfile(os.path.realpath(filepath))))\n 362 and (xbasename(filepath) not in files_to_ignore)\n\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/hf_file_system.py in glob(self, path, **kwargs)\n 519 kwargs = {\"expand_info\": kwargs.get(\"detail\", False), **kwargs}\n 520 path = self.resolve_path(path, revision=kwargs.get(\"revision\")).unresolve()\n--> 521 return super().glob(path, **kwargs)\n 522 \n 523 def find(\n\n/usr/local/lib/python3.11/dist-packages/fsspec/spec.py in glob(self, path, maxdepth, **kwargs)\n 635 # any exception allowed bar FileNotFoundError?\n 636 return False\n--> 637 \n 638 def lexists(self, path, **kwargs):\n 639 \"\"\"If there is a file at the given path (including\n\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/hf_file_system.py in find(self, path, maxdepth, withdirs, detail, refresh, revision, **kwargs)\n 554 \"\"\"\n 555 if maxdepth:\n--> 556 return super().find(\n 557 path, maxdepth=maxdepth, withdirs=withdirs, detail=detail, refresh=refresh, revision=revision, **kwargs\n 558 )\n\n/usr/local/lib/python3.11/dist-packages/fsspec/spec.py in find(self, path, maxdepth, withdirs, detail, **kwargs)\n 498 # This is needed for posix glob compliance\n 499 if withdirs and path != \"\" and self.isdir(path):\n--> 500 out[path] = self.info(path)\n 501 \n 502 for _, dirs, files in self.walk(path, maxdepth, detail=True, **kwargs):\n\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/hf_file_system.py in info(self, path, refresh, revision, **kwargs)\n 717 out = out1[0]\n 718 if refresh or out is None or (expand_info and out and out[\"last_commit\"] is None):\n--> 719 paths_info = self._api.get_paths_info(\n 720 resolved_path.repo_id,\n 721 resolved_path.path_in_repo,\n\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/utils/_validators.py in _inner_fn(*args, **kwargs)\n 112 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)\n 113 \n--> 114 return fn(*args, **kwargs)\n 115 \n 116 return _inner_fn # type: ignore\n\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/hf_api.py in get_paths_info(self, repo_id, paths, expand, revision, repo_type, token)\n 3397 headers=headers,\n 3398 )\n-> 3399 hf_raise_for_status(response)\n 3400 paths_info = response.json()\n 3401 return [\n\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/utils/_http.py in hf_raise_for_status(response, endpoint_name)\n 463 f\"\\n\\nBad request for {endpoint_name} endpoint:\" if endpoint_name is not None else \"\\n\\nBad request:\"\n 464 )\n--> 465 raise _format(BadRequestError, message, response) from e\n 466 \n 467 elif response.status_code == 403:\n\nBadRequestError: (Request ID: Root=1-687a3201-087954b9245ab59672e6068e;d5bb4dbe-03e1-4912-bcec-5964c017b920)\n\nBad request:\n* Invalid input: expected array, received string * at paths * Invalid input: expected boolean, received string * at expand\n✖ Invalid input: expected array, received string\n → at paths\n✖ Invalid input: expected boolean, re",
"Thanks for the report!\nThe issue has been fixed and should now work without any code changes 😄\nSorry for the inconvenience!\n\nClosing, please open again if needed.",
"Works for me. Thanks!\n",
"Yes Now it's works for me..Thanks\r\n\r\nOn Fri, 18 Jul 2025, 5:25 pm Karol Brejna, ***@***.***> wrote:\r\n\r\n> *karol-brejna-i* left a comment (huggingface/datasets#7689)\r\n> <https://github.com/huggingface/datasets/issues/7689#issuecomment-3089238320>\r\n>\r\n> Works for me. Thanks!\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/7689#issuecomment-3089238320>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AJRBXNEWBJ5UYVC2IRJM5DD3JDODZAVCNFSM6AAAAACB2FDG4GVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZTAOBZGIZTQMZSGA>\r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n>\r\n"
] | 2025-07-18T09:30:04 | 2025-07-18T11:59:51 | 2025-07-18T11:52:29 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Up until a couple days ago I was having no issues loading `Helsinki-NLP/europarl` and `Helsinki-NLP/un_pc`, but now suddenly I get the following error:
```
huggingface_hub.errors.BadRequestError: (Request ID: ...)
Bad request:
* Invalid input: expected array, received string * at paths * Invalid input: expected boolean, received string * at expand
✖ Invalid input: expected array, received string
→ at paths
✖ Invalid input: expected boolean, received string
→ at expand
```
I tried with both `4.0.0` and `3.5.1` since this dataset uses `trust_remote_code`, but I get the same error with both.
What can I do to load the dataset? I checked the documentation and GitHub issues here, but couldn't find a solution.
### Steps to reproduce the bug
```python
import datasets
ds = datasets.load_dataset("Helsinki-NLP/europarl", "en-fr", streaming=True, trust_remote_code=True)["train"]
```
### Expected behavior
That the dataset loads as it did a couple of days ago.
### Environment info
- `datasets` version: 3.5.1
- Platform: Linux-4.18.0-513.24.1.el8_9.x86_64-x86_64-with-glibc2.28
- Python version: 3.11.11
- `huggingface_hub` version: 0.30.2
- PyArrow version: 20.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17179696?v=4",
"events_url": "https://api.github.com/users/sergiopaniego/events{/privacy}",
"followers_url": "https://api.github.com/users/sergiopaniego/followers",
"following_url": "https://api.github.com/users/sergiopaniego/following{/other_user}",
"gists_url": "https://api.github.com/users/sergiopaniego/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sergiopaniego",
"id": 17179696,
"login": "sergiopaniego",
"node_id": "MDQ6VXNlcjE3MTc5Njk2",
"organizations_url": "https://api.github.com/users/sergiopaniego/orgs",
"received_events_url": "https://api.github.com/users/sergiopaniego/received_events",
"repos_url": "https://api.github.com/users/sergiopaniego/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sergiopaniego/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sergiopaniego/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sergiopaniego",
"user_view_type": "public"
}
|
{
"+1": 23,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 23,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7689/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7689/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7688
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7688/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7688/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7688/events
|
https://github.com/huggingface/datasets/issues/7688
| 3,238,851,443 |
I_kwDODunzps7BDPNz
| 7,688 |
No module named "distributed"
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/45058324?v=4",
"events_url": "https://api.github.com/users/yingtongxiong/events{/privacy}",
"followers_url": "https://api.github.com/users/yingtongxiong/followers",
"following_url": "https://api.github.com/users/yingtongxiong/following{/other_user}",
"gists_url": "https://api.github.com/users/yingtongxiong/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yingtongxiong",
"id": 45058324,
"login": "yingtongxiong",
"node_id": "MDQ6VXNlcjQ1MDU4MzI0",
"organizations_url": "https://api.github.com/users/yingtongxiong/orgs",
"received_events_url": "https://api.github.com/users/yingtongxiong/received_events",
"repos_url": "https://api.github.com/users/yingtongxiong/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yingtongxiong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yingtongxiong/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yingtongxiong",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[
"The error ModuleNotFoundError: No module named 'datasets.distributed' means your installed datasets library is too old or incompatible with the version of Library you are using(in my case it was BEIR). The datasets.distributed module was removed in recent versions of the datasets library.\n\nDowngrade datasets to version 2.14.6 : ! pip install datasets==2.14.6\n"
] | 2025-07-17T09:32:35 | 2025-07-21T13:50:27 | null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Hello, when I run the command `from datasets.distributed import split_dataset_by_node`, I always get the error "No module named 'datasets.distributed'" in different versions such as 4.0.0, 2.21.0, and so on. How can I solve this?
### Steps to reproduce the bug
1. pip install datasets
2. from datasets.distributed import split_dataset_by_node
### Expected behavior
Expecting the command `from datasets.distributed import split_dataset_by_node` to run successfully.
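For reference, a minimal sketch of the expected usage (an editorial illustration, assuming a `datasets` version where the `datasets.distributed` module is available, e.g. 2.8 or later):
```python
from datasets import Dataset
from datasets.distributed import split_dataset_by_node

ds = Dataset.from_dict({"x": list(range(8))})
# Each worker keeps only the shard assigned to its rank.
shard = split_dataset_by_node(ds, rank=0, world_size=2)
print(len(shard))  # 4
```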
### Environment info
python: 3.12
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7688/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7688/timeline
| null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7687
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7687/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7687/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7687/events
|
https://github.com/huggingface/datasets/issues/7687
| 3,238,760,301 |
I_kwDODunzps7BC49t
| 7,687 |
Datasets keeps rebuilding the dataset every time i call the python script
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/58883113?v=4",
"events_url": "https://api.github.com/users/CALEB789/events{/privacy}",
"followers_url": "https://api.github.com/users/CALEB789/followers",
"following_url": "https://api.github.com/users/CALEB789/following{/other_user}",
"gists_url": "https://api.github.com/users/CALEB789/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CALEB789",
"id": 58883113,
"login": "CALEB789",
"node_id": "MDQ6VXNlcjU4ODgzMTEz",
"organizations_url": "https://api.github.com/users/CALEB789/orgs",
"received_events_url": "https://api.github.com/users/CALEB789/received_events",
"repos_url": "https://api.github.com/users/CALEB789/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CALEB789/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CALEB789/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CALEB789",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[] | 2025-07-17T09:03:38 | 2025-07-17T09:03:38 | null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Every time the script runs, the number of samples somehow increases.
This can cause a 12 MB dataset to accumulate additional built versions of 400 MB+.
<img width="363" height="481" alt="Image" src="https://github.com/user-attachments/assets/766ce958-bd2b-41bc-b950-86710259bfdc" />
### Steps to reproduce the bug
```python
from datasets import load_dataset
s = load_dataset('~/.cache/huggingface/datasets/databricks___databricks-dolly-15k')['train']
```
1. A dataset needs to be available in the .cache folder
2. Run the code multiple times, and every time it runs, more versions are created
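A possible workaround for the steps above (an editorial sketch, not part of the original report): load the dataset by its Hub repository ID instead of pointing `load_dataset` at the cache directory, so the existing cached build is reused rather than being treated as a folder of raw data files and rebuilt:
```python
from datasets import load_dataset

# Referencing the repo ID lets `datasets` locate and reuse the build it already
# cached under ~/.cache/huggingface/datasets instead of rebuilding it each run.
s = load_dataset("databricks/databricks-dolly-15k", split="train")
```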
### Expected behavior
The number of samples should not increase every time the script runs.
### Environment info
- `datasets` version: 3.6.0
- Platform: Windows-11-10.0.26100-SP0
- Python version: 3.13.3
- `huggingface_hub` version: 0.32.3
- PyArrow version: 20.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2025.3.0
| null |
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7687/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7687/timeline
| null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7686
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7686/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7686/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7686/events
|
https://github.com/huggingface/datasets/issues/7686
| 3,237,201,090 |
I_kwDODunzps7A88TC
| 7,686 |
load_dataset does not check .no_exist files in the hub cache
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3627235?v=4",
"events_url": "https://api.github.com/users/jmaccarl/events{/privacy}",
"followers_url": "https://api.github.com/users/jmaccarl/followers",
"following_url": "https://api.github.com/users/jmaccarl/following{/other_user}",
"gists_url": "https://api.github.com/users/jmaccarl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jmaccarl",
"id": 3627235,
"login": "jmaccarl",
"node_id": "MDQ6VXNlcjM2MjcyMzU=",
"organizations_url": "https://api.github.com/users/jmaccarl/orgs",
"received_events_url": "https://api.github.com/users/jmaccarl/received_events",
"repos_url": "https://api.github.com/users/jmaccarl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jmaccarl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmaccarl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jmaccarl",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[] | 2025-07-16T20:04:00 | 2025-07-16T20:04:00 | null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I'm not entirely sure if this should be submitted as a bug in the `datasets` library or the `huggingface_hub` library, given it could be fixed at different levels of the stack.
The fundamental issue is that the `load_dataset` API doesn't use the `.no_exist` files in the hub cache, unlike other wrapper APIs that do. This is because `utils.file_utils.cached_path` directly calls `hf_hub_download` instead of using `file_download.try_to_load_from_cache` from `huggingface_hub` (see the `transformers` library's `utils.hub.cached_files` for one alternate example).
This results in unnecessary HTTP metadata requests on every call for files that don't exist. It neither generates the `.no_exist` cache files nor uses them.
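For context, here is a rough sketch (an editorial illustration, not code from the `datasets` library) of the kind of cache lookup that does honor `.no_exist` entries via `huggingface_hub`:
```python
from huggingface_hub import try_to_load_from_cache
from huggingface_hub.file_download import _CACHED_NO_EXIST

resolved = try_to_load_from_cache(
    repo_id="Salesforce/wikitext",
    filename="dataset_infos.json",
    repo_type="dataset",
    revision="b08601e04326c79dfdd32d625aee71d232d685c3",
)
if resolved is _CACHED_NO_EXIST:
    # A cached .no_exist entry: the file is known to be absent, no HTTP request needed.
    pass
elif resolved is None:
    # Nothing cached yet: this is where a metadata request / download would happen.
    pass
else:
    print("cached locally at", resolved)  # resolved is a local file path
```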
### Steps to reproduce the bug
Run the following snippet as one example (setting cache dirs to clean paths for clarity)
`env HF_HOME=~/local_hf_hub python repro.py`
```
from datasets import load_dataset
import huggingface_hub
# monkeypatch to print out metadata requests being made
original_get_hf_file_metadata = huggingface_hub.file_download.get_hf_file_metadata
def get_hf_file_metadata_wrapper(*args, **kwargs):
print("File metadata request made (get_hf_file_metadata):", args, kwargs)
return original_get_hf_file_metadata(*args, **kwargs)
# Apply the patch
huggingface_hub.file_download.get_hf_file_metadata = get_hf_file_metadata_wrapper
dataset = load_dataset(
"Salesforce/wikitext",
"wikitext-2-v1",
split="test",
trust_remote_code=True,
cache_dir="~/local_datasets",
revision="b08601e04326c79dfdd32d625aee71d232d685c3",
)
```
This may be called over and over again, and you will see the same calls for files that don't exist:
```
File metadata request made (get_hf_file_metadata): () {'url': 'https://huggingface.co/datasets/Salesforce/wikitext/resolve/b08601e04326c79dfdd32d625aee71d232d685c3/wikitext.py', 'proxies': None, 'timeout': 10, 'headers': {'user-agent': 'datasets/3.6.0; hf_hub/0.33.2; python/3.12.11; torch/2.7.0; huggingface_hub/0.33.2; pyarrow/20.0.0; jax/0.5.3'}, 'token': None}
File metadata request made (get_hf_file_metadata): () {'url': 'https://huggingface.co/datasets/Salesforce/wikitext/resolve/b08601e04326c79dfdd32d625aee71d232d685c3/.huggingface.yaml', 'proxies': None, 'timeout': 10, 'headers': {'user-agent': 'datasets/3.6.0; hf_hub/0.33.2; python/3.12.11; torch/2.7.0; huggingface_hub/0.33.2; pyarrow/20.0.0; jax/0.5.3'}, 'token': None}
File metadata request made (get_hf_file_metadata): () {'url': 'https://huggingface.co/datasets/Salesforce/wikitext/resolve/b08601e04326c79dfdd32d625aee71d232d685c3/dataset_infos.json', 'proxies': None, 'timeout': 10, 'headers': {'user-agent': 'datasets/3.6.0; hf_hub/0.33.2; python/3.12.11; torch/2.7.0; huggingface_hub/0.33.2; pyarrow/20.0.0; jax/0.5.3'}, 'token': None}
```
And you can see that the .no_exist folder is never created
```
$ ls ~/local_hf_hub/hub/datasets--Salesforce--wikitext/
blobs refs snapshots
```
### Expected behavior
The expected behavior is for the "File metadata request made" print to stop after the first call, and for the `.no_exist` directory and files to be populated under `~/local_hf_hub/hub/datasets--Salesforce--wikitext/`.
### Environment info
- `datasets` version: 3.6.0
- Platform: Linux-6.5.13-65-650-4141-22041-coreweave-amd64-85c45edc-x86_64-with-glibc2.35
- Python version: 3.12.11
- `huggingface_hub` version: 0.33.2
- PyArrow version: 20.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2024.9.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7686/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7686/timeline
| null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7685
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7685/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7685/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7685/events
|
https://github.com/huggingface/datasets/issues/7685
| 3,236,979,340 |
I_kwDODunzps7A8GKM
| 7,685 |
Inconsistent range request behavior for parquet REST api
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/21327470?v=4",
"events_url": "https://api.github.com/users/universalmind303/events{/privacy}",
"followers_url": "https://api.github.com/users/universalmind303/followers",
"following_url": "https://api.github.com/users/universalmind303/following{/other_user}",
"gists_url": "https://api.github.com/users/universalmind303/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/universalmind303",
"id": 21327470,
"login": "universalmind303",
"node_id": "MDQ6VXNlcjIxMzI3NDcw",
"organizations_url": "https://api.github.com/users/universalmind303/orgs",
"received_events_url": "https://api.github.com/users/universalmind303/received_events",
"repos_url": "https://api.github.com/users/universalmind303/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/universalmind303/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/universalmind303/subscriptions",
"type": "User",
"url": "https://api.github.com/users/universalmind303",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[] | 2025-07-16T18:39:44 | 2025-07-16T18:41:53 | null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
First off, I do apologize if this is not the correct repo for submitting this issue. Please direct me to another one if it's more appropriate elsewhere.
The datasets REST API inconsistently returns `416 Range Not Satisfiable` when using a range request to fetch portions of the Parquet files. More often than not I see a 416, but at other times an identical request returns the data along with `206 Partial Content` as expected.
### Steps to reproduce the bug
Repeating this request multiple times will return either a 416 or a 206.
```sh
$ curl -v -L -H "Range: bytes=217875070-218006142" -o output.parquet "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet"
```
Note: this is not limited to just the above file, I tried with many different datasets and am able to consistently reproduce issue across multiple datasets.
When the 416 is returned, I get the following headers:
```
< HTTP/2 416
< content-type: text/html
< content-length: 49
< server: CloudFront
< date: Wed, 16 Jul 2025 14:58:43 GMT
< expires: Wed, 16 Jul 2025 14:58:43 GMT
< content-range: bytes */177
< x-cache: Error from cloudfront
< via: 1.1 873527676a354c5998cad133525df9c0.cloudfront.net (CloudFront)
<
```
This suggests to me that there is likely a CDN/caching/routing issue and the request is not getting routed properly.
Full verbose output via curl.
<details>
❯ curl -v -L -H "Range: bytes=217875070-218006142" -o output.parquet "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Host huggingface.co:443 was resolved.
* IPv6: (none)
* IPv4: 18.160.102.96, 18.160.102.110, 18.160.102.4, 18.160.102.86
* Trying 18.160.102.96:443...
* Connected to huggingface.co (18.160.102.96) port 443
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
} [319 bytes data]
* CAfile: /etc/ssl/cert.pem
* CApath: none
* (304) (IN), TLS handshake, Server hello (2):
{ [122 bytes data]
* (304) (IN), TLS handshake, Unknown (8):
{ [19 bytes data]
* (304) (IN), TLS handshake, Certificate (11):
{ [3821 bytes data]
* (304) (IN), TLS handshake, CERT verify (15):
{ [264 bytes data]
* (304) (IN), TLS handshake, Finished (20):
{ [36 bytes data]
* (304) (OUT), TLS handshake, Finished (20):
} [36 bytes data]
* SSL connection using TLSv1.3 / AEAD-AES128-GCM-SHA256 / [blank] / UNDEF
* ALPN: server accepted h2
* Server certificate:
* subject: CN=huggingface.co
* start date: Apr 13 00:00:00 2025 GMT
* expire date: May 12 23:59:59 2026 GMT
* subjectAltName: host "huggingface.co" matched cert's "huggingface.co"
* issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M02
* SSL certificate verify ok.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: huggingface.co]
* [HTTP/2] [1] [:path: /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet]
* [HTTP/2] [1] [user-agent: curl/8.7.1]
* [HTTP/2] [1] [accept: */*]
* [HTTP/2] [1] [range: bytes=217875070-218006142]
> GET /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet HTTP/2
> Host: huggingface.co
> User-Agent: curl/8.7.1
> Accept: */*
> Range: bytes=217875070-218006142
>
* Request completely sent off
< HTTP/2 416
< content-type: text/html
< content-length: 49
< server: CloudFront
< date: Wed, 16 Jul 2025 14:58:41 GMT
< expires: Wed, 16 Jul 2025 14:58:41 GMT
< content-range: bytes */177
< x-cache: Error from cloudfront
< via: 1.1 e2f1bed2f82641d6d5439eac20a790ba.cloudfront.net (CloudFront)
< x-amz-cf-pop: MSP50-P1
< x-amz-cf-id: Mo8hn-EZLJqE_hoBday8DdhmVXhV3v9-Wg-EEHI6gX_fNlkanVIUBA==
<
{ [49 bytes data]
100 49 100 49 0 0 2215 0 --:--:-- --:--:-- --:--:-- 2227
* Connection #0 to host huggingface.co left intact
(.venv) Daft main** ≡❯ curl -v -L -H "Range: bytes=217875070-218006142" -o output.parquet "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Host huggingface.co:443 was resolved.
* IPv6: (none)
* IPv4: 18.160.102.96, 18.160.102.110, 18.160.102.4, 18.160.102.86
* Trying 18.160.102.96:443...
* Connected to huggingface.co (18.160.102.96) port 443
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
} [319 bytes data]
* CAfile: /etc/ssl/cert.pem
* CApath: none
* (304) (IN), TLS handshake, Server hello (2):
{ [122 bytes data]
* (304) (IN), TLS handshake, Unknown (8):
{ [19 bytes data]
* (304) (IN), TLS handshake, Certificate (11):
{ [3821 bytes data]
* (304) (IN), TLS handshake, CERT verify (15):
{ [264 bytes data]
* (304) (IN), TLS handshake, Finished (20):
{ [36 bytes data]
* (304) (OUT), TLS handshake, Finished (20):
} [36 bytes data]
* SSL connection using TLSv1.3 / AEAD-AES128-GCM-SHA256 / [blank] / UNDEF
* ALPN: server accepted h2
* Server certificate:
* subject: CN=huggingface.co
* start date: Apr 13 00:00:00 2025 GMT
* expire date: May 12 23:59:59 2026 GMT
* subjectAltName: host "huggingface.co" matched cert's "huggingface.co"
* issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M02
* SSL certificate verify ok.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: huggingface.co]
* [HTTP/2] [1] [:path: /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet]
* [HTTP/2] [1] [user-agent: curl/8.7.1]
* [HTTP/2] [1] [accept: */*]
* [HTTP/2] [1] [range: bytes=217875070-218006142]
> GET /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet HTTP/2
> Host: huggingface.co
> User-Agent: curl/8.7.1
> Accept: */*
> Range: bytes=217875070-218006142
>
* Request completely sent off
< HTTP/2 416
< content-type: text/html
< content-length: 49
< server: CloudFront
< date: Wed, 16 Jul 2025 14:58:42 GMT
< expires: Wed, 16 Jul 2025 14:58:42 GMT
< content-range: bytes */177
< x-cache: Error from cloudfront
< via: 1.1 bb352451e1eacf85f8786ee3ecd07eca.cloudfront.net (CloudFront)
< x-amz-cf-pop: MSP50-P1
< x-amz-cf-id: 9xy-CX9KvlS8Ye4eFr8jXMDobZHFkvdyvkLJGmK_qiwZQywCCwfq7Q==
<
{ [49 bytes data]
100 49 100 49 0 0 2381 0 --:--:-- --:--:-- --:--:-- 2450
* Connection #0 to host huggingface.co left intact
(.venv) Daft main** ≡❯ curl -v -L -H "Range: bytes=217875070-218006142" -o output.parquet "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Host huggingface.co:443 was resolved.
* IPv6: (none)
* IPv4: 18.160.102.96, 18.160.102.110, 18.160.102.4, 18.160.102.86
* Trying 18.160.102.96:443...
* Connected to huggingface.co (18.160.102.96) port 443
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
} [319 bytes data]
* CAfile: /etc/ssl/cert.pem
* CApath: none
* (304) (IN), TLS handshake, Server hello (2):
{ [122 bytes data]
* (304) (IN), TLS handshake, Unknown (8):
{ [19 bytes data]
* (304) (IN), TLS handshake, Certificate (11):
{ [3821 bytes data]
* (304) (IN), TLS handshake, CERT verify (15):
{ [264 bytes data]
* (304) (IN), TLS handshake, Finished (20):
{ [36 bytes data]
* (304) (OUT), TLS handshake, Finished (20):
} [36 bytes data]
* SSL connection using TLSv1.3 / AEAD-AES128-GCM-SHA256 / [blank] / UNDEF
* ALPN: server accepted h2
* Server certificate:
* subject: CN=huggingface.co
* start date: Apr 13 00:00:00 2025 GMT
* expire date: May 12 23:59:59 2026 GMT
* subjectAltName: host "huggingface.co" matched cert's "huggingface.co"
* issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M02
* SSL certificate verify ok.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: huggingface.co]
* [HTTP/2] [1] [:path: /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet]
* [HTTP/2] [1] [user-agent: curl/8.7.1]
* [HTTP/2] [1] [accept: */*]
* [HTTP/2] [1] [range: bytes=217875070-218006142]
> GET /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet HTTP/2
> Host: huggingface.co
> User-Agent: curl/8.7.1
> Accept: */*
> Range: bytes=217875070-218006142
>
* Request completely sent off
< HTTP/2 416
< content-type: text/html
< content-length: 49
< server: CloudFront
< date: Wed, 16 Jul 2025 14:58:43 GMT
< expires: Wed, 16 Jul 2025 14:58:43 GMT
< content-range: bytes */177
< x-cache: Error from cloudfront
< via: 1.1 873527676a354c5998cad133525df9c0.cloudfront.net (CloudFront)
< x-amz-cf-pop: MSP50-P1
< x-amz-cf-id: wtBgwY4u4YJ2pD1ovM8UV770UiJoqWfs7i7VzschDyoLv5g7swGGmw==
<
{ [49 bytes data]
100 49 100 49 0 0 2273 0 --:--:-- --:--:-- --:--:-- 2333
* Connection #0 to host huggingface.co left intact
(.venv) Daft main** ≡❯ curl -v -L -H "Range: bytes=217875070-218006142" -o output.parquet "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Host huggingface.co:443 was resolved.
* IPv6: (none)
* IPv4: 18.160.102.96, 18.160.102.110, 18.160.102.4, 18.160.102.86
* Trying 18.160.102.96:443...
* Connected to huggingface.co (18.160.102.96) port 443
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
} [319 bytes data]
* CAfile: /etc/ssl/cert.pem
* CApath: none
* (304) (IN), TLS handshake, Server hello (2):
{ [122 bytes data]
* (304) (IN), TLS handshake, Unknown (8):
{ [19 bytes data]
* (304) (IN), TLS handshake, Certificate (11):
{ [3821 bytes data]
* (304) (IN), TLS handshake, CERT verify (15):
{ [264 bytes data]
* (304) (IN), TLS handshake, Finished (20):
{ [36 bytes data]
* (304) (OUT), TLS handshake, Finished (20):
} [36 bytes data]
* SSL connection using TLSv1.3 / AEAD-AES128-GCM-SHA256 / [blank] / UNDEF
* ALPN: server accepted h2
* Server certificate:
* subject: CN=huggingface.co
* start date: Apr 13 00:00:00 2025 GMT
* expire date: May 12 23:59:59 2026 GMT
* subjectAltName: host "huggingface.co" matched cert's "huggingface.co"
* issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M02
* SSL certificate verify ok.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: huggingface.co]
* [HTTP/2] [1] [:path: /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet]
* [HTTP/2] [1] [user-agent: curl/8.7.1]
* [HTTP/2] [1] [accept: */*]
* [HTTP/2] [1] [range: bytes=217875070-218006142]
> GET /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet HTTP/2
> Host: huggingface.co
> User-Agent: curl/8.7.1
> Accept: */*
> Range: bytes=217875070-218006142
>
* Request completely sent off
< HTTP/2 302
< content-type: text/plain; charset=utf-8
< content-length: 177
< location: https://huggingface.co/datasets/HuggingFaceTB/smoltalk2/resolve/refs%2Fconvert%2Fparquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0000.parquet
< date: Wed, 16 Jul 2025 14:58:44 GMT
< x-powered-by: huggingface-moon
< cross-origin-opener-policy: same-origin
< referrer-policy: strict-origin-when-cross-origin
< x-request-id: Root=1-6877be24-476860f03849cb1a1570c9d8
< access-control-allow-origin: https://huggingface.co
< access-control-expose-headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,X-Total-Count,ETag,Link,Accept-Ranges,Content-Range,X-Linked-Size,X-Linked-ETag,X-Xet-Hash
< set-cookie: token=; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Secure; SameSite=None
< set-cookie: token=; Domain=huggingface.co; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Secure; SameSite=Lax
< x-cache: Miss from cloudfront
< via: 1.1 dd5af138aa8a11d8a70d5ef690ad1a2a.cloudfront.net (CloudFront)
< x-amz-cf-pop: MSP50-P1
< x-amz-cf-id: xuSi0X5RpH1OZqQOM8gGQLQLU8eOM6Gbkk-bgIX_qBnTTaa1VNkExA==
<
* Ignoring the response-body
100 177 100 177 0 0 2021 0 --:--:-- --:--:-- --:--:-- 2034
* Connection #0 to host huggingface.co left intact
* Issue another request to this URL: 'https://huggingface.co/datasets/HuggingFaceTB/smoltalk2/resolve/refs%2Fconvert%2Fparquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0000.parquet'
* Found bundle for host: 0x600002d54570 [can multiplex]
* Re-using existing connection with host huggingface.co
* [HTTP/2] [3] OPENED stream for https://huggingface.co/datasets/HuggingFaceTB/smoltalk2/resolve/refs%2Fconvert%2Fparquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0000.parquet
* [HTTP/2] [3] [:method: GET]
* [HTTP/2] [3] [:scheme: https]
* [HTTP/2] [3] [:authority: huggingface.co]
* [HTTP/2] [3] [:path: /datasets/HuggingFaceTB/smoltalk2/resolve/refs%2Fconvert%2Fparquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0000.parquet]
* [HTTP/2] [3] [user-agent: curl/8.7.1]
* [HTTP/2] [3] [accept: */*]
* [HTTP/2] [3] [range: bytes=217875070-218006142]
> GET /datasets/HuggingFaceTB/smoltalk2/resolve/refs%2Fconvert%2Fparquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0000.parquet HTTP/2
> Host: huggingface.co
> User-Agent: curl/8.7.1
> Accept: */*
> Range: bytes=217875070-218006142
>
* Request completely sent off
< HTTP/2 302
< content-type: text/plain; charset=utf-8
< content-length: 1317
< location: https://cas-bridge.xethub.hf.co/xet-bridge-us/686fc33898943c873b45c9a0/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas%2F20250716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250716T145416Z&X-Amz-Expires=3600&X-Amz-Signature=21a15b50740d73fd8ce82d5105733ca067d2e612ada22570e09e93ebcc7f8842&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=public&response-content-disposition=inline%3B+filename*%3DUTF-8%27%270000.parquet%3B+filename%3D%220000.parquet%22%3B&x-id=GetObject&Expires=1752681256&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjY4MTI1Nn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82ODZmYzMzODk4OTQzYzg3M2I0NWM5YTAvY2Y4YTNhNTY2NWNmOGIyZmY2NjdmYjUyMzZhMWU1Y2IxM2M3NTgyOTU1Zjk1MzNjODhlMTM4Nzk5N2VmM2FmOSoifV19&Signature=Tl3xorJ-7yaWvG6Y1AhhRlV2Wko9QpoK1tdPOfNZaRbHo%7EdaAkJRJfcLAYD5YzozfHWBZMLlJsaMPJ1MAne21nr5%7E737sE6yLfBwHdP3ZFZhgrLsN%7EvkIWK2GYX543qTg-pVsf3it92w1oWyoyYNQ9srxLfEIuG2AKV2Nu3Ejl7S%7EaAq4Gv4jNemvRTLBFGgYPdUeuavudl4OD4RGkSGTnpzh-P-OBk5WvgpdZZnbb1cRAP73tFHsPDX4%7ETfQIor109G%7E0TB3Jq0wopO9WV0sMQyQs9peZc6bxONiTxb9aHM4yNvWNbVGtlPuC6YS4c9T1e9%7EehdgU4sDOI%7EhpaCvg__&Key-Pair-Id=K2L8F4GPSG1IFC
< date: Wed, 16 Jul 2025 14:58:44 GMT
< x-powered-by: huggingface-moon
< cross-origin-opener-policy: same-origin
< referrer-policy: strict-origin-when-cross-origin
< x-request-id: Root=1-6877be24-4f628b292dc8a7a5339c41d3
< access-control-allow-origin: https://huggingface.co
< vary: Origin, Accept
< access-control-expose-headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,X-Total-Count,ETag,Link,Accept-Ranges,Content-Range,X-Linked-Size,X-Linked-ETag,X-Xet-Hash
< set-cookie: token=; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Secure; SameSite=None
< set-cookie: token=; Domain=huggingface.co; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Secure; SameSite=Lax
< x-repo-commit: 712df366ffbc959d9f4279bf2da579230b7ca5d8
< accept-ranges: bytes
< x-linked-size: 218006142
< x-linked-etag: "01736bf26d0046ddec4ab8900fba3f0dc6500b038314b44d0edb73a7c88dec07"
< x-xet-hash: cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9
< link: <https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/xet-read-token/712df366ffbc959d9f4279bf2da579230b7ca5d8>; rel="xet-auth", <https://cas-server.xethub.hf.co/reconstruction/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9>; rel="xet-reconstruction-info"
< x-cache: Miss from cloudfront
< via: 1.1 dd5af138aa8a11d8a70d5ef690ad1a2a.cloudfront.net (CloudFront)
< x-amz-cf-pop: MSP50-P1
< x-amz-cf-id: 0qXw2sJGrWCLVt7c-Vtn09uE3nu6CrJw9RmAKvNr_flG75muclvlIg==
<
* Ignoring the response-body
100 1317 100 1317 0 0 9268 0 --:--:-- --:--:-- --:--:-- 9268
* Connection #0 to host huggingface.co left intact
* Issue another request to this URL: 'https://cas-bridge.xethub.hf.co/xet-bridge-us/686fc33898943c873b45c9a0/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas%2F20250716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250716T145416Z&X-Amz-Expires=3600&X-Amz-Signature=21a15b50740d73fd8ce82d5105733ca067d2e612ada22570e09e93ebcc7f8842&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=public&response-content-disposition=inline%3B+filename*%3DUTF-8%27%270000.parquet%3B+filename%3D%220000.parquet%22%3B&x-id=GetObject&Expires=1752681256&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjY4MTI1Nn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82ODZmYzMzODk4OTQzYzg3M2I0NWM5YTAvY2Y4YTNhNTY2NWNmOGIyZmY2NjdmYjUyMzZhMWU1Y2IxM2M3NTgyOTU1Zjk1MzNjODhlMTM4Nzk5N2VmM2FmOSoifV19&Signature=Tl3xorJ-7yaWvG6Y1AhhRlV2Wko9QpoK1tdPOfNZaRbHo%7EdaAkJRJfcLAYD5YzozfHWBZMLlJsaMPJ1MAne21nr5%7E737sE6yLfBwHdP3ZFZhgrLsN%7EvkIWK2GYX543qTg-pVsf3it92w1oWyoyYNQ9srxLfEIuG2AKV2Nu3Ejl7S%7EaAq4Gv4jNemvRTLBFGgYPdUeuavudl4OD4RGkSGTnpzh-P-OBk5WvgpdZZnbb1cRAP73tFHsPDX4%7ETfQIor109G%7E0TB3Jq0wopO9WV0sMQyQs9peZc6bxONiTxb9aHM4yNvWNbVGtlPuC6YS4c9T1e9%7EehdgU4sDOI%7EhpaCvg__&Key-Pair-Id=K2L8F4GPSG1IFC'
* Host cas-bridge.xethub.hf.co:443 was resolved.
* IPv6: (none)
* IPv4: 18.160.181.55, 18.160.181.54, 18.160.181.52, 18.160.181.88
* Trying 18.160.181.55:443...
* Connected to cas-bridge.xethub.hf.co (18.160.181.55) port 443
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
} [328 bytes data]
* (304) (IN), TLS handshake, Server hello (2):
{ [122 bytes data]
* (304) (IN), TLS handshake, Unknown (8):
{ [19 bytes data]
* (304) (IN), TLS handshake, Certificate (11):
{ [3818 bytes data]
* (304) (IN), TLS handshake, CERT verify (15):
{ [264 bytes data]
* (304) (IN), TLS handshake, Finished (20):
{ [36 bytes data]
* (304) (OUT), TLS handshake, Finished (20):
} [36 bytes data]
* SSL connection using TLSv1.3 / AEAD-AES128-GCM-SHA256 / [blank] / UNDEF
* ALPN: server accepted h2
* Server certificate:
* subject: CN=cas-bridge.xethub.hf.co
* start date: Jun 4 00:00:00 2025 GMT
* expire date: Jul 3 23:59:59 2026 GMT
* subjectAltName: host "cas-bridge.xethub.hf.co" matched cert's "cas-bridge.xethub.hf.co"
* issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M04
* SSL certificate verify ok.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://cas-bridge.xethub.hf.co/xet-bridge-us/686fc33898943c873b45c9a0/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas%2F20250716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250716T145416Z&X-Amz-Expires=3600&X-Amz-Signature=21a15b50740d73fd8ce82d5105733ca067d2e612ada22570e09e93ebcc7f8842&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=public&response-content-disposition=inline%3B+filename*%3DUTF-8%27%270000.parquet%3B+filename%3D%220000.parquet%22%3B&x-id=GetObject&Expires=1752681256&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjY4MTI1Nn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82ODZmYzMzODk4OTQzYzg3M2I0NWM5YTAvY2Y4YTNhNTY2NWNmOGIyZmY2NjdmYjUyMzZhMWU1Y2IxM2M3NTgyOTU1Zjk1MzNjODhlMTM4Nzk5N2VmM2FmOSoifV19&Signature=Tl3xorJ-7yaWvG6Y1AhhRlV2Wko9QpoK1tdPOfNZaRbHo%7EdaAkJRJfcLAYD5YzozfHWBZMLlJsaMPJ1MAne21nr5%7E737sE6yLfBwHdP3ZFZhgrLsN%7EvkIWK2GYX543qTg-pVsf3it92w1oWyoyYNQ9srxLfEIuG2AKV2Nu3Ejl7S%7EaAq4Gv4jNemvRTLBFGgYPdUeuavudl4OD4RGkSGTnpzh-P-OBk5WvgpdZZnbb1cRAP73tFHsPDX4%7ETfQIor109G%7E0TB3Jq0wopO9WV0sMQyQs9peZc6bxONiTxb9aHM4yNvWNbVGtlPuC6YS4c9T1e9%7EehdgU4sDOI%7EhpaCvg__&Key-Pair-Id=K2L8F4GPSG1IFC
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: cas-bridge.xethub.hf.co]
* [HTTP/2] [1] [:path: /xet-bridge-us/686fc33898943c873b45c9a0/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas%2F20250716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250716T145416Z&X-Amz-Expires=3600&X-Amz-Signature=21a15b50740d73fd8ce82d5105733ca067d2e612ada22570e09e93ebcc7f8842&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=public&response-content-disposition=inline%3B+filename*%3DUTF-8%27%270000.parquet%3B+filename%3D%220000.parquet%22%3B&x-id=GetObject&Expires=1752681256&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjY4MTI1Nn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82ODZmYzMzODk4OTQzYzg3M2I0NWM5YTAvY2Y4YTNhNTY2NWNmOGIyZmY2NjdmYjUyMzZhMWU1Y2IxM2M3NTgyOTU1Zjk1MzNjODhlMTM4Nzk5N2VmM2FmOSoifV19&Signature=Tl3xorJ-7yaWvG6Y1AhhRlV2Wko9QpoK1tdPOfNZaRbHo%7EdaAkJRJfcLAYD5YzozfHWBZMLlJsaMPJ1MAne21nr5%7E737sE6yLfBwHdP3ZFZhgrLsN%7EvkIWK2GYX543qTg-pVsf3it92w1oWyoyYNQ9srxLfEIuG2AKV2Nu3Ejl7S%7EaAq4Gv4jNemvRTLBFGgYPdUeuavudl4OD4RGkSGTnpzh-P-OBk5WvgpdZZnbb1cRAP73tFHsPDX4%7ETfQIor109G%7E0TB3Jq0wopO9WV0sMQyQs9peZc6bxONiTxb9aHM4yNvWNbVGtlPuC6YS4c9T1e9%7EehdgU4sDOI%7EhpaCvg__&Key-Pair-Id=K2L8F4GPSG1IFC]
* [HTTP/2] [1] [user-agent: curl/8.7.1]
* [HTTP/2] [1] [accept: */*]
* [HTTP/2] [1] [range: bytes=217875070-218006142]
> GET /xet-bridge-us/686fc33898943c873b45c9a0/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas%2F20250716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250716T145416Z&X-Amz-Expires=3600&X-Amz-Signature=21a15b50740d73fd8ce82d5105733ca067d2e612ada22570e09e93ebcc7f8842&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=public&response-content-disposition=inline%3B+filename*%3DUTF-8%27%270000.parquet%3B+filename%3D%220000.parquet%22%3B&x-id=GetObject&Expires=1752681256&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjY4MTI1Nn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82ODZmYzMzODk4OTQzYzg3M2I0NWM5YTAvY2Y4YTNhNTY2NWNmOGIyZmY2NjdmYjUyMzZhMWU1Y2IxM2M3NTgyOTU1Zjk1MzNjODhlMTM4Nzk5N2VmM2FmOSoifV19&Signature=Tl3xorJ-7yaWvG6Y1AhhRlV2Wko9QpoK1tdPOfNZaRbHo%7EdaAkJRJfcLAYD5YzozfHWBZMLlJsaMPJ1MAne21nr5%7E737sE6yLfBwHdP3ZFZhgrLsN%7EvkIWK2GYX543qTg-pVsf3it92w1oWyoyYNQ9srxLfEIuG2AKV2Nu3Ejl7S%7EaAq4Gv4jNemvRTLBFGgYPdUeuavudl4OD4RGkSGTnpzh-P-OBk5WvgpdZZnbb1cRAP73tFHsPDX4%7ETfQIor109G%7E0TB3Jq0wopO9WV0sMQyQs9peZc6bxONiTxb9aHM4yNvWNbVGtlPuC6YS4c9T1e9%7EehdgU4sDOI%7EhpaCvg__&Key-Pair-Id=K2L8F4GPSG1IFC HTTP/2
> Host: cas-bridge.xethub.hf.co
> User-Agent: curl/8.7.1
> Accept: */*
> Range: bytes=217875070-218006142
>
* Request completely sent off
< HTTP/2 206
< content-length: 131072
< date: Mon, 14 Jul 2025 08:40:28 GMT
< x-request-id: 01K041FDPVA03RR2PRXDZSN30G
< content-disposition: inline; filename*=UTF-8''0000.parquet; filename="0000.parquet";
< cache-control: public, max-age=31536000
< etag: "cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9"
< access-control-allow-origin: *
< access-control-allow-headers: Content-Range, Content-Type, Content-Disposition, ETag
< access-control-expose-headers: Accept-Ranges, Content-Range, Content-Type, Content-Disposition, ETag, X-Cache
< x-cache: Hit from cloudfront
< via: 1.1 1c857e24a4dc84d2d9c78d5b3463bed6.cloudfront.net (CloudFront)
< x-amz-cf-pop: MSP50-P2
< x-amz-cf-id: 3SxFmQa5wLeeXbNiwaAo0_RwoR_n7-SivjsLjDLG-Pwn5UhG2oiEQA==
< age: 195496
< content-security-policy: default-src 'none'; sandbox
< content-range: bytes 217875070-218006141/218006142
<
{ [8192 bytes data]
100 128k 100 128k 0 0 769k 0 --:--:-- --:--:-- --:--:-- 769k
* Connection #1 to host cas-bridge.xethub.hf.co left intact
</details>
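In the meantime, a possible client-side workaround (an editorial sketch, not part of the original report): retry the ranged request a few times, since some CloudFront nodes intermittently answer 416 while others serve the redirect correctly.
```python
import requests

url = (
    "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/"
    "Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet"
)
headers = {"Range": "bytes=217875070-218006142"}

for attempt in range(5):
    r = requests.get(url, headers=headers, allow_redirects=True, timeout=60)
    if r.status_code == 206:
        with open("output.parquet", "wb") as f:
            f.write(r.content)
        break
else:
    raise RuntimeError(f"still no 206 after retries (last status {r.status_code})")
```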
### Expected behavior
Always get back a `206`.
### Environment info
n/a
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7685/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7685/timeline
| null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7684
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7684/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7684/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7684/events
|
https://github.com/huggingface/datasets/pull/7684
| 3,231,680,474 |
PR_kwDODunzps6e9SjQ
| 7,684 |
fix audio cast storage from array + sampling_rate
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7684). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-07-15T10:13:42 | 2025-07-15T10:24:08 | 2025-07-15T10:24:07 |
MEMBER
| null | null | null |
fix https://github.com/huggingface/datasets/issues/7682
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7684/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7684/timeline
| null | null | 0 |
{
"diff_url": "https://github.com/huggingface/datasets/pull/7684.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7684",
"merged_at": "2025-07-15T10:24:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7684.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7684"
}
| true |
https://api.github.com/repos/huggingface/datasets/issues/7683
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7683/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7683/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7683/events
|
https://github.com/huggingface/datasets/pull/7683
| 3,231,553,161 |
PR_kwDODunzps6e82iW
| 7,683 |
Convert to string when needed + faster .zstd
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7683). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-07-15T09:37:44 | 2025-07-15T10:13:58 | 2025-07-15T10:13:56 |
MEMBER
| null | null | null |
for https://huggingface.co/datasets/allenai/olmo-mix-1124
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7683/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7683/timeline
| null | null | 0 |
{
"diff_url": "https://github.com/huggingface/datasets/pull/7683.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7683",
"merged_at": "2025-07-15T10:13:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7683.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7683"
}
| true |
https://api.github.com/repos/huggingface/datasets/issues/7682
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7682/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7682/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7682/events
|
https://github.com/huggingface/datasets/issues/7682
| 3,229,687,253 |
I_kwDODunzps7AgR3V
| 7,682 |
Fail to cast Audio feature for numpy arrays in datasets 4.0.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/163345686?v=4",
"events_url": "https://api.github.com/users/luatil-cloud/events{/privacy}",
"followers_url": "https://api.github.com/users/luatil-cloud/followers",
"following_url": "https://api.github.com/users/luatil-cloud/following{/other_user}",
"gists_url": "https://api.github.com/users/luatil-cloud/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/luatil-cloud",
"id": 163345686,
"login": "luatil-cloud",
"node_id": "U_kgDOCbx1Fg",
"organizations_url": "https://api.github.com/users/luatil-cloud/orgs",
"received_events_url": "https://api.github.com/users/luatil-cloud/received_events",
"repos_url": "https://api.github.com/users/luatil-cloud/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/luatil-cloud/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luatil-cloud/subscriptions",
"type": "User",
"url": "https://api.github.com/users/luatil-cloud",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"thanks for reporting, I opened a PR and I'll make a patch release soon ",
"> thanks for reporting, I opened a PR and I'll make a patch release soon\n\nThank you very much @lhoestq!"
] | 2025-07-14T18:41:02 | 2025-07-15T12:10:39 | 2025-07-15T10:24:08 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Casting features with `Audio` for NumPy arrays (done here with `ds.map(gen_sine, features=features)`) fails
in version 4.0.0 but not in version 3.6.0.
### Steps to reproduce the bug
The following `uv` script should reproduce the bug in version 4.0.0
and pass in version 3.6.0 on macOS Sequoia 15.5:
```python
# /// script
# requires-python = ">=3.13"
# dependencies = [
# "datasets[audio]==4.0.0",
# "librosa>=0.11.0",
# ]
# ///
# NAME
# create_audio_dataset.py - create an audio dataset of sine waves
#
# SYNOPSIS
# uv run create_audio_dataset.py
#
# DESCRIPTION
# Create an audio dataset using the Hugging Face [datasets] library.
# Illustrates how to create synthetic audio datasets using the [map]
# datasets function.
#
# The strategy is to first create a dataset with the input to the
# generation function, then execute the map function that generates
# the result, and finally cast the final features.
#
# BUG
# Casting features with Audio for numpy arrays -
# done here with `ds.map(gen_sine, features=features)` fails
# in version 4.0.0 but not in version 3.6.0
#
# This happens both in cases where --extra audio is provided and where it is not.
# When audio is not provided i've installed the latest compatible version
# of soundfile.
#
# The error when soundfile is installed but the audio --extra is not
# indicates that the array values do not have the `.T` property,
# whilst also indicating that the value is a list instead of a numpy array.
#
# Last lines of error report when for datasets + soundfile case
# ...
#
# File "/Users/luasantilli/.cache/uv/archive-v0/tc_5IhQe7Zpw8ZXgQWpnl/lib/python3.13/site-packages/datasets/features/audio.py", line 239, in cast_storage
# storage = pa.array([Audio().encode_example(x) if x is not None else None for x in storage.to_pylist()])
# ~~~~~~~~~~~~~~~~~~~~~~^^^
# File "/Users/luasantilli/.cache/uv/archive-v0/tc_5IhQe7Zpw8ZXgQWpnl/lib/python3.13/site-packages/datasets/features/audio.py", line 122, in encode_example
# sf.write(buffer, value["array"].T, value["sampling_rate"], format="wav")
# ^^^^^^^^^^^^^^^^
# AttributeError: 'list' object has no attribute 'T'
# ...
#
# For the case of datasets[audio] without explicitly adding soundfile, I get an FFmpeg
# error.
#
# Last lines of error report:
#
# ...
# RuntimeError: Could not load libtorchcodec. Likely causes:
# 1. FFmpeg is not properly installed in your environment. We support
# versions 4, 5, 6 and 7.
# 2. The PyTorch version (2.7.1) is not compatible with
# this version of TorchCodec. Refer to the version compatibility
# table:
# https://github.com/pytorch/torchcodec?tab=readme-ov-file#installing-torchcodec.
# 3. Another runtime dependency; see exceptions below.
# The following exceptions were raised as we tried to load libtorchcodec:
#
# [start of libtorchcodec loading traceback]
# FFmpeg version 7: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder7.dylib, 0x0006): Library not loaded: @rpath/libavutil.59.dylib
# Referenced from: <6DB21246-F28A-31A6-910A-D8F3355D1064> /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder7.dylib
# Reason: no LC_RPATH's found
# FFmpeg version 6: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder6.dylib, 0x0006): Library not loaded: @rpath/libavutil.58.dylib
# Referenced from: <BD3B44FC-E14B-3ABF-800F-BB54B6CCA3B1> /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder6.dylib
# Reason: no LC_RPATH's found
# FFmpeg version 5: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder5.dylib, 0x0006): Library not loaded: @rpath/libavutil.57.dylib
# Referenced from: <F06EBF8A-238C-3A96-BFBB-B34E0BBDABF0> /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder5.dylib
# Reason: no LC_RPATH's found
# FFmpeg version 4: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder4.dylib, 0x0006): Library not loaded: @rpath/libavutil.56.dylib
# Referenced from: <6E59F017-C703-3AF6-A271-6277DD5F8170> /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder4.dylib
# Reason: no LC_RPATH's found
# ...
#
# This is strange because the same error does not happen when using version 3.6.0 with datasets[audio].
#
# The same error appears in Python 3.12
## Imports
import numpy as np
from datasets import Dataset, Features, Audio, Value
## Parameters
NUM_WAVES = 128
SAMPLE_RATE = 16_000
DURATION = 1.0
## Input dataset arguments
freqs = np.linspace(100, 2000, NUM_WAVES).tolist()
ds = Dataset.from_dict({"frequency": freqs})
## Features for the final dataset
features = Features(
{"frequency": Value("float32"), "audio": Audio(sampling_rate=SAMPLE_RATE)}
)
## Generate audio sine waves and cast features
def gen_sine(example):
t = np.linspace(0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)
wav = np.sin(2 * np.pi * example["frequency"] * t)
return {
"frequency": example["frequency"],
"audio": {"array": wav, "sampling_rate": SAMPLE_RATE},
}
ds = ds.map(gen_sine, features=features)
print(ds)
print(ds.features)
```
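Until a release with the fix is available, one possible workaround sketch (editorial, not from the original report; assumes `soundfile` is installed) is to pre-encode the waveform to WAV bytes so the `Audio` cast never sees a raw array:
```python
import io

import numpy as np
import soundfile as sf
from datasets import Audio, Dataset, Features, Value

SAMPLE_RATE = 16_000
DURATION = 1.0

def gen_sine_bytes(example):
    t = np.linspace(0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)
    wav = np.sin(2 * np.pi * example["frequency"] * t)
    buf = io.BytesIO()
    sf.write(buf, wav, SAMPLE_RATE, format="WAV")  # encode to WAV bytes up front
    return {"frequency": example["frequency"], "audio": {"bytes": buf.getvalue()}}

ds = Dataset.from_dict({"frequency": np.linspace(100, 2000, 8).tolist()})
features = Features({"frequency": Value("float32"), "audio": Audio(sampling_rate=SAMPLE_RATE)})
ds = ds.map(gen_sine_bytes, features=features)
```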
### Expected behavior
I expect the result of version `4.0.0` to be the same as that of version `3.6.0`. Switching the version
in the script above to `3.6.0`, I get the following expected result:
```
$ uv run bug_report.py
Map: 100%|███████████████████████████████████████████████████████| 128/128 [00:00<00:00, 204.58 examples/s]
Dataset({
features: ['frequency', 'audio'],
num_rows: 128
})
{'frequency': Value(dtype='float32', id=None), 'audio': Audio(sampling_rate=16000, mono=True, decode=True, id=None)}
```
### Environment info
- `datasets` version: 4.0.0
- Platform: macOS-15.5-arm64-arm-64bit-Mach-O
- Python version: 3.13.1
- `huggingface_hub` version: 0.33.4
- PyArrow version: 20.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.3.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7682/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7682/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7681
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7681/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7681/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7681/events
|
https://github.com/huggingface/datasets/issues/7681
| 3,227,112,736 |
I_kwDODunzps7AWdUg
| 7,681 |
Probabilistic High Memory Usage and Freeze on Python 3.10
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/82735346?v=4",
"events_url": "https://api.github.com/users/ryan-minato/events{/privacy}",
"followers_url": "https://api.github.com/users/ryan-minato/followers",
"following_url": "https://api.github.com/users/ryan-minato/following{/other_user}",
"gists_url": "https://api.github.com/users/ryan-minato/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ryan-minato",
"id": 82735346,
"login": "ryan-minato",
"node_id": "MDQ6VXNlcjgyNzM1MzQ2",
"organizations_url": "https://api.github.com/users/ryan-minato/orgs",
"received_events_url": "https://api.github.com/users/ryan-minato/received_events",
"repos_url": "https://api.github.com/users/ryan-minato/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ryan-minato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryan-minato/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ryan-minato",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[] | 2025-07-14T01:57:16 | 2025-07-14T01:57:16 | null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
A probabilistic issue is encountered when processing datasets containing PIL.Image columns with the huggingface/datasets library on Python 3.10. The process occasionally experiences a sudden and significant memory spike, reaching 100% utilization and leading to a complete freeze. During this freeze, the process becomes unresponsive, cannot be forcefully terminated, and does not throw any exceptions.
I have attempted to mitigate this issue by setting `datasets.config.IN_MEMORY_MAX_SIZE`, but it had no effect. In fact, based on the document of `load_dataset`, I suspect that setting `IN_MEMORY_MAX_SIZE` might even have a counterproductive effect.
This bug is not consistently reproducible, but its occurrence rate significantly decreases or disappears entirely when upgrading Python to version 3.11 or higher. Therefore, this issue also serves to share a potential solution for others who might encounter similar problems.
### Steps to reproduce the bug
Due to the probabilistic nature of this bug, consistent reproduction cannot be guaranteed for every run. However, in my environment, processing large datasets like timm/imagenet-1k-wds (whether reading, casting, or mapping) almost certainly triggers the issue at some point.
The probability of the issue occurring drastically increases when num_proc is set to a value greater than 1 during operations.
When the issue occurs, my system logs repeatedly show the following warnings:
```
WARN: very high memory utilization: 57.74GiB / 57.74GiB (100 %)
WARN: container is unhealthy: triggered memory limits (OOM)
WARN: container is unhealthy: triggered memory limits (OOM)
WARN: container is unhealthy: triggered memory limits (OOM)
```
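For context, a minimal sketch of the kind of workload described above — the dataset name and `num_proc > 1` come from this report, while the mapped column name and the `IN_MEMORY_MAX_SIZE` value are assumptions for illustration only:
```python
# Illustrative sketch only, not the exact code that triggered the freeze.
import datasets
from datasets import load_dataset

# Attempted mitigation mentioned above (reported as having no effect); the value is assumed.
datasets.config.IN_MEMORY_MAX_SIZE = 8 * 1024**3

# Access to this dataset may require accepting its terms on the Hub.
ds = load_dataset("timm/imagenet-1k-wds", split="train")
# Any operation that decodes the PIL.Image column; the "jpg" column name is an assumption.
ds = ds.map(lambda ex: {"width": ex["jpg"].width}, num_proc=4)
```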
### Expected behavior
The dataset should be read and processed normally without memory exhaustion or freezing. If an unrecoverable error occurs, an appropriate exception should be raised.
I have found that upgrading Python to version 3.11 or above completely resolves this issue. On Python 3.11, when memory usage approaches 100%, it suddenly drops before slowly increasing again. I suspect this behavior is due to an expected memory management action, possibly involving writing to disk cache, which prevents the complete freeze observed in Python 3.10.
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-5.15.0-71-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.33.4
- PyArrow version: 20.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.3.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7681/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7681/timeline
| null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7680
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7680/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7680/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7680/events
|
https://github.com/huggingface/datasets/issues/7680
| 3,224,824,151 |
I_kwDODunzps7ANulX
| 7,680 |
Question about iterable dataset and streaming
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/73541181?v=4",
"events_url": "https://api.github.com/users/Tavish9/events{/privacy}",
"followers_url": "https://api.github.com/users/Tavish9/followers",
"following_url": "https://api.github.com/users/Tavish9/following{/other_user}",
"gists_url": "https://api.github.com/users/Tavish9/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Tavish9",
"id": 73541181,
"login": "Tavish9",
"node_id": "MDQ6VXNlcjczNTQxMTgx",
"organizations_url": "https://api.github.com/users/Tavish9/orgs",
"received_events_url": "https://api.github.com/users/Tavish9/received_events",
"repos_url": "https://api.github.com/users/Tavish9/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Tavish9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tavish9/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Tavish9",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[
"> If we have already loaded the dataset, why doing to_iterable_dataset? Does it go through the dataset faster than map-style dataset?\n\nyes, it makes a faster DataLoader for example (otherwise DataLoader uses `__getitem__` which is slower than iterating)\n\n> load_dataset(streaming=True) is useful for huge dataset, but the speed is slow. How to make it comparable to to_iterable_dataset without loading the whole dataset into RAM?\n\nYou can aim for saturating your bandwidth using a DataLoader with num_workers and prefetch_factor. The maximum speed will be your internet bandwidth (unless your CPU is a bottlenbeck for CPU operations like image decoding).",
"> > If we have already loaded the dataset, why doing to_iterable_dataset? Does it go through the dataset faster than map-style dataset?\n> \n> yes, it makes a faster DataLoader for example (otherwise DataLoader uses `__getitem__` which is slower than iterating)\n\nOkay, but `__getitem__` seems suitable for distributed settings. A distributed sampler would dispatch distinct indexes to each rank (rank0 got 0,1,2,3, rank1 got 4,5,6,7), however, if we make it `to_iterable_dataset`, then each rank needs to iterate all the samples, making it slower (i,e, rank1 got 0,1,2,3, rank2 got 0,1,2,3,(4,5,6,7))\n\nWhat's your opinion here?",
"> however, if we make it to_iterable_dataset, then each rank needs to iterate all the samples, making it slower (i,e, rank1 got 0,1,2,3, rank2 got 0,1,2,3,(4,5,6,7))\n\nActually if you specify `to_iterable_dataset(num_shards=world_size)` (or a factor of world_size) and use a `torch.utils.data.DataLoader` then each rank will get a subset of the data thanks to the sharding. E.g. rank0 gets 0,1,2,3 and rank1 gets 4,5,6,7.\n\nThis is because `datasets.IterableDataset` subclasses `torch.utils.data.IterableDataset` and is aware of the current rank.",
"Got it, very nice features `num_shards` 👍🏻 \n\nI would benchmark `to_iterable_dataset(num_shards=world_size)` against traditional map-style one in distributed settings in the near future."
] | 2025-07-12T04:48:30 | 2025-07-15T13:39:38 | null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
In the doc, I found the following example: https://github.com/huggingface/datasets/blob/611f5a592359ebac6f858f515c776aa7d99838b2/docs/source/stream.mdx?plain=1#L65-L78
I am confused:
1. If we have already loaded the dataset, why call `to_iterable_dataset`? Does it go through the dataset faster than a map-style dataset?
2. `load_dataset(streaming=True)` is useful for huge datasets, but the speed is slow. How can I make it comparable to `to_iterable_dataset` without loading the whole dataset into RAM? (See the sketch below.)
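A sketch based on the answers in the comments above — the dataset name, shard count, and DataLoader settings are placeholders:
```python
# Convert a loaded map-style dataset into an iterable one with one shard per rank,
# then read it through a DataLoader with workers and prefetching to keep it fast.
from datasets import load_dataset
from torch.utils.data import DataLoader

world_size = 4  # assumed number of training ranks
ds = load_dataset("stanfordnlp/imdb", split="train")  # placeholder dataset
iterable_ds = ds.to_iterable_dataset(num_shards=world_size)
loader = DataLoader(iterable_ds, batch_size=32, num_workers=2, prefetch_factor=2)

for batch in loader:
    ...  # training step
```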
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7680/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7680/timeline
| null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7679
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7679/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7679/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7679/events
|
https://github.com/huggingface/datasets/issues/7679
| 3,220,787,371 |
I_kwDODunzps6_-VCr
| 7,679 |
metric glue breaks with 4.0.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"I released `evaluate` 0.4.5 yesterday to fix the issue - sorry for the inconvenience:\n\n```\npip install -U evaluate\n```",
"Thanks so much, @lhoestq!"
] | 2025-07-10T21:39:50 | 2025-07-11T17:42:01 | 2025-07-11T17:42:01 |
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
This worked fine with 3.6.0, but with 4.0.0 `eval_metric = metric.compute()` in HF Accelerate breaks.
The code that fails is:
https://huggingface.co/spaces/evaluate-metric/glue/blob/v0.4.0/glue.py#L84
```
def simple_accuracy(preds, labels):
print(preds, labels)
print(f"{preds==labels}")
return float((preds == labels).mean())
```
data:
```
Column([1, 0, 0, 1, 1]) Column([1, 0, 0, 1, 0])
False
```
```
[rank0]: return float((preds == labels).mean())
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^
[rank0]: AttributeError: 'bool' object has no attribute 'mean'
```
Some behavior has changed in this new major release of `datasets`, and it requires updating HF Accelerate and perhaps the GLUE metric code; all of these belong to HF.
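The comments above point to upgrading `evaluate` to 0.4.5 as the fix; a possible local workaround sketch (an assumption on my side, not the official patch) is to convert the `Column` objects to NumPy arrays before comparing:
```python
# Workaround sketch: with datasets 4.0, `preds` and `labels` arrive as Column objects,
# so `preds == labels` returns a single bool; converting to NumPy arrays restores the
# element-wise comparison that `.mean()` expects.
import numpy as np

def simple_accuracy(preds, labels):
    preds = np.asarray(preds)
    labels = np.asarray(labels)
    return float((preds == labels).mean())
```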
### Environment info
datasets=4.0.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7679/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7679/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7678
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7678/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7678/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7678/events
|
https://github.com/huggingface/datasets/issues/7678
| 3,218,625,544 |
I_kwDODunzps6_2FQI
| 7,678 |
To support decoding audio data, please install 'torchcodec'.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/48163702?v=4",
"events_url": "https://api.github.com/users/alpcansoydas/events{/privacy}",
"followers_url": "https://api.github.com/users/alpcansoydas/followers",
"following_url": "https://api.github.com/users/alpcansoydas/following{/other_user}",
"gists_url": "https://api.github.com/users/alpcansoydas/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alpcansoydas",
"id": 48163702,
"login": "alpcansoydas",
"node_id": "MDQ6VXNlcjQ4MTYzNzAy",
"organizations_url": "https://api.github.com/users/alpcansoydas/orgs",
"received_events_url": "https://api.github.com/users/alpcansoydas/received_events",
"repos_url": "https://api.github.com/users/alpcansoydas/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alpcansoydas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alpcansoydas/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alpcansoydas",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi ! yes you should `!pip install -U datasets[audio]` to have the required dependencies.\n\n`datasets` 4.0 now relies on `torchcodec` for audio decoding. The `torchcodec` AudioDecoder enables streaming from HF and also allows to decode ranges of audio",
"Same issues on Colab.\n\n> !pip install -U datasets[audio] \n\nThis works for me. Thanks."
] | 2025-07-10T09:43:13 | 2025-07-22T03:46:52 | 2025-07-11T05:05:42 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
In the latest version, datasets==4.0.0, I cannot print the audio data in a Colab notebook, but it works with version 3.6.0.
```
!pip install -q -U datasets huggingface_hub fsspec
from datasets import load_dataset
downloaded_dataset = load_dataset("ymoslem/MediaSpeech", "tr", split="train")
print(downloaded_dataset["audio"][0])
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
[/tmp/ipython-input-4-90623240.py](https://localhost:8080/#) in <cell line: 0>()
----> 1 downloaded_dataset["audio"][0]
10 frames
[/usr/local/lib/python3.11/dist-packages/datasets/features/audio.py](https://localhost:8080/#) in decode_example(self, value, token_per_repo_id)
170 from ._torchcodec import AudioDecoder
171 else:
--> 172 raise ImportError("To support decoding audio data, please install 'torchcodec'.")
173
174 if not self.decode:
ImportError: To support decoding audio data, please install 'torchcodec'.
```
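The resolution from the comments above is to install the audio extra, which pulls in `torchcodec`; a minimal sketch of that fix for a Colab cell:
```python
# Run once, then restart the runtime:
# !pip install -U "datasets[audio]"
from datasets import load_dataset

downloaded_dataset = load_dataset("ymoslem/MediaSpeech", "tr", split="train")
print(downloaded_dataset["audio"][0])
```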
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-6.1.123+-x86_64-with-glibc2.35
- Python version: 3.11.13
- `huggingface_hub` version: 0.33.2
- PyArrow version: 18.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2025.3.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/48163702?v=4",
"events_url": "https://api.github.com/users/alpcansoydas/events{/privacy}",
"followers_url": "https://api.github.com/users/alpcansoydas/followers",
"following_url": "https://api.github.com/users/alpcansoydas/following{/other_user}",
"gists_url": "https://api.github.com/users/alpcansoydas/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alpcansoydas",
"id": 48163702,
"login": "alpcansoydas",
"node_id": "MDQ6VXNlcjQ4MTYzNzAy",
"organizations_url": "https://api.github.com/users/alpcansoydas/orgs",
"received_events_url": "https://api.github.com/users/alpcansoydas/received_events",
"repos_url": "https://api.github.com/users/alpcansoydas/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alpcansoydas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alpcansoydas/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alpcansoydas",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7678/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7678/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7677
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7677/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7677/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7677/events
|
https://github.com/huggingface/datasets/issues/7677
| 3,218,044,656 |
I_kwDODunzps6_z3bw
| 7,677 |
Toxicity fails with datasets 4.0.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/82044803?v=4",
"events_url": "https://api.github.com/users/serena-ruan/events{/privacy}",
"followers_url": "https://api.github.com/users/serena-ruan/followers",
"following_url": "https://api.github.com/users/serena-ruan/following{/other_user}",
"gists_url": "https://api.github.com/users/serena-ruan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/serena-ruan",
"id": 82044803,
"login": "serena-ruan",
"node_id": "MDQ6VXNlcjgyMDQ0ODAz",
"organizations_url": "https://api.github.com/users/serena-ruan/orgs",
"received_events_url": "https://api.github.com/users/serena-ruan/received_events",
"repos_url": "https://api.github.com/users/serena-ruan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/serena-ruan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/serena-ruan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/serena-ruan",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi ! You can fix this by upgrading `evaluate`:\n\n```\npip install -U evaluate\n```",
"Thanks, verified evaluate 0.4.5 works!"
] | 2025-07-10T06:15:22 | 2025-07-11T04:40:59 | 2025-07-11T04:40:59 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
With the latest 4.0.0 release, the Hugging Face toxicity evaluation module fails with the error: `ValueError: text input must be of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples).`
### Steps to reproduce the bug
Repro:
```
>>> toxicity.compute(predictions=["This is a response"])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/evaluate/module.py", line 467, in compute
output = self._compute(**inputs, **compute_kwargs)
File "/Users/serena.ruan/.cache/huggingface/modules/evaluate_modules/metrics/evaluate-measurement--toxicity/2390290fa0bf6d78480143547c6b08f3d4f8805b249df8c7a8e80d0ce8e3778b/toxicity.py", line 135, in _compute
scores = toxicity(predictions, self.toxic_classifier, toxic_label)
File "/Users/serena.ruan/.cache/huggingface/modules/evaluate_modules/metrics/evaluate-measurement--toxicity/2390290fa0bf6d78480143547c6b08f3d4f8805b249df8c7a8e80d0ce8e3778b/toxicity.py", line 103, in toxicity
for pred_toxic in toxic_classifier(preds):
File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/pipelines/text_classification.py", line 159, in __call__
result = super().__call__(*inputs, **kwargs)
File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1431, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1437, in run_single
model_inputs = self.preprocess(inputs, **preprocess_params)
File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/pipelines/text_classification.py", line 183, in preprocess
return self.tokenizer(inputs, return_tensors=return_tensors, **tokenizer_kwargs)
File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2867, in __call__
encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2927, in _call_one
raise ValueError(
ValueError: text input must be of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples).
```
### Expected behavior
This worked before the 4.0.0 release.
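Per the comments above, upgrading `evaluate` to 0.4.5 resolves this; a minimal sketch of that resolution:
```python
# With evaluate >= 0.4.5 (pip install -U evaluate) this call works again on datasets 4.0.
import evaluate

toxicity = evaluate.load("toxicity", module_type="measurement")
results = toxicity.compute(predictions=["This is a response"])
print(results["toxicity"])
```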
### Environment info
- `datasets` version: 4.0.0
- Platform: macOS-15.5-arm64-arm-64bit
- Python version: 3.10.16
- `huggingface_hub` version: 0.33.0
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/82044803?v=4",
"events_url": "https://api.github.com/users/serena-ruan/events{/privacy}",
"followers_url": "https://api.github.com/users/serena-ruan/followers",
"following_url": "https://api.github.com/users/serena-ruan/following{/other_user}",
"gists_url": "https://api.github.com/users/serena-ruan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/serena-ruan",
"id": 82044803,
"login": "serena-ruan",
"node_id": "MDQ6VXNlcjgyMDQ0ODAz",
"organizations_url": "https://api.github.com/users/serena-ruan/orgs",
"received_events_url": "https://api.github.com/users/serena-ruan/received_events",
"repos_url": "https://api.github.com/users/serena-ruan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/serena-ruan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/serena-ruan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/serena-ruan",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7677/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7677/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7676
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7676/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7676/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7676/events
|
https://github.com/huggingface/datasets/issues/7676
| 3,216,857,559 |
I_kwDODunzps6_vVnX
| 7,676 |
Many things broken since the new 4.0.0 release
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/37179323?v=4",
"events_url": "https://api.github.com/users/mobicham/events{/privacy}",
"followers_url": "https://api.github.com/users/mobicham/followers",
"following_url": "https://api.github.com/users/mobicham/following{/other_user}",
"gists_url": "https://api.github.com/users/mobicham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mobicham",
"id": 37179323,
"login": "mobicham",
"node_id": "MDQ6VXNlcjM3MTc5MzIz",
"organizations_url": "https://api.github.com/users/mobicham/orgs",
"received_events_url": "https://api.github.com/users/mobicham/received_events",
"repos_url": "https://api.github.com/users/mobicham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mobicham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mobicham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mobicham",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[
"Happy to take a look, do you have a list of impacted datasets ?",
"Thanks @lhoestq , related to lm-eval, at least `winogrande`, `mmlu` and `hellaswag`, based on my tests yesterday. But many others like <a href=\"https://huggingface.co/datasets/lukaemon/bbh\">bbh</a>, most probably others too. ",
"Hi @mobicham ,\n\nI was having the same issue `ValueError: Feature type 'List' not found` yesterday, when I tried to load my dataset using the `load_dataset()` function.\nBy updating to `4.0.0`, I don't see this error anymore.\n\np.s. I used `Sequence` in replace of list when building my dataset (see below)\n```\nfeatures = Features({\n ...\n \"objects\": Sequence({\n \"id\": Value(\"int64\"),\n \"bbox\": Sequence(Value(\"float32\"), length=4),\n \"category\": Value(\"string\")\n }),\n ...\n})\ndataset = Dataset.from_dict(data_dict)\ndataset = dataset.cast(features)\n\n``` \n",
"The issue comes from [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train), [allenai/winogrande](https://huggingface.co/datasets/allenai/winogrande), [lukaemon/bbh](https://huggingface.co/datasets/lukaemon/bbh) and [Rowan/hellaswag](https://huggingface.co/datasets/Rowan/hellaswag) which are all unsupported in `datasets` 4.0 since they are based on python scripts. Fortunately there are PRs to fix those datasets (I did some of them a year ago but dataset authors haven't merged yet... will have to ping people again about it and update here):\n\n- https://huggingface.co/datasets/hails/mmlu_no_train/discussions/2 merged ! ✅ \n- https://huggingface.co/datasets/allenai/winogrande/discussions/6 merged ! ✅ \n- https://huggingface.co/datasets/Rowan/hellaswag/discussions/7 merged ! ✅ \n- https://huggingface.co/datasets/lukaemon/bbh/discussions/2 merged ! ✅ ",
"Thank you very much @lhoestq , I will try next week 👍 ",
"I get this error when using datasets 3.5.1 to load a dataset saved with datasets 4.0.0. If you are hitting this issue, make sure that both dataset saving code and the loading code are <4.0.0 or >=4.0.0.",
"This broke several lm-eval-harness workflows for me and reverting to older versions of datasets is not fixing the issue, does anyone have a workaround?",
"> I get this error when using datasets 3.5.1 to load a dataset saved with datasets 4.0.0. If you are hitting this issue, make sure that both dataset saving code and the loading code are <4.0.0 or >=4.0.0.\n\n`datasets` 4.0 can load datasets saved using any older version. But the other way around is not always true: if you save a dataset with `datasets` 4.0 it may use the new `List` type that requires 4.0 and raise `ValueError: Feature type 'List' not found.`\n\nHowever issues with lm eval harness seem to come from another issue: unsupported dataset scripts (see https://github.com/huggingface/datasets/issues/7676#issuecomment-3057550659)\n\n> This broke several lm-eval-harness workflows for me and reverting to older versions of datasets is not fixing the issue, does anyone have a workaround?\n\nwhen reverting to an old `datasets` version I'd encourage you to clear your cache (by default it is located at `~/.cache/huggingface/datasets`) otherwise it might try to load a `List` type that didn't exist in old versions",
"All the impacted datasets in lm eval harness have been fixed thanks to the reactivity of dataset authors ! let me know if you encounter issues with other datasets :)",
"Hello folks, I have found `patrickvonplaten/librispeech_asr_dummy` to be another dataset that is currently broken since the 4.0.0 release. Is there a PR on this as well?",
"https://huggingface.co/datasets/microsoft/prototypical-hai-collaborations seems to be impacted as well.\n\n```\n_temp = load_dataset(\"microsoft/prototypical-hai-collaborations\", \"wildchat1m_en3u-task_anns\")\n``` \nleads to \n`ValueError: Feature type 'List' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'Sequence', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']`",
"`microsoft/prototypical-hai-collaborations` is not impacted, you can load it using both `datasets` 3.6 and 4.0. I also tried on colab to confirm.\n\nOne thing that could explain `ValueError: Feature type 'List' not found.` is maybe if you have loaded and cached this dataset with `datasets` 4.0 and then tried to reload it from cache using 3.6.0.\n\nEDIT: actually I tried and 3.6 can reload datasets cached with 4.0 so I'm not sure why you have this error. Which version of `datasets` are you using ?",
"> Hello folks, I have found patrickvonplaten/librispeech_asr_dummy to be another dataset that is currently broken since the 4.0.0 release. Is there a PR on this as well?\n\nI guess you can use [hf-internal-testing/librispeech_asr_dummy](https://huggingface.co/datasets/hf-internal-testing/librispeech_asr_dummy) instead of `patrickvonplaten/librispeech_asr_dummy`, or ask the dataset author to convert their dataset to Parquet"
] | 2025-07-09T18:59:50 | 2025-07-21T10:38:01 | null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
The new changes in 4.0.0 are breaking many datasets, including those from lm-evaluation-harness.
I am trying to revert to older versions, like 3.6.0, to make the eval work, but I keep getting:
``` Python
File /venv/main/lib/python3.12/site-packages/datasets/features/features.py:1474, in generate_from_dict(obj)
1471 class_type = _FEATURE_TYPES.get(_type, None) or globals().get(_type, None)
1473 if class_type is None:
-> 1474 raise ValueError(f"Feature type '{_type}' not found. Available feature types: {list(_FEATURE_TYPES.keys())}")
1476 if class_type == LargeList:
1477 feature = obj.pop("feature")
ValueError: Feature type 'List' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'Sequence', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']
```
### Steps to reproduce the bug
``` Python
import lm_eval
model_eval = lm_eval.models.huggingface.HFLM(pretrained=model, tokenizer=tokenizer)
lm_eval.evaluator.simple_evaluate(model_eval, tasks=["winogrande"], num_fewshot=5, batch_size=1)
```
### Expected behavior
Older `datasets` versions should work just fine as before
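One pitfall mentioned in the comments above: after reverting to a pre-4.0 release, the local cache may still contain 4.0-style `List` features. A cleanup sketch (the path is the library default; deleting the directory removes all cached datasets):
```python
# Clear the local datasets cache before retrying with datasets <= 3.6.0.
import shutil
from pathlib import Path

cache_dir = Path.home() / ".cache" / "huggingface" / "datasets"  # default cache location
shutil.rmtree(cache_dir, ignore_errors=True)
```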
### Environment info
- `datasets` version: 3.6.0
- Platform: Linux-6.8.0-60-generic-x86_64-with-glibc2.39
- Python version: 3.12.11
- `huggingface_hub` version: 0.33.1
- PyArrow version: 20.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.3.0
| null |
{
"+1": 13,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 13,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7676/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7676/timeline
| null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7675
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7675/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7675/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7675/events
|
https://github.com/huggingface/datasets/issues/7675
| 3,216,699,094 |
I_kwDODunzps6_uu7W
| 7,675 |
common_voice_11_0.py failure in dataset library
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/98793855?v=4",
"events_url": "https://api.github.com/users/egegurel/events{/privacy}",
"followers_url": "https://api.github.com/users/egegurel/followers",
"following_url": "https://api.github.com/users/egegurel/following{/other_user}",
"gists_url": "https://api.github.com/users/egegurel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/egegurel",
"id": 98793855,
"login": "egegurel",
"node_id": "U_kgDOBeN5fw",
"organizations_url": "https://api.github.com/users/egegurel/orgs",
"received_events_url": "https://api.github.com/users/egegurel/received_events",
"repos_url": "https://api.github.com/users/egegurel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/egegurel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/egegurel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/egegurel",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[
"Hi ! This dataset is not in a supported format and `datasets` 4 doesn't support datasets that based on python scripts which are often source of errors. Feel free to ask the dataset authors to convert the dataset to a supported format at https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/discussions, e.g. parquet.\n\nIn the meantime you can pin old versions of `datasets` like `datasets==3.6.0`",
"Thanks @lhoestq! I encountered the same issue and switching to an older version of `datasets` worked.",
">which version of datasets worked for you, I tried switching to 4.6.0 and also moved back for fsspec, but still facing issues for this.\n\n",
"Try datasets<=3.6.0",
"same issue "
] | 2025-07-09T17:47:59 | 2025-07-22T09:35:42 | null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I tried to download the dataset but got this error:
```
from datasets import load_dataset
load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[8], line 4
1 from datasets import load_dataset
----> 4 load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True)
File c:\Users\ege_g\AppData\Local\Programs\Python\Python312\Lib\site-packages\datasets\load.py:1392, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, **config_kwargs)
1387 verification_mode = VerificationMode(
1388 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
1389 )
1391 # Create a dataset builder
-> 1392 builder_instance = load_dataset_builder(
1393 path=path,
1394 name=name,
1395 data_dir=data_dir,
1396 data_files=data_files,
1397 cache_dir=cache_dir,
1398 features=features,
1399 download_config=download_config,
1400 download_mode=download_mode,
1401 revision=revision,
1402 token=token,
1403 storage_options=storage_options,
1404 **config_kwargs,
1405 )
1407 # Return iterable dataset in case of streaming
1408 if streaming:
File c:\Users\ege_g\AppData\Local\Programs\Python\Python312\Lib\site-packages\datasets\load.py:1132, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, storage_options, **config_kwargs)
1130 if features is not None:
1131 features = _fix_for_backward_compatible_features(features)
-> 1132 dataset_module = dataset_module_factory(
1133 path,
1134 revision=revision,
1135 download_config=download_config,
1136 download_mode=download_mode,
1137 data_dir=data_dir,
1138 data_files=data_files,
1139 cache_dir=cache_dir,
1140 )
1141 # Get dataset builder class
1142 builder_kwargs = dataset_module.builder_kwargs
File c:\Users\ege_g\AppData\Local\Programs\Python\Python312\Lib\site-packages\datasets\load.py:1031, in dataset_module_factory(path, revision, download_config, download_mode, data_dir, data_files, cache_dir, **download_kwargs)
1026 if isinstance(e1, FileNotFoundError):
1027 raise FileNotFoundError(
1028 f"Couldn't find any data file at {relative_to_absolute_path(path)}. "
1029 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
1030 ) from None
-> 1031 raise e1 from None
1032 else:
1033 raise FileNotFoundError(f"Couldn't find any data file at {relative_to_absolute_path(path)}.")
File c:\Users\ege_g\AppData\Local\Programs\Python\Python312\Lib\site-packages\datasets\load.py:989, in dataset_module_factory(path, revision, download_config, download_mode, data_dir, data_files, cache_dir, **download_kwargs)
981 try:
982 api.hf_hub_download(
983 repo_id=path,
984 filename=filename,
(...)
987 proxies=download_config.proxies,
988 )
--> 989 raise RuntimeError(f"Dataset scripts are no longer supported, but found {filename}")
990 except EntryNotFoundError:
991 # Use the infos from the parquet export except in some cases:
992 if data_dir or data_files or (revision and revision != "main"):
RuntimeError: Dataset scripts are no longer supported, but found common_voice_11_0.py
```
### Steps to reproduce the bug
```python
from datasets import load_dataset

load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True)
```
### Expected behavior
It's supposed to download this dataset.
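A workaround sketch based on the comments above — pin a pre-4.0 release that still supports script-based datasets (and, depending on the exact 3.x version, pass `trust_remote_code=True`):
```python
# Assumes the environment was prepared with: pip install "datasets<=3.6.0"
from datasets import load_dataset

ds = load_dataset(
    "mozilla-foundation/common_voice_11_0",
    "en",
    split="test",
    streaming=True,
    trust_remote_code=True,  # may be required for script-based datasets on 3.x
)
# The dataset is gated, so accepting its terms on the Hub and a logged-in token may also be needed.
print(next(iter(ds)))
```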
### Environment info
Python 3.12, Windows 11
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7675/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7675/timeline
| null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7674
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7674/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7674/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7674/events
|
https://github.com/huggingface/datasets/pull/7674
| 3,216,251,069 |
PR_kwDODunzps6eJGo5
| 7,674 |
set dev version
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7674). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-07-09T15:01:25 | 2025-07-09T15:04:01 | 2025-07-09T15:01:33 |
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7674/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7674/timeline
| null | null | 0 |
{
"diff_url": "https://github.com/huggingface/datasets/pull/7674.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7674",
"merged_at": "2025-07-09T15:01:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7674.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7674"
}
| true |
https://api.github.com/repos/huggingface/datasets/issues/7673
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7673/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7673/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7673/events
|
https://github.com/huggingface/datasets/pull/7673
| 3,216,075,633 |
PR_kwDODunzps6eIgj-
| 7,673 |
Release: 4.0.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7673). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-07-09T14:03:16 | 2025-07-09T14:36:19 | 2025-07-09T14:36:18 |
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7673/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7673/timeline
| null | null | 0 |
{
"diff_url": "https://github.com/huggingface/datasets/pull/7673.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7673",
"merged_at": "2025-07-09T14:36:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7673.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7673"
}
| true |
https://api.github.com/repos/huggingface/datasets/issues/7672
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7672/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7672/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7672/events
|
https://github.com/huggingface/datasets/pull/7672
| 3,215,287,164 |
PR_kwDODunzps6eF1vj
| 7,672 |
Fix double sequence
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7672). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-07-09T09:53:39 | 2025-07-09T09:56:29 | 2025-07-09T09:56:28 |
MEMBER
| null | null | null |
```python
>>> Features({"a": Sequence(Sequence({"c": Value("int64")}))})
{'a': List({'c': List(Value('int64'))})}
```
instead of `{'a': {'c': List(List(Value('int64')))}}`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7672/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7672/timeline
| null | null | 0 |
{
"diff_url": "https://github.com/huggingface/datasets/pull/7672.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7672",
"merged_at": "2025-07-09T09:56:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7672.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7672"
}
| true |
https://api.github.com/repos/huggingface/datasets/issues/7671
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7671/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7671/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7671/events
|
https://github.com/huggingface/datasets/issues/7671
| 3,213,223,886 |
I_kwDODunzps6_hefO
| 7,671 |
Mapping function not working if the first example is returned as None
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/46325823?v=4",
"events_url": "https://api.github.com/users/dnaihao/events{/privacy}",
"followers_url": "https://api.github.com/users/dnaihao/followers",
"following_url": "https://api.github.com/users/dnaihao/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaihao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dnaihao",
"id": 46325823,
"login": "dnaihao",
"node_id": "MDQ6VXNlcjQ2MzI1ODIz",
"organizations_url": "https://api.github.com/users/dnaihao/orgs",
"received_events_url": "https://api.github.com/users/dnaihao/received_events",
"repos_url": "https://api.github.com/users/dnaihao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dnaihao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaihao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dnaihao",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi, map() always expect an output.\n\nIf you wish to filter examples, you should use filter(), in your case it could be something like this:\n\n```python\nds = ds.map(my_processing_function).filter(ignore_long_prompts)\n```",
"Realized this! Thanks a lot, I will close this issue then."
] | 2025-07-08T17:07:47 | 2025-07-09T12:30:32 | 2025-07-09T12:30:32 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
https://github.com/huggingface/datasets/blob/8a19de052e3d79f79cea26821454bbcf0e9dcd68/src/datasets/arrow_dataset.py#L3652C29-L3652C37
Here we can see that the writer is initialized when `i == 0`. However, there can be cases where, in the user's mapping function, the first example is filtered out (length constraints, etc.).
In this case, the writer is still `None` and the code reports that `NoneType` has no write attribute.
A simple fix is available: change line 3652 from `if i == 0:` to `if writer is None:`.
### Steps to reproduce the bug
Prepare a dataset and define this mapping function:
```
import datasets
def make_map_fn(split, max_prompt_tokens=3):
def process_fn(example, idx):
question = example['question']
reasoning_steps = example['reasoning_steps']
label = example['label']
answer_format = ""
for i in range(len(reasoning_steps)):
system_message = "Dummy"
all_steps_formatted = []
content = f"""Dummy"""
prompt = [
{"role": "system", "content": system_message},
{"role": "user", "content": content},
]
tokenized = tokenizer.apply_chat_template(prompt, return_tensors="pt", truncation=False)
if tokenized.shape[1] > max_prompt_tokens:
return None # skip overly long examples
data = {
"dummy": "dummy"
}
return data
return process_fn
...
# load your dataset
...
train = train.map(function=make_map_fn('train'), with_indices=True)
```
### Expected behavior
The dataset mapping should behave correctly even when the first example is filtered out.
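For reference, a self-contained sketch of the pattern suggested in the comments above — keep `map()` total and drop long examples with a separate `filter()` pass (the data and the length proxy are made up for illustration):
```python
from datasets import Dataset

max_prompt_tokens = 3  # assumed limit, mirroring the example above
ds = Dataset.from_dict({"question": ["short one", "a much longer question with many tokens"]})

# map() always returns a dict; the length check moves into filter().
ds = ds.map(lambda ex: {"prompt_len": len(ex["question"].split())})
ds = ds.filter(lambda ex: ex["prompt_len"] <= max_prompt_tokens)
print(ds)
```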
### Environment info
I am using `datasets==3.6.0` but I have observed this issue in the github repo too: https://github.com/huggingface/datasets/blob/8a19de052e3d79f79cea26821454bbcf0e9dcd68/src/datasets/arrow_dataset.py#L3652C29-L3652C37
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/46325823?v=4",
"events_url": "https://api.github.com/users/dnaihao/events{/privacy}",
"followers_url": "https://api.github.com/users/dnaihao/followers",
"following_url": "https://api.github.com/users/dnaihao/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaihao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dnaihao",
"id": 46325823,
"login": "dnaihao",
"node_id": "MDQ6VXNlcjQ2MzI1ODIz",
"organizations_url": "https://api.github.com/users/dnaihao/orgs",
"received_events_url": "https://api.github.com/users/dnaihao/received_events",
"repos_url": "https://api.github.com/users/dnaihao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dnaihao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaihao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dnaihao",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7671/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7671/timeline
| null |
completed
| null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7670
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7670/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7670/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7670/events
|
https://github.com/huggingface/datasets/pull/7670
| 3,208,962,372 |
PR_kwDODunzps6dwgOc
| 7,670 |
Fix audio bytes
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7670). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-07-07T13:05:15 | 2025-07-07T13:07:47 | 2025-07-07T13:05:33 |
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7670/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7670/timeline
| null | null | 0 |
{
"diff_url": "https://github.com/huggingface/datasets/pull/7670.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7670",
"merged_at": "2025-07-07T13:05:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7670.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7670"
}
| true |
https://api.github.com/repos/huggingface/datasets/issues/7669
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7669/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7669/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7669/events
|
https://github.com/huggingface/datasets/issues/7669
| 3,203,541,091 |
I_kwDODunzps6-8ihj
| 7,669 |
How can I add my custom data to huggingface datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/219205504?v=4",
"events_url": "https://api.github.com/users/xiagod/events{/privacy}",
"followers_url": "https://api.github.com/users/xiagod/followers",
"following_url": "https://api.github.com/users/xiagod/following{/other_user}",
"gists_url": "https://api.github.com/users/xiagod/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xiagod",
"id": 219205504,
"login": "xiagod",
"node_id": "U_kgDODRDPgA",
"organizations_url": "https://api.github.com/users/xiagod/orgs",
"received_events_url": "https://api.github.com/users/xiagod/received_events",
"repos_url": "https://api.github.com/users/xiagod/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xiagod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiagod/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xiagod",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[
"Hey @xiagod \n\nThe easiest way to add your custom data to Hugging Face Datasets is to use the built-in load_dataset function with your local files. Some examples include:\n\nCSV files:\nfrom datasets import load_dataset\ndataset = load_dataset(\"csv\", data_files=\"my_file.csv\")\n\nJSON or JSONL files:\nfrom datasets import load_dataset\ndataset = load_dataset(\"json\", data_files=\"my_file.json\")\n\n\nImages stored in folders (e.g. data/train/cat/, data/train/dog/):\nfrom datasets import load_dataset\ndataset = load_dataset(\"imagefolder\", data_dir=\"/path/to/pokemon\")\n\n\nThese methods let you quickly create a custom dataset without needing to write a full script.\n\nMore information can be found in Hugging Face's tutorial \"Create a dataset\" or \"Load\" documentation here: \n\nhttps://huggingface.co/docs/datasets/create_dataset \n\nhttps://huggingface.co/docs/datasets/loading#local-and-remote-files\n\n\n\nIf you want to submit your dataset to the Hugging Face Datasets GitHub repo so others can load it follow this guide: \n\nhttps://huggingface.co/docs/datasets/upload_dataset \n\n\n"
] | 2025-07-04T19:19:54 | 2025-07-05T18:19:37 | null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
I want to add my custom dataset to Hugging Face Datasets. Please guide me on how to achieve that.
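A minimal sketch based on the answer in the comments above — load local files into a dataset, then optionally upload it to the Hub (the file name and repo id are placeholders):
```python
from datasets import load_dataset

dataset = load_dataset("csv", data_files="my_file.csv")   # local CSV file
dataset.push_to_hub("your-username/my-custom-dataset")    # requires `huggingface-cli login`
```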
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7669/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7669/timeline
| null | null | null | null | false |